Retro Gaming Setup: A Beginner's Guide <p>I recently completed a project that I’d been working on for a while and I wanted to share the details of how I did it in case other people might find it useful. After upgrading to a 4K TV, I decided I wanted to get a nice retro gaming setup going where I could play old video games on it. I had some fairly specific goals in mind for this project and I’ve now completed all of those goals so I consider it done (with a few caveats) and thought it was time to write it up.</p> <p>It’s also worth noting that I am not an expert on a lot of this stuff. There are people out there who can talk to you about csync and SCART and mappers and all sorts of technical details about retro gaming, but I’m not one of those people. I didn’t want to get into the weeds on a lot of this stuff, and I had no interest in learning how to solder tiny little components together on 40-year-old gaming systems. This is more of a For Dummies sort of guide because that’s what I am. I very well could be getting things wrong, and nerdbags can feel free to correct me in the comments but I don’t recommend anyone read those comments because who cares? What I did here <em>works</em> and that’s all I ever cared about.</p> <p>Getting this entire system going was probably the dorkiest thing I’ve ever done, and that’s coming from a guy who once wrote a 5,000-word blog post about Star Wars. Well strap in folks, this sucker’s over 10,000 words for, like, Mario and stuff.</p> <p>UPDATE: A lot of these suggestions are out of date. Building a <a href="https://www.youtube.com/watch?v=lVPa5EW5mp8">MiSTer FPGA</a> is a far cheaper way to do exactly the same thing as what I detail in this post. It gives you the same lag-free FPGA-based experience but in a single device that is constantly being updated with new cores. Failing that, the <a href="https://github.com/mattpannella/pocket-updater-utility">Analogue Pocket</a> (which is harder to get ahold of) has a firmware updater utility that downloads ported cores from the MiSTer, and that device has a handy dock that can connect to a TV. Both of those solutions are more comprehensive than all of these different devices I get into here, and vastly less expensive.</p> <h1 id="motivation">Motivation</h1> <p>So first, why did I even want to do this? The impetus was getting a 4K television. It may seem a bit weird to make such a new modern piece of technology the driver for getting way into playing old retro games, but the reason for this is a fairly good window into my mindset going into this project and I think it will help frame things.</p> <h2 id="scaling">Scaling</h2> <p>The original Nintendo Entertainment System (NES) resolution was 256x240, meaning each screen consisted of 256 pixels horizontally and 240 pixels vertically, generally referred to as “240 lines”.</p> <p>If you’re playing something outputting 256x240 on a modern HDTV, then something has to <strong>scale</strong> the signal up. Otherwise, you’ll be looking at a tiny little box surrounded by huge black borders on all sides of your screen. Ideally, you want your television itself to do this scaling, quickly turning a signal with 240 lines into an image with 1080 lines (1080p) in a way that’s so fast it doesn’t introduce any <em>lag</em>. 
Introducing lag is the worst thing you can experience when retro gaming, because older games tended to rely on ultra-quick reflexes more than modern games do, and introducing just a few frames of lag might make some games unplayable.</p> <figure class="image aligncenter captioned"><img src="http://www.rodhilton.com/assets/retro_2xscale.png" /><figcaption><p class="caption">2x integer scaling example</p></figcaption></figure> <p>So let’s imagine an old retro game that was outputting at 480x270 resolution. If we wanted to scale this for display on a 1080p television, it would be incredibly easy. 1080 divided by 270 is exactly 4. So we can turn a single pixel from the old game into a 4x4 grid of identically-colored pixels. This is called “integer scaling” because we’re scaling up each pixel of the game’s output by an integer factor to display it at a higher resolution.</p> <p>The problem arises when you have an output like the NES’s 256x240 output. 1080 divided by 240 is 4.5, not 4 or 5. This means that when scaling up an NES’s output, you have two options. One, you can scale unevenly, so that the first pixel scales to a grid 4 pixels wide, then the next one scales up to 5 pixels wide, then 4, then 5, and so on. This results in pixels having slightly uneven sizes, which when scrolling can result in something called “shimmer”. The other option is to interpolate the overlapping “half” pixel on each side, basically blending them. This results in the image no longer looking perfectly crisp.</p> <p>In the below example, the question block has been scaled in two different ways. The rightmost way is via “smoothing” or interpolation and it results in a blurry image. The middle way is without interpolation and it might seem better, but take a look at the small black dots in the corners of the question block. The top left one is scaled to 4x4, but the top right is scaled to 5x4, the bottom left is 4x5, and the bottom right is 5x5. If this question block were to scroll horizontally, the top-left dot would alternate between being 4x4 and 5x4 on every frame of output, which would cause it to look weird (“shimmer”) as Mario moves.</p> <figure class="image aligncenter captioned"><img src="http://www.rodhilton.com/assets/retro_nonintegerscaling.png" /><figcaption><p class="caption">Non-integer scaling</p></figcaption></figure> <p>Shimmering and interpolation are both too distracting for me, I hate the way both effects look. Retro gaming purists would tell me that I need to get an <a href="https://www.retrorgb.com/rgbmonitors.html">RGB Monitor</a> but I wanted something that will just work with the one TV I own and make games playable in my living room. That was why for years I simply considered retro gaming a problem with no acceptable solution and didn’t bother.</p> <p>The thing that’s so great about a 4K television is that it solves this problem almost entirely. 4K’s vertical resolution, 2160, is evenly divisible by 240. So if you stretch every pixel of an NES’s 256x240 pixel display by an integer scale of 9, you fill 2304x2160 pixels evenly, which leaves some black bars on the left and right side of the image. 2160 is evenly divisible by many common vertical resolutions including 240, 360, 720, and of course 1080.</p> <p>Thus, getting a 4K TV meant that suddenly many retro game systems could output their native resolutions and be integer-scaled on my TV for pixel-perfect output. 
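</p> <p>To make the arithmetic concrete, here’s a quick throwaway sanity check (a Python sketch of my own, nothing that’s part of the actual setup) that runs the same divisibility test for a few common vertical resolutions:</p> <pre><code># Which retro vertical resolutions integer-scale cleanly into 1080p vs. 4K?
sources = [224, 240, 480, 720]            # SNES, NES/Genesis, line-doubled 480p, 720p
targets = {"1080p": 1080, "4K": 2160}

for name, target in targets.items():
    for lines in sources:
        factor = target / lines
        clean = target % lines == 0
        verdict = "integer scale" if clean else "non-integer: shimmer or blur"
        print(f"{lines:4d} lines into {name}: {factor:.2f}x ({verdict})")

# 240 lines into 1080p comes out to 4.50x (the NES problem described above),
# while 240 lines into 2160 (4K) is a clean 9.00x, which is why the new TV fixes it.
</code></pre> <p>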
And so began my adventure.</p> <h1 id="goals">Goals</h1> <p>I imagine the comments of this post will fill up with suggestions that I should have done things their way - one thing I’ve learned is that the retro gaming “community” is extremely opinionated, borderline elitist. So let me spell out extremely clearly what my goals were with this project and why. If you are interested in doing something similar and have similar goals, the details of this post may be useful to you. If you don’t have similar goals, my solutions here likely aren’t going to be useful to you but I wish you the best of luck in your endeavor.</p> <h2 id="1-no-emulators">1. No Emulators</h2> <p>First things first, the most obvious solution to retro gaming is “install an emulator” or “build a Raspberry Pi” or “just get an NES Classic” or “buy a RetroN 5” - none of these solutions work for my goals.</p> <p>Emulators introduce differences that I, unfortunately, do notice - most notably sound that isn’t quite right and lag that I find unacceptable. I played with an NES Classic for about 10 seconds before packing it up and returning it; the lag it introduced made it virtually unplayable, to such a degree that I genuinely don’t know how anyone else can stand it.</p> <p>I’ve had emulators installed on HTPCs before and even worked with great frontends like <a href="https://www.retroarch.com/">RetroArch</a>. I’ve got ROMsets with every retro game ever made at the ready but I never found myself playing any of these old games because I disliked the graphical quirks, sound inaccuracy, and of course the lag, lag, lag, LAG!</p> <p>I wanted to be able to play old games without an emulator, so to pull this project off there could be no reliance on emulation of any kind.</p> <p>I also wanted to be able to use original controllers, or at least controllers that look and feel original enough in my hands. I didn’t want to play Super Metroid with a PS4 controller; it just feels weird.</p> <h2 id="2-no-cartridge-collecting">2. No Cartridge Collecting</h2> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/retro_nothanks.png" /><figcaption><p class="caption">Pictured: not me</p></figcaption></figure> <p>When I was a kid I owned all 6 NES Mega Man games because Mega Man was my favorite franchise. If I wanted to play those again with the physical cartridges, I’d have to buy them on eBay, and the lot of all 6 would cost about $200. I wanted access to all my old games as well as games I never actually owned without filling shelves upon shelves with plastic. I devote enough space in my living room to physical media in the form of Blu-ray discs; I couldn’t justify becoming one of those dudes with a thousand dusty physical cartridges.</p> <p>Even though I grew up playing a lot of these systems, I sold everything associated with them long, long ago and would be starting mostly from scratch. The idea of having to acquire many pounds of physical goods just to catch back up was too daunting.</p> <p>In short, I didn’t want this project to take over my life in some way. I have access to complete ROMsets for most consoles through various online channels, and I wanted to be able to use them. I wanted to be able to seamlessly switch between different games without hunting for cartridges on meticulously alphabetized shelves or dealing with <a href="http://www.thecoverproject.net/">custom-printing plastic cases</a>.</p> <h2 id="3-integrate-with-living-room">3. 
Integrate with Living Room</h2> <p>I wasn’t interested in having a CRT monitor in my house; this whole project needed to integrate well into my living room, which I also use for watching movies and TV shows. This meant playing games on my one 4K TV from the couch, so wireless controllers (with no lag) were ideal as well.</p> <p>I’ve always liked the crisp <em>look</em> of emulators, the pixel-y graphics and blocky sprites. I didn’t want to be messing around with fake scanlines or shaders that smooth jagged edges. The retro <em>look</em> is part of the appeal to me, so I needed everything to look like it was running in an emulator, without actually doing so.</p> <figure class="image alignleft captioned"><img src="http://www.rodhilton.com/assets/retro_shaders.png" /><figcaption><p class="caption">Thanks I hate it</p></figcaption></figure> <p>Additionally, my TV is black and my entertainment center cabinet is black; I wanted everything to look like it belonged without standing out like a sore thumb. It was weirdly important to me that things just “look nice” so I wanted everything to be black, like a modern piece of electronics. I didn’t want this project to turn our TV room into a “video game room” or turn our house into a “weird video game collector house” - I just wanted to play some Zelda.</p> <p>This whole setup needed to be integrated well enough and simple enough to use that my wife could play games without me. My wife is the same age as me, so she also fondly remembers video games and wanted to be able to pick up a controller and play as well.</p> <p>However, I didn’t want to force her to nerd-out on this stuff the way I did just to get things to work; I wanted it to be easy to come downstairs and play a game with little technical knowledge of how things are connected. If she were futzing around with inputs and switches to play The New Tetris, I failed.</p> <h2 id="4-access-to-all-childhood-systems">4. Access to All Childhood Systems</h2> <p>I needed to be able to play every game and every game system I fondly remember from my youth. I was basically done with video games before the GameCube came out, so nothing from that era or later was important. However, my best friend had a Sega Genesis that I’d play at his house all the time, so that was essential. To get specific, here are the systems I wanted access to:</p> <ul> <li>Atari 2600</li> <li>Nintendo Entertainment System (NES)</li> <li>Super Nintendo Entertainment System (SNES)</li> <li>Nintendo Game Boy</li> <li>Sega Genesis</li> <li>Sega Game Gear</li> <li>Nintendo 64</li> </ul> <p>With the goals in mind, I tried many different options but have finally arrived at a setup that I feel checks every single box. I’m going to walk you through my entire setup, providing links and prices where possible to help anyone interested in doing something similar benefit from my many failures.</p> <p>So without further ado…</p> <h1 id="systems">Systems</h1> <h2 id="super-nes">Super NES</h2> <p>The Super Nintendo was hands-down my favorite system growing up. It wasn’t my first or my last system, but all my favorite games were on it. I’m not going to sit here and list all of them, but I believe peak retro gaming happened on the Super Nintendo - every great franchise had what I see as its best game released on the SNES.</p> <h3 id="console">Console</h3> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/retro_supernt.jpg" /></figure> <p>This was one of the easiest systems to get. 
No need for original hardware or modding; just pick up a <a href="https://www.analogue.co/super-nt">Super Nt from Analogue</a> ($189.99).</p> <p>The Super Nt relies on an <a href="https://en.wikipedia.org/wiki/Field-programmable_gate_array">FPGA chip</a>, which I do not consider emulation. Essentially this is a special kind of chip that can be updated via software to be any other kind of chip, and the one in the Super Nt has been programmed to be a Super Nintendo chip. This means that games running on the chip are, for all intents and purposes, running on an actual Super Nintendo chip and that means no emulation and no lag.</p> <p>Analogue sells a black edition of the system so it fits well with my entertainment center, and the unit works perfectly. There’s a jailbreak firmware update that lets the system do a little more, but I consider most of the extra stuff emulation so I don’t bother.</p> <p>Most importantly, the Super Nt directly outputs HDMI at 720p or 1080p resolution. 720 is a 3x integer multiple of 240. The Super NES outputs at 224 vertical lines, so with some small black bars at the top and bottom of the screen, the Super Nt integer scales up to a perfect 720p, which my TV then integer scales again to get to 2160p. No shimmer, no blurring, no interpolation.</p> <p>I actually have my resolution set to 1080p60 which means 1080 lines at 60 frames per second. I do a 5x integer scale and I turn off all Horizontal and Vertical interpolation so nothing should ever blur. Because the SNES only outputs 224 vertical lines instead of the full 240, doing a 5x scale gives me 1120 vertical lines, which is just slightly more than the 1080 available. Thus, this setting cuts off a small amount of the top and bottom of the game’s screen, but the tradeoff is there are no black bars at the top and bottom from doing a 3x scale to 720p. It cuts off about the same amount of the image that a CRT screen used to, so game designers rarely put anything useful in that area, and I prefer to fill the screen a bit more.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/retro_superntsettings.png" /></figure> <h3 id="controllers">Controllers</h3> <p>I went wireless wherever I could, and in this case, the <a href="https://amazon.com/8Bitdo-SN30-Retro-Set-SN-nintendo/dp/B075WRZ6JB/">8BitDo SN30 Gamepad</a> ($39.99) works great. I did not go with the black edition here because the black ones don’t look like the controllers from my childhood, but they do work just as well with the black Super Nt. They look and feel almost exactly like the original SNES controller. Some 8BitDo controllers can be a little hit or miss (the NES one is garbage, for example) but I’ve had no issues with these and aside from some general wireless connectivity hassles they work great.</p> <p>They charge with a standard Micro USB port which means I can use <a href="https://amazon.com/TOPK-Magnetic-Braided-Charging-Charger/dp/B07MYJQKJ6">magnetic charging cables</a> ($18.99). I keep all of my controllers in a little drawer and have them all charging without having to find the right kind of cable, which is great - I put the little magnetic adapters in all of the controllers and then just find an open magnetic cable to hook it to when a controller needs a juice-up.</p> <p>The range of the 8BitDo controller is adequate, battery life is great, and there is virtually zero perceptible lag added.</p> <h3 id="games">Games</h3> <p>There are many flash carts available for the Super Nintendo. 
Flash carts are cartridges that can be used with an actual system but when booted present a menu of games from an SD card, which you can then load as if that game was inserted. They work with real systems and since the Super Nt is effectively a real system, flash carts work with the Super Nt as well.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/retro_sd2snes.jpg" /></figure> <p>For Super Nintendo there are two options: the Super Everdrive and the SD2SNES. Of these, go with the SD2SNES. The reason is that the SD2SNES cart has something the Everdrive doesn’t: its own FPGA chip. Many Super Nintendo games came with extra chips in the cartridge to give the system extra capabilities - most notably Super FX games like Starfox, Yoshi’s Island, and Stunt Race FX, as well as the CX4 chip used in games like Mega Man X2 and Mega Man X3. The Super Nintendo itself didn’t have these chips, it relied on the cartridges to have them, which means the Super Nt FPGA chip doesn’t have them either. The SD2SNES cartridge, however, does have an FPGA chip programmed with all these additional chips, meaning it’s the only flash cart that will allow you to play games that utilize extra chips.</p> <p>Recently a new model of the SD2SNES came out called the SD2SNES Pro which uses an improved FPGA chip, but the best thing is that <a href="https://stoneagegamer.com/flash/snes/carts/sd2snes-pro/na/">Stone Age Gamer</a> sells custom-designed cartridges, meaning you can get a nice black one that can just stay in the Super Nt at all times for $204.99. The entire SNES library will fit on a single 2GB MicroSD Card, which is actually so small it’s hard to find, a <a href="https://smile.amazon.com/SanDisk-Ultra-microSDXC-Memory-Adapter/dp/B073K14CVB">16GB card</a> will only run you $5.</p> <p>One nice thing is that you can access and play an entire library of <a href="https://www.romhacking.net/?page=hacks&amp;genre=&amp;platform=9&amp;game=&amp;category=&amp;perpage=20&amp;order=Downloads&amp;dir=&amp;title=&amp;author=&amp;hacksearch=Go">ROM Hacks</a> with this setup, and the SD card will have plenty of room for alternative versions of games. Most ROM Hacks are pretty stupid graphics hacks turning Mario into a giant penis or whatever, but many hacks are English translations of Japanese RPGs that were never ported to the US, or bugfix releases for broken games with critical flaws. There are even entirely new games that are playable on a flash cartridge, like <a href="http://www.zeldix.net/t1373-zelda3-parallel-worlds-v1-23">The Legend of Zelda: Parallel Worlds</a>.</p> <p>The SD2SNES does indeed give you the ability to save your game state like on an emulator as well, so this entire setup is similar to an emulator but with zero lag.</p> <table class="receipt" rules="groups"> <thead> <tr> <th>| Component</th> <th style="text-align: right">Cost</th> <th> </th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">Super Nt FPGA console</td> <td>$189.99</td> </tr> <tr> <td> </td> <td style="text-align: right">8BitDo SN30 Controllers (x2)</td> <td>$79.98</td> </tr> <tr> <td> </td> <td style="text-align: right">SD2SNES Pro Flash Cart</td> <td>$204.99</td> </tr> <tr> <td> </td> <td style="text-align: right">16GB microSD Card</td> <td>$5.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$479.96</td> </tr> </tfoot> </table> <h2 id="nes">NES</h2> <p>What kind of retro gaming setup would be complete without a Nintendo Entertainment System? 
While the NES didn’t house many of my favorite games, it certainly accounted for how I spent the bulk of my gaming childhood. The NES was the system that got me into gaming before the Super NES perfected everything, and I played it more than any other single system.</p> <h3 id="console-1">Console</h3> <p>Analogue (maker of the Super Nt) made an NES clone called the Nt Mini (its predecessor, the original Analogue Nt, actually recycled chips from real NES systems). They also ran out of stock and the systems are only now available on eBay for over $1,000 each, so that’s a hell no from me.</p> <figure class="image aligncenter captioned"><img src="http://www.rodhilton.com/assets/retro_avs_black.jpg" /><figcaption><p class="caption">I'm an artist.</p></figcaption></figure> <p>Luckily, there is an FPGA-powered NES clone that does for the NES what the Super Nt does for the Super Nintendo: the <a href="https://www.retrousb.com/product_info.php?products_id=78">retroUSB AVS</a> ($185.00). This system is incredibly ugly but it plays NES and Famicom games perfectly using an FPGA chip programmed to behave like an NES chip. It has 4 controller ports built in so there’s no need for an NES Four Score adapter if you want to play one of the almost-no-games for the NES that support 4 players.</p> <p>The AVS outputs at 720p over HDMI, integer scaling the NES’s 240-line vertical resolution perfectly by 3x. It also has options to enable more sprites per scanline (which eliminates some annoying flickering in many NES games you’ve likely noticed) and to hide the left and right sides of the screen, which often contain glitchy graphics that CRT televisions would have cut off.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/retro_paint.jpg" /></figure> <p>You may be wondering where the black option is for this system. Unfortunately, it does not exist - the AVS only comes in an ugly beige color that stands out on my entertainment center. So yes, I painted it.</p> <p>I <a href="https://www.youtube.com/watch?v=ANv72LwZSFs">carefully disassembled the case</a> and removed the beige plastic piece. The entire beige part was a single piece (two if you count the hinged door) so I took the outer case and spray painted it with primer and later matte black spray paint. The result is pretty damn good, and now the unit looks perfect on my entertainment center. This was not as hard as I was expecting; it just took a lot of patience and a few coats of paint. I set up a little cardboard box to spray into and taped the pieces to paper cups so they could air dry without touching anything.</p> <p>I did a truly insane amount of research into the best primer and spray paint to use for this job so let me save you some time: you can use pretty much any spray primer you want (but make sure you use something), but for the paint use Krylon’s Fusion All-In-One Paint+Primer (Flat Black). Yes, even though it includes primer in the can, use a separate primer anyway. My hardware store didn’t carry this exact brand of paint, so I had to go to a specialty paint store for it.</p> <p>I configure my AVS system to automatically boot into the cartridge rather than the menu (I can access the menu with a controller hotkey if I need to), and I tweaked the video settings to my liking. I enable the extra sprites per scanline and I hide the left side of the screen which is where <a href="https://www.youtube.com/watch?v=wfrNnwJrujw">graphical glitches can often occur</a>. 
And of course, I turn off interpolation.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/retro_avs_settings.png" /></figure> <p>One of the weirder settings I enable is that I bump up the “Pixel Aspect” two notches. The reason for this is that when game designers were drawing NES graphics, they were designing them for CRT televisions. CRT screens stretched the pixels out so they were slightly wider than they were tall, meaning pixels weren’t rendered as perfect squares. As a result, whenever game programmers wanted to draw a “circle” they’d actually draw it as a slightly thinner oval, knowing the CRT screen would stretch the image out into a perfect circle.</p> <p>Modern TVs won’t do that, and we’re scaling 240 lines for display at 720 resolution, so we basically have the option of scaling each pixel into a 3x3 grid or a 4x3 grid. The 3x3 grid is perfect square pixels, which means circles will be slightly thin. The 4x3 grid is fatter pixels, which pushes circles back in the direction of how they rendered on CRT televisions. They wind up being just a little too fat, but actual perfection isn’t possible without interpolation so I chose the fatter circles. Yes, I checked, and the fatter circles are further from perfect circles than the thin circles are, but I still prefer their appearance.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/retro_ghostbusters.png" /></figure> <h3 id="controllers-1">Controllers</h3> <p>The <a href="https://www.8bitdo.com/n30-2-4g/">NES 8BitDo controllers</a> are terrible. The D-Pad regularly sends up and down commands when I’m pressing left or right, making most games impossible. And more importantly, they come with 2 extra buttons that the original controller didn’t have, making them feel wrong in my hands.</p> <p>The best option for an NES controller is the <a href="https://shop.8bitdo.com/products/mod-kit-for-nes-controller">8BitDo NES DIY Kit</a> ($19.99). These kits come with a screwdriver to open an original NES controller and it’s pretty easy to slip one of these kits into the shell to have a wireless controller.</p> <p>You can go on eBay and buy some NES controllers but you’ll likely find that they’re in piss-poor condition, yellowed and covered in boogers or something - they’re gross. Luckily, however, the NES Classic Edition only came with a single controller but many two-player games. This means that there are tons of sold-separately <a href="https://amazon.com/Nintendo-Classic-Mini-Entertainment-System-Controller/dp/B01IH5O186/">NES Classic Controllers</a> ($27.99) floating around that are the EXACT shape and size of the original NES controllers, and there’s an <a href="https://shop.8bitdo.com/products/mod-kit-for-nes-classic-controller">8BitDo Mod Kit</a> for those too, also $19.99. You’ll also have to get an <a href="https://shop.8bitdo.com/products/retro-receiver-for-nes">8BitDo Retro Receiver</a> ($24.99) for the controller to connect to your AVS.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/retro_nescontroller.jpg" /></figure> <p>The controller problem is easily the biggest hassle (and cost) of getting a good NES system working, but please do not buy the <a href="https://www.retrousb.com/product_info.php?cPath=36&amp;products_id=154">retroUSB NES controller</a>; it is absolute dreck. It has the wrong shape, the buttons have an unnatural click, and the D-Pad is terrible, all for $65. 
I was grateful to have sold mine on eBay for $20 and I still feel kind of bad for the schmuck that bought it. The <a href="https://amazon.com/Wireless-Controller-Gamepad-Bluetooth-Joystick/dp/B07HRG236D/">8BitDo NES N30</a> wireless controller is marginally better, but the D-Pad is pretty terrible and it feels nothing like an original NES controller in your hand.</p> <p>As painful as it is, an NES Classic Mini Controller, Mod Kit, and Retro Receiver are the way to go, even though it means each controller will cost, in total, $72.97. The other bummer is that the charge port for these controllers is a little custom circular port with a proprietary connector, so the magnetic charger cables won’t work with it.</p> <p>Make sure that you update the firmware for BOTH your receiver and your mod kit. The mod kit’s firmware update port is only accessible when the controller shell is OFF, and unless both firmwares are updated to the latest version the NES Classic Kit won’t be able to communicate with an actual NES Receiver.</p> <p>I only got 2 controllers for this despite the built-in 4-controller support in the AVS. There are only 24 NES games that support 4 players and these are easily the most expensive (and annoying) controllers to deal with. The best 4-player game for NES isn’t even an official NES game, it’s the homebrew game <a href="http://morphcat.de/micromages/">Micro Mages</a>.</p> <h3 id="games-1">Games</h3> <p>There’s really only one game in town here: the <a href="https://stoneagegamer.com/flash/nes/carts/">EverDrive N8</a> ($127.99). On Stone Age Gamer you can get a black cartridge, though it doesn’t matter since it will always be hidden under the door of the AVS system.</p> <p>The N8 takes a regular SD card, and a 2GB card is enough for the entire NES library. SD cards are a little harder to find in such small sizes than microSD cards; a 2GB add-on at Stone Age Gamer is only $8, or you can pick up a <a href="https://smile.amazon.com/Transcend-Class-Speed-Memory-TS4GSDHC4/dp/B003QJV2IG/ref=sr_1_8?keywords=sandisk+4GB+SD+card&amp;qid=1571461637&amp;refinements=p_n_feature_two_browse-bin%3A6518302011&amp;rnid=493964&amp;s=pc&amp;sr=1-8">standard 4GB card</a> for about $10.</p> <p>The N8 supports save states and Game Genie codes if you’re into that sort of thing. 
The AVS also supports game genie codes, though I’ve never tried either and I think they both involve saving a bunch of special text files in various places to get them to work.</p> <table class="receipt" rules="groups"> <thead> <tr> <th>| Component</th> <th style="text-align: right">Cost</th> <th> </th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">retroUSB AVS console</td> <td>$185.00</td> </tr> <tr> <td> </td> <td style="text-align: right">NES Classic Controllers (x2)</td> <td>$55.98</td> </tr> <tr> <td> </td> <td style="text-align: right">8BitDo NES Mod Kit (x2)</td> <td>$39.98</td> </tr> <tr> <td> </td> <td style="text-align: right">8BitDo NES Receiver (x2)</td> <td>$49.98</td> </tr> <tr> <td> </td> <td style="text-align: right">Everdrive N8 Flash cart</td> <td>$127.99</td> </tr> <tr> <td> </td> <td style="text-align: right">4GB SD Card</td> <td>$10.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total w/ 2 controllers</td> <td>$468.93</td> </tr> <tr> <td> </td> <td style="text-align: right">Total w/ 4 controllers</td> <td>$614.87</td> </tr> </tfoot> </table> <h2 id="game-boy">Game Boy</h2> <p>My Game Boy Solution is actually to use my original Super Game Boy cartridge with an <a href="https://stoneagegamer.com/flash/game-boy/system/everdrive-gb/x7/">Everdrive GB</a> ($138). I happened to already have a real Super Game Boy but they’re pretty easy to find on eBay for $20. You can even get one <a href="https://www.gamestop.com/video-games/retro-gaming/super-nintendo/accessories/products/super-nintendo-super-game-boy/111917.html?utm_source=sdi&amp;utm_medium=feeds&amp;utm_campaign=PLA&amp;utm_kxconfid=t9vz73bvj&amp;gclid=Cj0KCQjw6KrtBRDLARIsAKzvQIE4ZxJRN4gpVsTnMqvyUoFgNjmz-h_Oud_n8GEyQ52X3kWKoUz8TFwaAnwxEALw_wcB&amp;gclsrc=aw.ds">at GameStop</a> for $12, though I’m sure that link will be broken by the time you read this and GameStop is out of business.</p> <p>There’s also a Super Game Boy 2 that was only released in Japan and is rarer, but somewhat better as it plays the games at the correct speed (the original Super Game Boy plays games at a 2.4% faster clock speed than it technically should) as well as some extra borders you can draw around the game screen. Those sell on eBay for around $50-$100, but frankly, I don’t see much of the point.</p> <p>Analogue has announced that in 2020 they’re releasing a product called the <a href="https://www.analogue.co/pocket/">Pocket</a> that will play Game Boy, Game Boy Color, Game Boy Advance, Lynx, Neo Geo Pocket Color, and Game Gear games. Supposedly this will also have a dock that lets it hook to a TV via HDMI and use a standard wireless 8BitDo controller. 
I’m very interested in all of this, but somewhat skeptical as well because Analogue kind of tends to announce adapters and accessories for their products that never get released (I’m still waiting for the Game Gear adapter for their Mega Sg).</p> <p>In any case, the Game Boy problem (for non-color games) is solved for now, though I expect to fully replace this section of this post at some point when Analogue releases their Pocket with Dock, assuming it ever actually happens.</p> <table class="receipt" rules="groups"> <thead> <tr> <th>| Component</th> <th style="text-align: right">Cost</th> <th> </th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">Super Game Boy</td> <td>$20.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Everdrive GB</td> <td>$138.00</td> </tr> <tr> <td> </td> <td style="text-align: right">4GB microSD Card</td> <td>$5.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$163.00</td> </tr> </tfoot> </table> <h2 id="sega-genesis">Sega Genesis</h2> <p>I never had a Genesis growing up, but my best friend Ben did. I went over to Ben’s house all the time and played his very limited collection of games. The Sega Genesis always felt like the “cool” system - its mascot had attitude, its Mortal Kombat had blood. I played enough Genesis that I needed it to be represented in this project, even though I never owned one.</p> <h3 id="console-2">Console</h3> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/retro_megasg.jpg" /></figure> <p>Once again, Analogue to the rescue with its <a href="https://www.analogue.co/mega-sg/">Mega Sg</a> ($189.99). And you better believe it comes in black - in fact, 3 different versions are all black.</p> <p>The Mega Sg plays Sega Genesis, Sega Master System, and Sega Game Gear games without missing a beat. It comes with an adapter for Sega Master System games and promises one for Game Gear coming soon, and it’s even compatible with the Sega CD add-on, though we’ll see in a moment why none of that actually matters.</p> <p>Genesis and Sega Master System games output at a variety of different resolutions, but the Mega Sg will handle most of this weirdness for you and send a clean 720 signal out over HDMI. Personally, I’ve noticed that the sound chip in the Mega Sg isn’t quite right, but I’ve always thought the Genesis had terrible sound so this doesn’t bother me too much.</p> <p>I set the Mega Sg to output at 720p60, and use 3x integer scaling for games running in x320 resolution and x256 resolution (some games use different resolutions, the Genesis was a real duct-tape-and-baling-wire kind of a system). I also, of course, turn off all horizontal and vertical interpolation.</p> <p>I have toyed around with another setting, which is outputting at 1080p60 resolution and setting the integer scaling to 5x in all dimensions. The net effect of this is that the top and bottom of the game are just very slightly cut off, but the screen is filled, just like with the Super Nt. Since CRTs often cut off about the same amount of space, games rarely put anything useful in this zone, and it looks quite nice. 
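</p> <p>If you’re curious how much actually gets trimmed by that 5x-into-1080p trick, the arithmetic is small enough to check in a few lines (another throwaway Python sketch of my own; it assumes the typical 224-line active picture that most SNES and Genesis games use):</p> <pre><code># How much of the picture spills past the frame when forcing a 5x scale into 1080p?
active_lines = 224     # typical SNES/Genesis active picture height (assumption; some games use 240)
scale = 5
frame = 1080

scaled = active_lines * scale          # 1120 output lines
overflow = scaled - frame              # 40 output lines don't fit
source_lines_lost = overflow / scale   # 8 source lines, split between top and bottom
print(f"{overflow} of {scaled} lines cut ({overflow / scaled:.1%}), "
      f"about {source_lines_lost / 2:g} source lines off each edge")
# 40 of 1120 lines cut (3.6%), about 4 source lines off each edge
</code></pre> <p>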
I go back and forth on these settings, I haven’t picked my preference yet but <a href="https://www.youtube.com/watch?v=vq5eQhCN6Co">My Life In Gaming has a great video</a> going over all the Mega Sg settings that can help you decide, though I obviously disagree with their suggestion you leave the default 4.5x scaling setting enabled.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/retro_megasg_settings.png" /></figure> <h3 id="controllers-2">Controllers</h3> <p>There are no 8BitDo mod kits for the original 3-button Sega Genesis controller that I most strongly associate with playing Genesis games. However, I actually always hated the feel of the Genesis controller so I’m making a slight exception to my rules for this one.</p> <p>Here I just recommend the <a href="https://amazon.com/8Bitdo-Wireless-Gamepad-Original-Genesis-Drive/dp/B07HB1XFQW/ref=sr_1_3?">8BitDo M30 Gamepad for Sega Genesis</a> ($24.99) which comes with the Genesis receiver. There is a mod kit for a 6-button Megadrive controller but it just seems like too much work for a controller I never liked - and the nice thing is that the 8BitDo controller comes with an extra button in the middle that you can map to pulling up the Mega Sg menu system.</p> <p>I prefer the D-Pad of the 8BitDo controller much more over any official Sega controller, which I’m sure is no good for Sega purists but since I never actually owned this system, I’m not as attached to the feeling of the controller as I am for other systems.</p> <h3 id="games-2">Games</h3> <p>There is indeed a <a href="https://stoneagegamer.com/flash/genesis/carts/x7/">Mega Everdrive</a> ($165.99) that will play Genesis, Master System, and Game Gear ROMs on the Mega Sg (so you don’t need to use the Sega Master System adapter or buy a Master Everdrive for those). And yes, it comes in black.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/retro_megasd.jpg" /></figure> <p>However, there’s also something pretty new in this space called the <a href="https://shop.terraonion.com/en/products/16-megasd_megacd_segacd_fpga_cartridge.html">Mega SD from Terraonion</a> (€232.00, about $260). This sucker comes with an FPGA chip that implements, somehow, the entire Sega CD processor. This means that even without the Sega CD attachment, you can play Sega CD images off the cartridge directly in the Mega Sg. Just like the Mega Everdrive, it will load standard Genesis, Master System, and Game Gear games, but the ability to play Sega CD games is just way too cool to pass up - that was a system Ben didn’t even have!</p> <p>Springing for this option adds an extra $100 to the cost of the endeavor, not to mention that you’ll need a monster <a href="https://amazon.com/dp/B073JYC4XM/ref=twister_B07B3MFBHY?_encoding=UTF8&amp;psc=1">128GB microSD card</a> ($19) to store the entire Sega CD library in addition to Genesis, Master System, and Game Gear ROMs. Be very careful formatting this cart and follow the instructions with your Mega SD, improperly formatting the card will cause confusing and frustrating issues. The manual also gives specific instructions on some settings you need to change for the Mega Sg in particular, follow those or you won’t have sound.</p> <p>Stone Age Gamer lists their variant of the Mega SD coming soon, but I imagine they’ll offer different shell colors and whatnot when it’s available. 
To be brutally honest, the menu system and presentation alone make the Mega SD worth it over the Mega Everdrive.</p> <table class="receipt" rules="groups"> <thead> <tr> <th>| Component</th> <th style="text-align: right">Cost</th> <th> </th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">Mega Sg FPGA Console</td> <td>$189.99</td> </tr> <tr> <td> </td> <td style="text-align: right">8Bitdo Genesis Controllers (x2)</td> <td>$49.98</td> </tr> <tr> <td> </td> <td style="text-align: right">Mega SD Flash Cart</td> <td>$260.00</td> </tr> <tr> <td> </td> <td style="text-align: right">128GB microSD card</td> <td>$19.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$518.97</td> </tr> <tr> <td> </td> <td style="text-align: right">Total w/ Everdrive instead of Mega SD</td> <td>$415.96</td> </tr> </tfoot> </table> <h2 id="nintendo-64">Nintendo 64</h2> <p>The Nintendo 64 has my wife’s two favorite video games: The New Tetris and Dr. Mario 64. As such, it’s a system where it was most essential that it be easy to work with and that it supported all 4 players without a problem.</p> <h3 id="console-3">Console</h3> <p>There is no FPGA-powered Nintendo 64 clone unit, at least not right now. So the only game in town is an actual original Nintendo 64. I had one of these already, but I wanted an original gray one because that was as close as I could get to black, and I was able to snag one on eBay with 4 working controllers for about $130.</p> <p>The N64 outputs composite only and looks pretty crappy. The main issue with this is that if you hook it up to a modern television, the TV will not see the 240p signal as 240p, but instead will see it as 480i, which will make it do all sorts of terrible things to the image.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/retro_tink.jpg" /></figure> <p>One solution is to first run it through a simple scaling unit; the <a href="https://www.retrorgb.com/retrotink2x.html">RetroTINK 2x</a> ($100) does a great job. It basically takes the incoming 240p signal and just doubles each line (hence the 2x), doing a 2x integer scale with zero lag, and outputs a 480p signal that a modern TV can accept. The RetroTINK will actually do this with any other retro console too, like if you’re messing around with a Neo Geo or a TurboGrafx-16 or something. It’s a nice handy little piece of equipment, but it does suffer from one fatal flaw.</p> <p>The RetroTINK requires its own separate power source, which means it’s always on. When it’s not receiving a signal to convert, it outputs a color test pattern. This is likely fine for most people, but as we’ll see in a later section of this writeup I’m hooking all of these units to an active HDMI switcher that switches input based on what it detects as sending a signal. Since the RetroTINK always outputs the rainbow test pattern, the switcher sees it as always on, which means my wife has an extra button to press to use the system and has to mess with inputs. This violates my “wife-friendly” rule, and in particular it does so on the very system she is most likely to use, which is no good.</p> <p>There’s something new called the <a href="https://www.retrorgb.com/unveiling-the-rad2x-hdmi-cables.html">RAD2x</a> (£47.99, about $63) that is, as of this writing, available only for pre-order. This is essentially a cable with a tiny RetroTINK 2x inside of it, which hooks directly to the N64 and sucks just enough amperage off the wires to power the embedded RetroTINK. 
The result should theoretically be something that doesn’t send any signal until the unit is turned on, allowing it to play well with active HDMI switchers like mine.</p> <p>Whatever you do, don’t go for the <a href="https://castlemaniagames.com/products/eon-super-64">EON Super 64</a> ($150). It’s half again the price of the RetroTINK and well over double the price of the RAD2x, and it does the exact same thing. It was a little tempting for me (in fact, I did buy one) because it played better with the active HDMI switcher than the standard RetroTINK did, but with the RAD2x it simply serves no purpose, and there are reports of it shorting out N64s and causing damage to them.</p> <p>Of course, none of these are what I did. The best-looking solution to the N64 problem is the <a href="https://www.retrorgb.com/ultrahdmi.html">UltraHDMI</a>, which is a kit and process for physically modding your N64 to give it an actual HDMI port. The <a href="https://www.game-tech.us/product/ultrahdmi/">UltraHDMI Kits</a> ($165) are currently sold out, but I was able to find some folks in a <a href="http://n64.reddit.com">small Nintendo 64 community</a> where people had some and were willing to install the mod for a small fee ($100). I sent the N64 off and a <a href="https://twitter.com/CZroe">super cool dude</a> modded it for me, sending back a perfect UltraHDMI-modded N64.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/retro_ultrahdmi.png" /></figure> <p>To be honest, the graphical difference between the UltraHDMI output and the scaled-up composite image through the RetroTINK is virtually nonexistent. Nintendo 64 games don’t look great anyway, and because of the 3D nature of the games, the crispness of the pixels isn’t as important as it is for something like a Super Nintendo. The only reason this mattered to me was that the game we play most often on Nintendo 64, Dr. Mario 64, uses 2D sprites and thus is one of the very few games where the graphical difference is somewhat noticeable. I’m very happy with my UltraHDMI-modded N64 but frankly, I don’t think it’s worth the trouble over the RetroTINK 2x, especially the RAD2x variant that’s less than $70 and works with an active switcher just as well as mine.</p> <p>So while I completed this project relying on an UltraHDMI mod as well as the kindness of strangers, I don’t recommend that route. And though I do not myself own a RAD2x, I do have a RetroTINK, and if the RAD2x does what the RetroTINK does (but preserves an active signal for switching) then that’s what I recommend going with for an N64. The great part of this solution is that it can work with your existing N64 with no changes, so if that’s all you’re looking to hook up it’s a super cheap solution.</p> <p>All of that being said, I would be remiss if I didn’t point out that the RAD2x performs a 2x integer scale on the 240p N64 image, but the resulting 480p signal does NOT integer-scale to a modern television. 2160 is not evenly divisible by 480, so your TV is going to have to do some interpolation to scale the 480p signal. Again, for the N64, which doesn’t use pixel sprites in many games, this is unlikely to matter and I still recommend the RAD2x as the solution. 
But if you’re like me and really tweak out about this integer scaling thing, you’ll need an <a href="https://www.retrorgb.com/ultrahdmi.html">UltraHDMI</a> which can 3x scale the 240p resolution to 720p, which can then be integer-scaled by your television 3x to get a 2160p signal.</p> <h3 id="controllers-3">Controllers</h3> <p>You ever go to a friend’s house to play some Nintendo 64 and he gives you the MadCatz controller? Feels bad, right? It’s not fair he gets the good Nintendo controller and you have this dumb transparent piece of shit. That’s because the only good controllers for Nintendo 64 are official controllers, so just get those.</p> <p>There are no decent wireless options; Hyperkin keeps teasing <a href="https://nintendosoup.com/hyperkin-releasing-the-admiral-a-wireless-n64-controller/">a wireless controller</a> but even if that’s ever released, it probably won’t be as good as the original controller. There are <a href="https://www.hyperkin.com/n64-replacement-joystick-repair-box.html">replacement parts for the joysticks</a> and even <a href="https://www.hyperkin.com/n64-replacement-joystick-repairbox.html">joysticks that will make them feel like GameCube sticks</a>.</p> <p>My suggestion is to just get controllers on eBay; there’s an absolute ton of them and they generally won’t run more than $25 apiece. You can pick up an <a href="http://retro-bit.com/n64-controller-extension-cable.html">N64 Controller Extension Cable</a> ($9.99) to get it to your couch like I wanted to; I bought a cheap <a href="https://amazon.com/eLUUGIE-Replacement-Extension-Nintendo-Controller/dp/B073ZC8RNW?">pack of 4</a> for $11.59.</p> <h3 id="games-3">Games</h3> <p>The <a href="https://stoneagegamer.com/flash/nintendo-64/carts/everdrive64-x7/">Everdrive 64</a> will run you $179.99 and play every single Nintendo 64 game ever made. It’s got Gameshark code support as well. The biggest headache you’ll encounter is getting your save games off your original Nintendo 64 cartridges and onto the SD card - a process I never got working even after buying lots of weird accessories for it. I wound up just losing our progress in The New Tetris (we had unlocked <a href="https://www.youtube.com/watch?v=uTwtf5ZYxvI">all of the Wonders</a>, which takes absolutely forever) and starting over, which was a bummer.</p> <p>A 16GB card will store the entire N64 library and should run you about $15.</p> <table class="receipt" rules="groups"> <thead> <tr> <th>| Component</th> <th style="text-align: right">Cost</th> <th> </th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">Nintendo 64 + 4 Controllers</td> <td>$130.00</td> </tr> <tr> <td> </td> <td style="text-align: right">RAD2x N64 to HDMI Cable</td> <td>$63.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Controller Extension Cables</td> <td>$11.59</td> </tr> <tr> <td> </td> <td style="text-align: right">Everdrive 64</td> <td>$179.99</td> </tr> <tr> <td> </td> <td style="text-align: right">16GB SD card</td> <td>$15.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$399.58</td> </tr> <tr> <td> </td> <td style="text-align: right">Total w/ UltraHDMI instead of RAD2x</td> <td>$601.58</td> </tr> </tfoot> </table> <h2 id="atari-2600">Atari 2600</h2> <p>The Atari 2600 was the first video game system I ever played. The first time I played Pac-Man wasn’t in an arcade, it was on the Atari 2600. I owned the original E.T. cartridge. 
Some of my fondest memories are of the Atari 2600, so it was critical that I have one working for this project. This also turned out to be the last system I got to a state I was happy with, and it was easily the most difficult part of the project.</p> <h3 id="console-4">Console</h3> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/retro_atarivader.jpg" /><figcaption><p class="caption">Atari 2600, 'Vader' 4-switch variant</p></figcaption></figure> <p>Once again, there are no FPGA Ataris out in the world, and all of the variants of ‘Atari flashback’ systems are indeed emulators. Further, most Atari 2600 units used RF output to hook directly to old television sets; they didn’t even support basic Yellow/Red/White RCA outputs. This means there’s basically no way around modding an Atari 2600 console if you want to play it on a modern television. Because Ataris are so old and cheaply made, they often have busted capacitors or leaks that create additional problems. You can pick one up on eBay for $40.</p> <p>There are <a href="https://atariage.com/2600/archives/consoles.html">tons of variants</a> of the Atari 2600 with stupid confusing nicknames like “Heavy Sixer” and “Vader”. My first attempt to solve this was just buying an Atari 2600 Jr (it’s small and black) that had been modded with composite out. Unfortunately, this unit resulted in really bad <a href="https://www.retrorgb.com/tag/jailbars">Jailbars</a>. I talked to the seller, but he was simply reselling a unit that he’d bought already modded; he didn’t do the mod himself and had no idea why I was seeing the jailbars - he was hooking his unit to a CRT display rather than a modern TV like I was.</p> <p>I actually bought a <em>different</em> pre-modded unit on eBay after this, a 4-switch Vader unit modded with S-Video output. The jailbars were gone, but the colors were terrible and the refresh rate was garbage. I was close to giving up when I discovered <a href="https://etim.net.au/2600rgb/">Tim Worthington’s Atari2600 RGB Mod</a>. This was a mod that allowed an Atari to output a signal called RGB, which was much, much higher quality than composite or S-Video. Of course, I lacked the technical soldering skills to pull off this mod without risking the Atari, but I found that <a href="https://www.facebook.com/ifixretro">iFixRetro</a> would do it for $100. I just needed to order and send the appropriate <a href="http://etim.net.au/shop/shop.php?crn=207&amp;rn=553&amp;action=show_detail">2600RGB mod kit</a> (AU$77.00, about $55) and my Atari. Ben (different Ben, not the one I knew as a kid who owned a Genesis) did an amazing job and promptly sent back my modded Atari with an RGB out port, and it worked great.</p> <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Etim’s 2600RGB board installed on an Atari 2600 Junior console! Play your games in pixel perfect display with the ability to swap color palettes with a push of a button! <a href="https://t.co/gQ5SjCBWjX">pic.twitter.com/gQ5SjCBWjX</a></p>&mdash; iFixRetro (@iFixRetro) <a href="https://twitter.com/iFixRetro/status/1182481444834295809?ref_src=twsrc%5Etfw">October 11, 2019</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> <p>Unfortunately, that’s not the end of the story. I still had to hook the unit to a TV. 
For this, I went a bit overkill and used a <a href="https://amazon.com/Micomsoft-FBA_DP3913547-Framemeister-XRGB-Mini/dp/B00QUBK6RK">Micomsoft Framemeister xRGB Mini</a> ($410), an upscaler unit that many retro game aficionados swear by. I actually bought this unit when I first started this whole project because I was using it to try and upscale various other signals like the N64, but in the end the only thing I needed it for was the Atari, since the mod outputted an 8-pin Mini-DIN xRGB port which was exactly what the Framemeister takes as RGB input. With a <a href="https://www.walmart.com/ip/IEC-M1521-8-Pin-Mini-Din-Male-to-Male-Straight-Through-Cable-6/179693382">straight-through Mini-DIN 8pin cable</a> ($16) I could get the signal from the Atari to the TV.</p> <p>The Framemeister always outputs a blue screen signal, which doesn’t play well with the active HDMI switcher, but that’s alright in this one case - I’ll explain why in a later section. It’s also worth mentioning that the Framemeister is a Japanese product and both the remote and the on-screen menu system are in Japanese. With a firmware update, you can get the Framemeister menus to be in English, and there’s an <a href="https://www.retrogamingcables.co.uk/xrgb-mini-framemeister-english-remote-control-overlay-lexan-black">English remote overlay</a> (£4.99, about $6.50) if you want to get fancy.</p> <p>I’m not going to detail my Framemeister settings here because the upscaler unit simply has tons and tons and tons of settings, it really is the end-all-be-all of retro gaming upscaler units which is why geeks like it so much. Basically, I just messed with settings until it looked nice - I’m using it for an Atari 2600 so it’s not going to come out looking like the Mona Lisa no matter what I do.</p> <h3 id="controllers-4">Controllers</h3> <p>You can get Atari 2600 controllers on eBay for about $10 each, good condition paddle controllers are a bit harder to find but run for around $20. Retrobit makes <a href="http://retro-bit.com/controllers/atari-2600.html">replacement controllers</a> ($16.99) but please avoid them, they are absolute garbage that will literally snap apart in your hand and make an annoying clicking noise that the original controller never had. Plus the button is the wrong color and feels wrong to press. <a href="https://smile.amazon.com/Controller-Extension-Cable-Atari-2600-Joystick/dp/B0733QY5TW">Extension cables</a> run about $10 each. I think eBay is the best bet, but I will admit that about 50% of the controllers I get are crap, so you have to buy lots before you get enough decent ones - this is even more true of the paddle controllers.</p> <p>Some old wireless Atari 2600 controllers are floating around out there but they are best avoided. You used to be able to use a Genesis controller with an original Atari, but sadly the 8BitDo wireless controllers do not seem to work with an original Atari, I think it might be a power issue.</p> <h3 id="games-4">Games</h3> <p>The Atari 2600 flash cartridge is called the <a href="https://harmony.atariage.com/Site/Harmony.html">Harmony</a>. The latest version is called the <a href="https://harmony.atariage.com/Site/Order_Encore.html">Harmony Encore</a> ($69.00) which is exactly like the original Harmony but can play homebrew games larger than 512K and it’s only $20 extra. 
What’s $20 when you bought a $400 Framemeister to play Frogger?</p> <p>You’ll need an SD card but the entire Atari library will fit on a small piece of paper so the smallest SD card you can find will do.</p> <table class="receipt" rules="groups"> <thead> <tr> <th>| Component</th> <th style="text-align: right">Cost</th> <th> </th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">Atari Console</td> <td>$40.00</td> </tr> <tr> <td> </td> <td style="text-align: right">2600RGB Mod</td> <td>$55.00</td> </tr> <tr> <td> </td> <td style="text-align: right">iFixRetro Modding Service + shipping</td> <td>$130.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Framemeister Upscaler unit</td> <td>$410.00</td> </tr> <tr> <td> </td> <td style="text-align: right">8-Pin Mini-DIN cable</td> <td>$16.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Various Controllers</td> <td>$60.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Controller Extension Cables (x2)</td> <td>$20.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Harmony Encore Flash Cart</td> <td>$60.00</td> </tr> <tr> <td> </td> <td style="text-align: right">2GB SD Card</td> <td>$10.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$801.00</td> </tr> </tfoot> </table> <p>That’s right, the ugliest and oldest system in this entire project was the most expensive, nearly double the next most expensive system. I hope my wife never sees this post.</p> <p>I keep pestering the guys who made the RAD2x cables to make one that takes 8-Pin xRGB Mini-DIN, which would remove the dependency on the Framemeister and reduce this cost by about $350. Unfortunately, there is no way around the modding of the Atari that I am aware of, and in between having my mod done and writing this blog post, the 2600RGB Mod Kit has completely sold out and I’m not aware of any useful alternative.</p> <h1 id="putting-it-all-together">Putting it All Together</h1> <p>Okay, so how do I connect all of this antiquated junk together? I have a standard receiver but it only accepts 10 HDMI inputs. After taking out the Playstation 4, Switch, 4k Blu-Ray player, Shield TV, Computer, and VCR (yes, really), I’m left with 4 slots available and 5 systems to connect.</p> <p>Furthermore, I don’t actually want these units going through the receiver. The receiver connects to HDMI 2 on my television set, and I don’t set HDMI 2 to be in “Game Mode” because I enable some processing to make movies look better, and that’s what I do with the TV most of the time.</p> <p>What I needed was some kind of HDMI switch unit that would introduce no lag and let me connect to HDMI 3 on the TV, then set my receiver to use the TV’s audio out to play sound through my speakers. I can set HDMI 3 on the TV to always be in Game mode with no processing, nearly eliminating lag. I can also set HDMI 3 to not use “Deep Color” (HDR) which plays weird with the Framemeister.</p> <p>The switch unit I went with is the <a href="https://amazon.com/gp/product/B005S0YNNM">IOGEAR 8-Port HDMI Switch</a> ($153). This switch is an “Active Switcher” which means that if it’s sitting on Input 3 for example, and all of a sudden Input 4 becomes active because the console attached to that input is turned on, it will automatically switch to Input 4. 
This is perfect because it means that, once everything is “on,” merely pressing the physical Power button on one of these consoles will automatically switch to it, satisfying the wife-friendliness parameters of the project.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/retro_switch.jpeg" /></figure> <p>It’s also one of the very few HDMI switches I found that supports more than 4 ports (I need 5, plus I’ll need a 6th when the Pocket comes out). This switch also comes with a remote control that allows me to set an input. The reason this is useful is that it allows me to program a single ‘Play Retro Game’ activity on my Logitech Harmony universal remote control that turns on the TV, tunes it to HDMI 3, sets the receiver to TV Audio, turns on the IOGear switch, and sets the IOGear to Input 8. Input 8 is what I have the Atari’s Framemeister hooked to, so when the whole system comes up there’s a friendly blue screen indicating everything is online.</p> <p>At that point, no matter what game system you physically turn on to play, the correct thing will happen. If you turn on the Atari, it’s already on Input 8 so it just works. If you turn on anything else, because those consoles aren’t sending HDMI signals until they’re powered on, the act of powering one on will cause the switcher to change inputs to whatever you’re playing.</p> <p>The other nice bonus of this approach is that my TV has a weird habit of switching back to HDMI 2 if it doesn’t detect a signal on HDMI 3. If you’re not quick enough turning on an actual console, the universal remote and the TV can get out of sync, but the Framemeister broadcasting a blue screen signal on Input 8 ensures this doesn’t happen.</p> <p>The final component is a strange one: it’s an <a href="https://amazon.com/gp/product/B004F9LVXC">HDMI splitter</a> ($18.50) which simply takes a single HDMI input and splits it to two HDMI outputs. You are almost certain not to need this, but I wound up needing it.</p> <p>HDMI is a fairly tricky interface; one major component of it is a handshaking process used to ensure only certain kinds of media are sent to certain kinds of devices, a process referred to as HDCP, for High-bandwidth Digital Content Protection. The issue is that some of these devices are outputting HDMI from little custom circuit boards and other hacker-y type tools, and I often encountered problems where my LG OLED C8 4K television would fail to properly handshake with a device and would just display ‘Invalid Format’ on the screen. I searched all over the web and found almost nothing useful about why my TV would say this - the error message almost implies it’s a resolution issue or something.</p> <p>I finally determined after much troubleshooting that the LG TV was not properly negotiating the HDCP handshake with the console - a problem found most frequently with my UltraHDMI board (it did this with two different boards). My solution was simply to find a low-latency device that could connect directly to the TV, negotiate an HDMI handshake with it, and keep its connection active while different systems were powered on and off downstream. The HDMI splitter does exactly this (plus, it gives me a way to capture my gaming sessions to a laptop if I were so inclined, which I am not).
This splitter permanently removed the Invalid Format error by sitting between my HDMI switcher and the TV.</p> <p>If you don’t go with the UltraHDMI for an N64, or you don’t have my specific television, you’re unlikely to need this splitter at all, but I’m mentioning it here because I certainly did.</p> <p>Most of the newer consoles from Analogue came with their own HDMI cables, but I sprung for some <a href="https://www.monoprice.com/product?p_id=24187">Ultra Slim HDMI cables</a> ($5 each) for everything to ensure they were all black and thin enough to not be obtrusive, which was useful given how many extra cables were going all over the place to put this project together. I needed 5 for the 5 consoles, 1 from the switch to the splitter, and 1 from the splitter to the TV.</p> <table class="receipt" rules="groups"> <thead> <tr> <th> </th> <th style="text-align: right">Component</th> <th>Cost</th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">IOGear 8-Port HDMI Switch</td> <td>$153.00</td> </tr> <tr> <td> </td> <td style="text-align: right">HDMI Splitter</td> <td>$18.50</td> </tr> <tr> <td> </td> <td style="text-align: right">Ultra Slim HDMI Cables (x7)</td> <td>$35.00</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$206.50</td> </tr> </tfoot> </table> <h2 id="grand-total">Grand Total</h2> <p>Alright, so how much did I spend on all this dumb crap?</p> <table class="receipt" rules="groups"> <thead> <tr> <th> </th> <th style="text-align: right">System</th> <th>Cost</th> </tr> </thead> <tbody> <tr> <td> </td> <td style="text-align: right">Super NES</td> <td>$479.96</td> </tr> <tr> <td> </td> <td style="text-align: right">NES</td> <td>$468.93</td> </tr> <tr> <td> </td> <td style="text-align: right">Game Boy</td> <td>$163.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Sega Genesis</td> <td>$518.97</td> </tr> <tr> <td> </td> <td style="text-align: right">Nintendo 64</td> <td>$601.58</td> </tr> <tr> <td> </td> <td style="text-align: right">Atari 2600</td> <td>$801.00</td> </tr> <tr> <td> </td> <td style="text-align: right">Connectivity</td> <td>$206.50</td> </tr> </tbody> <tfoot> <tr> <td> </td> <td style="text-align: right">Total</td> <td>$3,239.94</td> </tr> </tfoot> </table> <p>Yikes, kind of wish I didn’t do all that math just now. I’m not including the price of the TV itself because I would have gotten that anyway, and also it would be depressing.</p> <p>All this being said, I think the Atari 2600 was way too much because of the Framemeister, and it’s very possible there’s a cheaper alternative. I think an <a href="https://smile.amazon.com/dp/B07QF95QP3">OSSC (Open Source Scan Converter)</a> ($179.99) and a <a href="https://retro-access.com/products/8-pin-male-mini-din-to-male-rgb-scart">Mini-DIN to SCART cable</a> ($25) would have worked just as well and shaved about $200 off the overall price tag - I largely went with the Framemeister because I already had it from when I first started this entire project.
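<p>For anyone who wants to double-check the receipt math, or swap in their own prices, the totals above boil down to a few lines of Python. The savings estimate assumes the OSSC and SCART cable prices I just linked, so treat it as a ballpark figure rather than a quote:</p>
<pre><code class="language-python">
# Sanity-checking the receipts above (prices as listed; yours will vary).
systems = {
    "Super NES": 479.96,
    "NES": 468.93,
    "Game Boy": 163.00,
    "Sega Genesis": 518.97,
    "Nintendo 64": 601.58,
    "Atari 2600": 801.00,
    "Connectivity": 206.50,
}
print(f"Grand total: ${sum(systems.values()):,.2f}")   # $3,239.94

# Rough savings if the Framemeister ($410) and its Mini-DIN cable ($16)
# were swapped for an OSSC ($179.99) plus a Mini-DIN-to-SCART cable ($25).
framemeister_route = 410.00 + 16.00
ossc_route = 179.99 + 25.00
print(f"Approximate savings: ${framemeister_route - ossc_route:,.2f}")  # about $221
</code></pre>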
And again, the UltraHDMI in the N64 was massive overkill, a RAD2x adapter could easily shave off another $200.</p> <p>Overall, someone could accomplish exactly what I did with this project by learning from my mistakes and spending under $3,000.</p> <h1 id="closing-remarks--remaining-work">Closing Remarks &amp; Remaining Work</h1> <figure class="image aligncenter captioned"><img src="http://www.rodhilton.com/assets/retro_setup.png" /><figcaption><p class="caption">The whole shebang</p></figcaption></figure> <p>I still don’t love my Atari setup, the Framemeister feels clunky and I secretly wish I could get the Atari to play nicer with the active switcher and only send a signal when it’s turned on. And like I mentioned earlier, I’ll be jumping at the chance to replace my Super Game Boy with a more permanent portable solution that can add Game Boy Color and Game Boy Advance games to my roster.</p> <p>I’d definitely love to see a decent wireless N64 controller, and I’d be happy to replace my extremely expensive NES wireless controllers with something that feels like a real NES controller but uses a standard USB port to charge instead of the weird custom thing the mod kit uses.</p> <p>I also don’t have a way to play old Intellivision games. I actually did have access to an Intellivision when I was a kid, but I really have no idea how something like that could even work with my hatred for physical cartridges since each game came with a controller overlay that was basically essential for playing.</p> <p>The Mega Sg does not currently support 32X games due to how the Sega 32X add-on was implemented, but I’d love to add that capability to this entire setup should some kind of adapter or add-on ever get released that does not rely on emulation.</p> <p>These are all minor tweaks and improvements though, I largely consider this entire effort “done” and I’m not seeking to sink any more money into it right now. I can go downstairs and pick up and play any old game from any old system I remember as a child, and I really love it. 
I learned a lot through this whole process (but not too much) and it was a blast walking down memory lane as I got everything working.</p> <h1 id="further-resources">Further Resources</h1> <p>I found a lot of web sites and YouTube channels super helpful getting up to speed on all this stuff, you might find them useful as well.</p> <p>Shopping:</p> <ul> <li><a href="https://www.retrousb.com/index.php">RetroUSB</a></li> <li><a href="https://www.analogue.co/">Analogue</a></li> <li><a href="https://www.retrogamingcables.co.uk/">Retro Gaming Cables</a></li> <li><a href="http://retro-bit.com/">Retro-bit</a></li> <li><a href="https://www.8bitdo.com/">8BitDo</a></li> <li><a href="https://www.hyperkin.com/">Hyperkin</a></li> <li><a href="https://krikzz.com/store/">Krikzz</a></li> <li><a href="https://stoneagegamer.com/">Stone Age Gamer</a></li> <li><a href="https://terraonion.com/en/">Terraonion</a></li> </ul> <p>Information:</p> <ul> <li><a href="https://www.retrorgb.com">RetroRGB</a></li> <li><a href="https://www.youtube.com/user/mylifeingaming">My Life in Gaming</a></li> <li><a href="https://www.youtube.com/user/MrGameSack">Game Sack</a></li> <li><a href="https://www.youtube.com/user/adric22">The 8-Bit Guy</a></li> <li><a href="https://youtube.com/user/RerezTV">Rerez</a></li> <li><a href="https://www.youtube.com/user/CGQuarterly/videos">Classic Gaming Quarterly</a></li> <li><a href="https://www.youtube.com/channel/UCwRqWnW5ZkVaP_lZF7caZ-g">Retro Game Mechanics Explained</a></li> </ul> <p>Feel free to add your own suggestions, highlight ways of doing this same stuff better/cheaper, or ask questions in the comment section below but <strong>do not ask where to get ROMs or BIOS images</strong>. You’re flat-out on your own for that stuff, I’m deleting every comment asking for such things.</p> <p>Game on!</p> Strap in folks, this sucker's over 10,000 words for, like, Mario and stuff. Sat, 19 Oct 2019 00:00:00 +0000 http://www.rodhilton.com/2019/10/19/my-retro-gaming-setup/ http://www.rodhilton.com/2019/10/19/my-retro-gaming-setup/ Technology gaming leisure #Technology #Gaming #Leisure There Are Great Tools in Your bin/ Directory Every Java developer is familiar with javac for compiling, java for running, and probably jar for packaging Java applications. However, many other useful tools come installed with the JDK. They are already on your computer in your JDK’s bin/directory and are invokable from your PATH Every Java developer is familiar with javac for compiling, java for running, and probably jar for packaging Java applications. However, many other useful tools come installed with the JDK. They are already on your computer in your JDK’s bin/directory and are invokable from your PATH Mon, 10 Jun 2019 00:00:00 +0000 https://medium.com/97-things/there-are-great-tools-in-your-bin-directory-54638f3e200e https://medium.com/97-things/there-are-great-tools-in-your-bin-directory-54638f3e200e Programming java #Programming #Java Smart Assholes: A Probing Examination <p>Whatever you do, <strong>don’t hire assholes</strong> at your company. I’ve touched on this topic <a href="http://www.rodhilton.com/2016/06/15/guidingprinciples-part1/#toc-don-t-be-a-jerk">previously</a> but I think it’s important enough to warrant a separate, longer post.</p> <p>Assholes are a disease that spreads through your organization, slowly killing it from the inside. 
Yet, employing assholes is one of the most common mistakes I see tech companies make, because they are so laser-focused on hiring people with technical skills that those skills become the sole determiner of an engineer’s value. Generally, <strong>for someone to be rejected during the hiring process for being an asshole, they have to act like the biggest asshole in the world</strong> - anything short of that is fine.</p> <p>As far as I’m concerned, hiring assholes is the biggest mistake you can make when staffing your organization. It’s worse than hiring people who aren’t that good at their job, and I’d like to devote some time to talk about why assholes are such a problem, and why it’s so important to flush them out.</p> <h1 id="sniffing-out-assholes">Sniffing Out Assholes</h1> <p>First, I need to be clear about what I mean by assholes. I’m specifically addressing the epidemic of <em>smart assholes</em> that are actually good at their jobs, at least on paper. There’s no point in talking about dumb assholes, those folks are easy to fire or avoid hiring just from being bad at their jobs alone.</p> <p>The real danger is people who habitually exhibit asshole behavior but are also really intelligent, knowledgeable, and good at what they’re hired for. People who may be great at building and delivering a product, but while doing so make people around them unhappy. These <strong>Very Important Assholes™ are everywhere in the software industry</strong>, many people are completely successful getting by on just their intelligence, knowing it’s such a valuable quality that employers will tolerate them being assholes out of a perceived need for their skills.</p> <p>So how do you know if someone is an asshole? Here’s a simple test: if someone walks away from another person feeling bad about themselves, they were probably interacting with an asshole. Assholes undermine your confidence, they talk down to you, they try to make themselves look good at your expense, and they generally make you regret having to talk to them.</p> <p>There are a number of indicators someone is exhibiting asshole behavior. All of us do some of these things sometimes, but true assholes will be doing many of these things, and frequently:</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/finger.jpg" /></figure> <ul> <li>Insulting or degrading individuals or groups</li> <li>Joking and teasing to belittle others</li> <li>Tersely worded group e-mails that make people feel uncomfortable</li> <li>Slapping down people of lower status in the company hierarchy</li> <li>Eyerolling, sighing, or otherwise negative body language while others are speaking</li> <li>Ignoring people trying to contribute</li> <li>Interrupting people who aren’t done talking</li> <li>Touching or invading personal space</li> <li>Threatening or intimidating confrontations</li> <li>Publicly calling out and blaming others</li> <li>Undermining someone’s confidence for asking questions</li> <li>Gossiping about coworkers to other coworkers</li> <li>Cliquey behavior and exclusion</li> <li>Taking credit for the ideas or work of others</li> <li>Stirring shit and troublemaking</li> <li>Singling people out for uncommon traits they have</li> <li>Dismissing the opinions and ideas of others without discussion</li> </ul> <p><strong>Every interaction with an asshole involves at least two people: the asshole and someone victimized by that asshole</strong>. 
Often victims are encouraged to “toughen up” or “get a thicker skin,” essentially placing the blame on the victim. There’s frequently an attitude that all of these interactions are just natural byproducts of healthy conflict that makes the business better, as if there’s no way for someone to be challenged or corrected without feeling poorly about themselves afterward. This is simply untrue - people with social skills and a desire to use them can easily adjust someone’s behavior or conceptions while making them feel more educated and smarter for the experience. When you learn something from a non-asshole you walk away thankful for the mentorship.</p> <p>Anything an asshole is trying to “accomplish” with their asshole behavior could be accomplished just as well without being an asshole. The key difference is that non-asshole interactions make the target of the interaction better, while asshole interactions exclusively make the <em>asshole</em> feel better, generally at someone else’s expense.</p> <p>There are basically two kinds of assholes: unintentional and intentional. Unintentional assholes are people who simply lack social skills and wind up being rude or hurtful without meaning to. Intentional assholes have the skills but don’t care to utilize them, because they are egotistical and think others don’t deserve to be treated respectfully because of some perceived inferiority.</p> <p>Unintentional assholes are often given a pass, particularly in the software industry. They didn’t mean it. They’re socially awkward. They don’t know better. Forget about the fact that it’s bizarre to pretend that fully-grown adults are incapable of learning to adjust their behavior, the truth is that it doesn’t matter <em>why</em> someone acts like an asshole, the effect on the victim is the same.</p> <h1 id="why-assholes-stink">Why Assholes Stink</h1> <p>When someone is victimized by an asshole, they either feel worse about themselves (“I’m so dumb”) or worse about the other person (“he’s such an asshole!”).</p> <p>It should be obvious why the former is a productivity-drainer that has effects on self-confidence, learning, morale, and turnover. The effects on productivity are very real, <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2677700">a 2015 study</a> showed that removing an asshole (or converting them to a non-asshole) enhances productivity more than replacing an average worker with a superstar. It’s also important to note the latter case has terrible consequences to team cohesion as well. People may simply want to avoid interacting with that person again, which may hurt productivity if indeed that person is an important part of getting things done. But in the worse case, the victim may seek out an opportunity to be an asshole back, or bully someone else to feel better about themselves (think about grade school behavior). It’s not hard to see how <strong>assholery can have cascading effects where eventually everyone is being an asshole to each other</strong>.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/bullying_at_work.jpg" /></figure> <p><a href="https://www.taylorfrancis.com/books/e/9780429132483/chapters/10.1201/EBK1439804896-7">A 2010 study</a> by Loraleigh Keashly and Karen Jagatic discovered that 27% of workers felt mistreated by someone at work, with 16% reporting persistent abuse. 
A <a href="https://www.taylorfrancis.com/books/e/9780203928554/chapters/10.4324/9780203928554-27">2009 study</a> found 36% reporting “persistent hostility” from coworkers, experiencing at least one aggressive behavior weekly.</p> <p>In <a href="https://www.tandfonline.com/doi/abs/10.1080/13594320143000834">2010, Dieter Zapf and Claudia Gross</a> took 149 victims of self-described bullying at work and taught them various conflict resolution techniques and studied the results. The effect? Victims tried various strategies and even altered their strategies several times before realizing nothing worked. Many resorted to frequently skipping work, but even more resorted to fighting back with the same kind of behaviors. Eventually, most victims left the company.</p> <p>A <a href="https://psycnet.apa.org/record/2013-24013-003">2013 study</a> that had participants engage in a series of math tasks while fielding uncivil e-mails found that not only did people report lower levels of energy and higher levels of stress, they actually performed significantly worse on the math problems. Successfully building software requires creativity and problem-solving skills, there are <a href="https://chase-seibert.github.io/blog/2017/04/14/engineering-meeting-strategies.html">troves</a> <a href="https://dzone.com/articles/minimizing-the-impact-of-interruptions-on-engineer">of</a> <a href="https://medium.com/boost-vc/nothing-gets-done-when-you-interrupt-an-engineer-54ede58b652c">screeds</a> on the internet devoted to the negative effects of merely <em>interrupting</em> an engineer deep in thought, and that’s without the interruption being something that results in negative emotions. According to <a href="https://journals.aom.org/doi/abs/10.5465/amj.2007.20159919">a 2017 study</a>, being on the receiving end of rudeness has a drastic effect on the performance of both routine and creative tasks as well as decreasing helpfulness going forward. Asshole behavior at work can cost companies <a href="https://journals.sagepub.com/doi/abs/10.1177/009102600103000403">millions of dollars a year</a> in lost productivity, drained morale, employee loyalty, and worker commitment.</p> <p><a href="https://journals.aom.org/doi/abs/10.5465/amr.1999.2202131">Lynne Andersson and Christine Pearson found</a> that workplace incivility leads to an “incivility spiral” in which the victims of uncivil behavior engage in more uncivil behavior, which results in more uncivil behavior and so on until your organization is basically full of assholes. A <a href="https://psycnet.apa.org/record/2013-02951-001">detailed study in 2013</a> found the same thing: people essentially engage with coworkers in a positive, trusting manner until they are burned one time, at which point they enter a reciprocal relationship of incivility and hostility.</p> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/just_a_donut.jpg" /><figcaption><p class="caption">Picture unrelated</p></figcaption></figure> <p>The basic issue here is that <strong>assholes seem like a productivity gain on paper, but are making their peers feel oppressed, humiliated, demoralized, and de-energized</strong>. The loss of productivity to everyone around the asshole cancels out any productivity gains you’ve made by having them in the first place, but most mechanisms for evaluating employees will completely hide the nature of what’s happening. It will seem like everyone <em>around</em> the asshole is underperforming, not the asshole himself. 
This will make it easy to reward and promote the assholes, resulting in a layer of asshole leadership that makes a work environment even more oppressive and impossible to tolerate. Managers who aren’t extremely on top of detecting this kind of thing will be left scratching their heads as their teams have high turnover and employee dissatisfaction.</p> <p>Many organizations also rely on peer-based feedback for employee evaluations that only exacerbates this problem. With a handful of assholes peppered throughout your organization, everyone is virtually guaranteed to get peer feedback from an asshole at some point. These feedback schemes are generally permanent, with little in the way of refuting items an employee disagree with. They effectively become permanent records, so one stray comment from an asshole can follow an employee around forever, leaving them to feel the only way to advance their careers is to leave and get a clean slate with another company. Peer-led feedback essentially gives your company’s assholes a megaphone to wield, often anonymously. However, the same feedback systems rarely identify or call out assholes.</p> <p>From ‘<a href="https://smile.amazon.com/Asshole-Rule-Civilized-Workplace-Surviving/dp/0446698202/ref=sr_1_1?keywords=no+asshole+rule&amp;qid=1562018047&amp;s=gateway&amp;sa-no-redirect=1&amp;sr=8-1">The No Asshole Rule</a>’:</p> <blockquote> <p>The effects of assholes are so devastating because they sap people of their energy and esteem mostly through the accumulated effects of small, demeaning acts, not so much through one or two dramatic episodes.</p> </blockquote> <p>Big dramatic episodes are comparatively easy to detect and address through HR. However, <strong>small persistent acts that eventually destroy a person’s morale are individually too minor to even bother reporting or talking about with managers</strong> without seeming overly sensitive. This puts the onus entirely on managers to both notice what’s happening and prevent it, which can be a huge challenge and distraction for busy managers who have a lot on their plate. And if the manager him or herself is the asshole, this situation is basically a lost cause and the only real solution <a href="https://journals.aom.org/doi/abs/10.5465/1556375">is to quit</a>.</p> <figure class="image alignleft captioned"><img src="http://www.rodhilton.com/assets/surrounded_by_assholes.jpg" /><figcaption><p class="caption">Keep firing assholes!</p></figcaption></figure> <p><strong><a href="https://onlinelibrary.wiley.com/doi/abs/10.1348/096317905X40105">A small negative event impacts a person’s morale five times as much as a positive one</a></strong>. This means that, if someone is on a team with five other people and everyone is nice, helpful, and filling that coworker’s day with positive events except <em>one</em> asshole, that asshole is having as much of a negative impact on the team member as everyone else’s positive impacts combined. And as discussed above, most victims of asshole behavior reciprocate with their own asshole behavior, so it wouldn’t be long before those positive, encouraging team members also succumb to negative behavior and before you know it you’re surrounded by assholes.</p> <h1 id="how-assholes-spread">How Assholes Spread</h1> <p>Asshole behavior begets additional asshole behavior from others. Non-assholes are hardened into assholes over time to survive, and a spiral of incivility reigns. 
In a small organization of 100 people with only one asshole, even assuming that nobody leaves or joins the company, you’re guaranteed to have more people who are considered assholes over time as people adjust their behavior toward assholery.</p> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/just_a_camera.jpg" /><figcaption><p class="caption">Picture unrelated</p></figcaption></figure> <p>But the problem is significantly worse than that because successful organizations almost never stagnate on employee count, they grow. And to grow, they hire. And if you have a company with a few assholes on board that’s interviewing and hiring more people, you have a hiring team that also has some assholes.</p> <p>The fundamental problem is this: <strong>assholes hire other assholes</strong>.</p> <p>Non-assholes try to hire non-assholes, but occasionally hire an asshole by accident. However, assholes almost exclusively hire more assholes. The same mental motivators behind them treating those they see as inferior poorly also result in evaluating people who don’t exhibit these same asshole behaviors as inferior. Sure, they won’t walk into a post-interview huddle saying a candidate “didn’t seem like enough of an asshole” but they will be extra harsh on that person’s perceived abilities, or frequently their “communication abilities” which can loosely translate to lacking overconfidence in speaking (to an asshole). Assholes can tend to dominate interview huddles too, controlling the conversation and convincing people who liked a candidate to change their mind, not that it matters since many tech companies have a rule where a single “strong no-hire” is enough to exclude a candidate from an offer. Even if an offer is made, experienced non-asshole candidates can pick up on the asshole vibe from the asshole interviewer (who has no reason to hide it in an interview setting) and decline any offer made.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/interview.jpg" /></figure> <p>So if you have 2% assholes in your organization, then everyone participates in hiring to grow, you’re never going to find yourself at 1% assholes, you’re going to find yourself at 5% assholes. Then those 5% hire more assholes, and so on. <strong>Since asshole behavior spreads BOTH by people reciprocating asshole behavior internally as well as onboarding new assholes externally, the candle is burning at both ends</strong>. Remember that asshole behavior packs 5 times the impact as non-asshole behavior, so reaching only a 20% asshole population results in an oppressive, miserable work environment that cannot be fixed, only escaped. And when non-assholes leave the company, you know who is going to be on the hiring committee for their backfills right? Assholes.</p> <p>If your company has been around a while and has been through multiple rounds of growth, but has never had any kind of explicit intention around not hiring assholes, you can virtually guarantee your company has a shitload of assholes by now.</p> <h1 id="squeezing-out-assholes">Squeezing Out Assholes</h1> <p>Getting rid of assholes can be incredibly difficult. Since they seem so competent on paper, it’s difficult to make a paper trail to justify a termination, and it’s very rare to see someone put on a Performance Improvement Plan just because they’re an asshole. 
Job requirements rarely specify social skills as an essential task, so HR often won’t get on board with letting someone go for being an asshole unless they commit some outrageous act of harassment. Formalizing social skills as part of the job description can help with this.</p> <p><span data-pullquote="It's better to have a hole in your team than an asshole " class="right"></span></p> <p>Often these assholes who combine competence with asshole behavior create a sense among others that they’re critical to the success of a codebase or product. They become the “experts” in particular areas of code nobody else understands, and because they’re difficult to work with nobody else is willing to endure their asshole behavior in order to share ownership of their domains. This results in a situation where assholes are deemed too important to lose, and the thought of just losing all of the assholes in the organization feels like losing your “best” people and a surefire way to destroy your company. Be assured that if you feel like you can’t “afford” to lose all of your assholes, your organization has been overrun by assholes. Really, you can’t afford to keep assholes around - it’s better to have a hole in your team than an asshole.</p> <p>I’ve been doing this for a long time and I’ve never once in 20 years seen a single person leave a team and then watched that team immediately fall apart as a result. I’ve constantly seen people (mostly managers) <em>worry</em> that this would happen if a particular person left, but I’ve never actually seen it happen. Teams seem to always bounce back and fill the hole that’s been left, often with surprising vigor if the person who left was an asshole.</p> <p>It may seem “unfair” to toy with the idea of losing the assholes, particularly the unintentional assholes. Since they “don’t know better” it seems almost cruel to let them go simply because they’re making everyone around them miserable, and it somehow feels like a smaller request to have 50 people tolerate one asshole’s behavior than to demand one asshole figure out how to not alienate everyone with whom they interact. Frankly, I think you’d be doing an asshole a favor by losing them, nothing is a better teacher than failure.</p> <p>Really though, nobody wants to just consider someone a complete “lost cause” just because they have an asshole personality, particularly if they have great skills. “If only,” you can’t help but wonder, “they could retain these great technical skills AND treat their coworkers like human beings. They’d be the ideal team member!” This kind of thinking is spot on, we can all be assholes sometimes and a little bit of correction can help us all learn to be great peers.</p> <p>Luckily, whether you’re reluctant to lose assholes because it’s too hard to create a paper trail to do so, because they seem too critical to lose, because it seems unfair to lose people for personality faults, or because you hold out hope they can be fixed, the solution to <em>all</em> of these conundrums is the same. <strong>Creating an asshole-free work environment is actually very straightforward</strong>, all it takes is a commitment to following a few steps.</p> <p>Think of assholes as an infection. When you have an infection, you have to:</p> <ol> <li>Decide to do something</li> <li>Stop it from spreading</li> <li>Heal the infection site</li> <li>Prevent new infections</li> </ol> <p>So first thing’s first, you need to decide, as an organization, you’re not going to tolerate assholes. 
This may seem super minor or worth skipping (“obviously we don’t want assholes”) but I’m actually advocating for making this completely clear to the entire company. Say you’re not going to hire assholes, and you’re going to make a point of improving social behavior within your company. Send out a mass e-mail, make posters, hire speakers. <strong>Whatever you have to do to convey to each and every employee that their social skills are now going to carry as much weight as their technical skills, do it.</strong></p> <p>That alone might actually be enough. People will start looking for asshole behavior in interviews and may start curbing their own asshole behavior simply to fit into the new official company culture. People will feel more comfortable calling out asshole behavior and empowered to point out when someone is making them feel poorly. Managers will watch for asshole behaviors and try not to reward it with promotions, which will turn being an asshole into a career limiter. Simply being explicit is often enough.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/pke_meter.jpg" /></figure> <p>If that’s not enough, you need to put real effort into the remaining steps. So next, we need to stop the infection from spreading. However many assholes you have in your organization, you need to ensure you don’t develop any more. We can discuss how to clear up the situation later, but before that, we have to ensure it doesn’t get worse. To do this, you’re going to need to <strong>find assholes already in your organization</strong>.</p> <p>The best way to identify your assholes is for managers to get more involved in team meetings and sit with the teams to observe how they interact, particularly with people outside their immediate team. Watch for the kinds of things listed earlier and take note of the people who habitually exhibit these behaviors, even when the victim seems “cool with it”. Don’t rely on peer feedback for this, people will rarely report these small acts of asshole behavior and are generally reluctant to mention assholes out of fear of looking overly sensitive or looking like a “whiner” to their boss. Plus, in an environment with assholes, collecting peer feedback is fraught with danger as previously mentioned, as it gives assholes an anonymous way to exercise incredibly amplified power over victims.</p> <p>This obviously requires much more involvement and social skill from managers. If your management team largely thinks their job is to ensure butts are in chairs from 9 A.M. to 5 P.M., they’re probably not qualified for this task. But having or building these skills is essential for the remaining steps in the process of squeezing out assholes, so you’d better staff your management layer with good people managers anyway. And if you’re trying to figure out who the assholes are <em>in</em> management, that’s easy: just look at which teams have significantly higher turnover than other teams. In this case, leaving the team for another team (with a different manager) counts as turnover. Ignore “exit interviews” and all of that nonsense, nobody is going to burn a manager on their way out just because it’s a small world and that’s a career-limiting move. The only way to deal with an asshole manager is to get away from them, so pure numbers should reveal a pattern.</p> <p>Once you’ve identified all your assholes, get them as far from the hiring process as possible. 
You don’t even have to fire them, just tell them that their work is too important to waste their time on hiring, assholes love that kind of shit. Make sure you don’t promote them or give them any leadership positions, and if they’re already in leadership positions move them out right away.</p> <p>Third, you need to heal the infection. This is where things can get kind of ugly. Assuming you’ve followed the other steps, you should have a cabal of non-asshole managers who have identified all the assholes under them. So now, those managers need to do the hardest part of this process: <strong>consistently and repeatedly correct asshole behavior</strong>.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/oneonone.png" /></figure> <p>Every time someone acts like an asshole, even a little bit, their manager needs to talk to them about it. This doesn’t need to be done publicly (unless it’s really egregious), it’s just to make it clear that the behavior was noticed and that it’s not consistent with how the organization wants its employees to treat each other. This may require speaking to certain employees on a daily basis, perhaps more often for particularly assholey folks. It should feel, to an asshole, almost like this process never lets up. They should get tired of having their behavior called out constantly, it should be kind of annoying. This may be exhausting for employees and managers alike, but keep pushing hard. Make it clear to the asshole employee that there will be no wiggle room on any of this, and every single instance will result in having to have an unpleasant conversation. Do this as a mentor, with a goal of actually correcting this person’s tendencies.</p> <p>One of two things will happen. One, the employee may realize that the environment is different and this is no longer a place they can “get away with” assholery, so they adjust their behavior accordingly with your help. This will transform your asshole into a smart non-asshole, exactly who you want on your team and you’ve done them a huge favor by training them on what is and is not socially acceptable. Or two, the other thing that could happen is that the employee is unwilling or unable to improve and feels so badgered about their behavior that they realize their work environment is not ideal for them, and they leave. In very rare cases, they’ll stick around and just put up with being constantly reprimanded, and yes indeed in these instances you’ll have to let them go, but by then you’ll be so annoyed at their obstinate asshole attitude that you’ll do so with pleasure.</p> <p>Once all the assholes are gone, you have one last thing to do, which is to prevent new infections. This may seem like the most difficult part of the process since it’s often hard to suss out asshole behavior in an interview. Everyone is on their best behavior during interviews, and it’s quite easy to hide one’s asshole qualities for a few hours. Should you ask cliched questions like “tell us about a time you resolved a conflict with a co-worker”? What kind of questions help figure out if someone is an asshole?</p> <p>Well, here’s the short answer: none of them. <strong>There is no question you can think of to determine if someone is an asshole that would not be trivially easy for a self-aware asshole to lie through</strong>. Don’t even bother trying.</p> <p>The trick here is that, for someone to lie and answer your question in a way that doesn’t betray their asshole nature, they have to know they’re an asshole. 
Most assholes know, on some level, that they’re kind of assholes. You can use this fact to your advantage, by not weeding out assholes but instead having them weed themselves out.</p> <p>All you have to do is <strong>make it clear during the interview process that you have a zero tolerance asshole policy</strong>. Word it however you want, and ask your HR department for words you can use other than “asshole” but make it just as apparent to the candidate as you made it to your employees in the first step that acting like a dipshit is a surefire way to put yourself into a world of unhappiness at your company. Their boss will be on their case constantly, promotions will be out of reach, and they’ll generally feel like it’s impossible to get ahead if they’re not socially smart. And since you’ve followed the previous three steps, this is actually true! Any asshole self-aware enough to lie about their asshole tendencies will decide this isn’t a good fit for them and won’t accept any offer you make, and any asshole who isn’t that self-aware will just make it clear they’re an asshole during the course of regular conversation, just have interviewers look for signs and red flag them.</p> <p>Here’s a handy way to remember the four steps, the F.I.B.R. method:</p> <figure class="image aligncenter captioned"><img src="http://www.rodhilton.com/assets/fibr.png" /><figcaption><p class="caption">FIBR: Focus, Identify, Badger, Represent</p></figcaption></figure> <p>So when you need to flush out some assholes, add a little FIBR.</p> <h1 id="assholes-closing">Assholes: Closing</h1> <p>I realize I’m dropping a bomb here; an engineer’s social skills aren’t usually something you’d let one go over. But it’s time to clear the air: if you feel like productivity is backed up and you need to relieve yourself, make this something you do do.</p> <p>The software industry desperately needs to start a movement, so make dumping assholes out the back door at least your number two priority. Let loose with your intentions, dump some excess weight, and fire away if you must: together we can nip assholes in the bud and stop their behavior from leaking into our organizations.</p> <p>So don’t loaf around! Grab a stool and cop a squat, you’ll have to log some time butting heads with assholes but at the tail end, do your business a favor and make wiping out assholes your crowning achievement.</p> It's better to have a hole in your team than an asshole Mon, 03 Jun 2019 00:00:00 +0000 http://www.rodhilton.com/2019/06/03/dont-hire-assholes/ http://www.rodhilton.com/2019/06/03/dont-hire-assholes/ Programming work career #Programming #Work #Career Strengths Only: A Peer-Review Philosophy <p>I’ve been a professional software engineer for nearly twenty years. Almost every place I’ve worked has had some kind of peer review or peer feedback system, and yet I’ve rarely, if ever, heard someone say something like the following:</p> <blockquote> <p>“I really appreciate the constructive criticism I got on a peer review. 
I made an effort to take the feedback to heart, and as a result, improved myself.”</p> </blockquote> <p>Your mileage may vary, obviously, but in two decades I personally have never known anyone to truly appreciate their suggestions for improvement from a peer, at least not within the confines of an official peer feedback system.</p> <p>Virtually every time I’ve heard peer reviews mentioned in any kind of conversation, it’s been within one of three contexts:</p> <ol> <li>“Ugh, I have a bunch of peer reviews to write and I really don’t want to.”</li> <li>Someone revealing they don’t like someone else on the team, and a big part of the reason why is that the other person once gave them negative feedback on a peer review that they consider unfair or otherwise simply disagree with.</li> <li>A peer review comment being used to deny someone a raise, promotion, or transfer to another team.</li> </ol> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/criticism.jpg" /></figure> <p>At the engineering level, based on how I’ve seen peer reviews discussed, <strong>the only functions they serve are 1) annoying you, 2) sowing seeds of discontent within the team, and 3) fucking you over.</strong></p> <p>None of these are good things for your engineering organization, and you’re virtually guaranteed to be better off with no peer reviews whatsoever. Unfortunately, companies seem to have a total inability to discard peer reviews for the morale-obliterators they are.</p> <!-- more --> <p>Ostensibly, one of the value propositions of peer reviews is the negative feedback they solicit. Constructive criticism, after all, can be very valuable, and knowing where one can improve is immensely helpful to growing within one’s career. Yet it seems to me that <strong>valuable criticism can be expressed entirely on a private basis</strong>; involving someone’s manager in any way fundamentally alters how negative feedback is perceived. It’s virtually guaranteed to strip the feedback of its value.</p> <p><span data-pullquote="'Anonymous' feedback doesn't avoid confrontation, it just means every confrontation now involves three people instead of two. " class="right"></span> Official peer review systems are often purported to help draw out constructive criticism from people who are shy or struggle with confrontation. They can leave anonymous feedback for peers without having to speak to them directly. In practice, I’ve never seen this actually play out as designed: 100% of the time, someone can deduce who left them the “anonymous” feedback if it’s anything more detailed than a numeric rating. The confrontation still happens, and any negative feelings that would result from a direct communication still occur. “Anonymous” feedback doesn’t avoid confrontation, it just means every confrontation now involves three people instead of just two. If you can’t find a way to phrase your constructive criticism so that it wouldn’t offend the recipient, the absolute last person on the planet you should share your poorly-worded feedback with is the person who signs their paychecks.</p> <p>This is why, years ago, I adopted a philosophy of “Strengths Only” peer reviews. I’m happy to review peers and provide feedback to them, but <strong>I will only ever focus on my peer’s positive qualities.</strong> I’ll zero in on the skills and values that a peer brings to the table above most other engineers.
This helps my peer’s manager better understand why they are valued on the team, and it helps my peer know exactly where they are excelling.</p> <p>Knowing one’s strengths is <a href="https://www.gallupstrengthscenter.com/home/en-us/cliftonstrengths-for-managers">far more valuable than focusing on one’s weaknesses</a>. Energy and time are finite resources, someone can focus intensely on improving a weak skill and likely will still fall short of someone for whom that skill is a natural strength. <strong>It’s a vastly more efficient use of resources to focus on playing to one’s strengths than improving one’s weaknesses.</strong></p> <p>If there is an area where I believe someone can improve, I will give them my constructive criticisms privately. This can sometimes be an uncomfortable process, but if the feedback isn’t valuable enough to push through that discomfort, it’s not valuable enough to express at all. If the other person feels like my suggestion is indeed something they’d like to focus on, they can take it up with their own manager on their own terms, set their own goals, and track their own progress.</p> <p>The above approach avoids an extremely common problem I see which is that when someone in the middle (like a manager) tries to put the feedback in their own terms (often to anonymize it), they accidentally alter the intent of the feedback provider. It’s basically a game of telephone, but with managers accidentally inserting their own biases and feedback into the rewording attempt. In extreme cases, I’ve caught managers hijacking someone else’s feedback to “reword” it to align with their own feedback for someone, which allows them to provide their own feedback without owning it, because it’s supposedly from peers.</p> <p>I’ve found that almost every time I’ve gotten feedback from someone via a manager, if I was able to figure out who provided it and talk to them, it turned out the manager’s version of their feedback was not what they quite meant. It’s less important that feedback be anonymous than it is be accurately preserved.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/legday.jpg" /></figure> <p>So this is my promise to my fellow engineers and other peers: <strong>I will never type anything negative into a text box that’s going to be seen by your manager or potential manager.</strong> I will speak only of your strengths in an honest way to them (and you) and give you any constructive criticism privately.</p> <p>If the peer review system will not let me proceed with the review unless I conjure up some kind of negative feedback, I will input negatively-worded positive traits like “should have more confidence” or “works too hard”.</p> <p>I have taken some flack for this position from people on occasion. I’ve been told this makes peer review schemes “useless,” a critique I find bizarre. If someone believes there is literally “no use” for positive feedback in a peer review system, then it must be the case that they <em>only</em> want negative feedback. That’s not a peer review, that’s a peer critique. And <strong>if you think your engineering organization would be healthier if negative feedback between peers were maximized, you’re building a dysfunctional group.</strong></p> <p>The worst dysfunction I’ve seen is that some companies will factor in the skill of “giving constructive criticism well” when evaluating someone for a raise or promotion. 
In other words, someone has a financial and career development incentive to find negative things to say about their peers in order to further themselves within the organization. Many organizations will even adopt tools for performing peer feedback that <em>require</em> peers to fill in a box with suggestions for improvement. This has to be just about the most bafflingly destructive way I can imagine to turn a healthy engineering team into a group of competitive, infighting, political, individualistic jerks. If you make someone’s bank account balance depend on their ability to find fault in their peers, they will.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/fired.jpg" /></figure> <p>I’m not going to sit here and recommend everyone adopt this philosophy toward peer reviews (though I do think things would improve if everyone did). I know that some people think so highly of their own opinions that they cannot imagine a world where their constructive criticisms aren’t seen for the life-changing gems of wisdom that they clearly are, but I <em>would</em> like to assure those people that probably not once in their lives has anyone appreciated anything negative they had to say. People generally don’t take kindly to criticism; they get defensive and are much more likely to simply dismiss the critic than they are to take the feedback to heart. Even people who believe they are actively seeking criticism will usually find the criticisms they receive to be invalid for some reason or another.</p> <p><strong>Most negative criticism, no matter how constructive, will have no positive impact on the recipient.</strong> <a href="https://www.fastcompany.com/3039412/the-art-science-to-giving-and-receiving-criticism-at-work">Our lizard brains see criticism as a threat to our survival</a>. Most of it will be dismissed, and the primary result is much more likely to be a damaged working relationship than a positive improvement. Packaging that criticism into an official system that becomes a permanent record shared by someone’s manager is a virtual guarantee that the feedback will be reacted to poorly. Mandatory peer review systems are a blight on the corporate world, eating away at employee morale and breeding animosity and dysfunction. There’s only one way to transform them into something that might actually help your team succeed: focusing on Strengths Only.</p> <p><strong>UPDATE</strong>: A few months after I posted this, <a href="https://hbr.org/2019/03/the-feedback-fallacy">Harvard Business Review published a very similar article saying a lot of the same things</a> - I strongly recommend you read this article, it’s basically what I wrote above but backed up by scientific studies instead of just my personal observations.</p> If you can't find a way to phrase your constructive criticism so that it wouldn't offend the recipient, the absolute last person on the planet you should share your poorly-worded feedback with is the person who signs their paychecks. Wed, 02 Jan 2019 00:00:00 +0000 http://www.rodhilton.com/2019/01/02/strengths-only-a-peer-review-philosophy/ http://www.rodhilton.com/2019/01/02/strengths-only-a-peer-review-philosophy/ Programming work career principles #Programming #Work #Career #Principles Strengths Only: A Peer-Review Philosophy Programming Podcasts: A Roundup <p>A number of people have asked me what programming podcasts I listen to, and they’ve generally been pretty happy with the breadth and volume of my response.
I thought it would be a good idea to share all of these here on my blog in case other programmers are searching for some good podcasts.</p> <p>I generally dislike blog posts like this one because I’ve discovered so many of them myself over the years, only to find that most of the links are broken, defunct, or link to podcasts that are no longer updated. But on the other hand, I haven’t posted anything in over a year and this post is super easy to write so, you know, yay for low-effort content.</p> <p>As far as this selection, I tend to like detailed discussions and interviews for my tech podcasts, and I’ve found that I actually like listening to interviews while coding more than I like listening to music. I also tend to work with functional programming languages on the JVM with a focus on backend development, scalability, and architecture, so my selections here will bias towards those topics.</p> <!-- more --> <h1 id="tech-industry">Tech Industry</h1> <ul> <li><a href="https://www.recode.net/recode-decode-podcast-kara-swisher">Recode Decode</a> - Kara Swisher is an experienced journalist who covers various topics in the tech industry; episodes are typically long interviews with noteworthy tech personalities. Updated every other day. [<a href="http://feeds.feedburner.com/Recode-Decode">Feed</a>]</li> <li><a href="https://itunes.apple.com/us/podcast/id1355212895">Techmeme Ride Home</a> - A great replacement for the Crunch Report if you were into that, Techmeme’s Ride Home is a daily summary of the biggest news in tech. It’s a great way to stay up to speed on what’s going in the industry, and episodes are generally pretty short and good for a quick drive. Updated every weekday. [<a href="http://feeds.feedburner.com/TechmemeRideHome">Feed</a>]</li> <li><a href="https://www.hanselminutes.com/">The Hanselminutes Podcast</a> - Scott Hanselman interviews big tech industry players covering various topics in the tech industry. Updated weekly. [<a href="https://rss.simplecast.com/podcasts/4669/rss">Feed</a>]</li> </ul> <h1 id="general-software-development">General Software Development</h1> <ul> <li><a href="https://www.oreilly.com/topics/oreilly-programming-podcast">O’Reilly Programming Podcast</a> - O’Reilly’s interview series, frequently featuring authors of new O’Reilly books as part of promotion, dealing with a variety of programming and architecture topics. Updated twice a month. [<a href="http://feeds.podtrac.com/2P68PDQSg03Y">Feed</a>]</li> <li><a href="https://www.programmingthrowdown.com/">Programming Throwdown</a> - Each episode typically features a thorough discussion of a specific topic or technology, often with book suggestions. Updated monthly. [<a href="http://feeds.feedburner.com/ProgrammingThrowdown">Feed</a>]</li> <li><a href="https://softwareengineeringdaily.com/">Software Engineering Daily</a> - Interview series with software engineers covering a variety of topics. Updated daily. [<a href="http://softwareengineeringdaily.com/category/podcast/feed/">Feed</a>]</li> <li><a href="Link">Software Engineering Radio</a> - A bit academically focused, run by people from the IEEE Software technical magazine. Updated a couple times per month. [<a href="http://feeds.feedburner.com/se-radio">Feed</a>]</li> <li><a href="https://nodogmapodcast.bryanhogan.net/">no dogma podcast</a> - Discussions and sometimes interviews on various topics, casts a very wide net; sometimes extremely technical dives into a technology, sometimes a higher-level industry discussion. Updated twice monthly. 
[<a href="http://feeds.feedburner.com/NoDogmaPodcast">Feed</a>]</li> <li><a href="http://herdingcode.com/">Herding Code</a> - Various development topics covered, usually skews towards .NET. Updated every other month. [<a href="http://feeds.feedburner.com/herdingcode">Feed</a>]</li> <li><a href="https://www.infoq.com/the-infoq-podcast">The InfoQ Podcast</a> - Complete mishmash of various software development topics, high-level to low-level. Updated 2-4 times per month. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:215740450/sounds.rss">Feed</a>]</li> <li><a href="http://coder.show/">Coder Radio</a> - Wide variety of topics related to software engineering with great hosts. Updated weekly. [<a href="http://coder.show/rss">Feed</a>]</li> </ul> <h1 id="java-development">Java Development</h1> <ul> <li><a href="http://www.javapubhouse.com/">Java Pub House</a> - Very deep dives into Java topics, tools, and technologies. Updated monthly. [<a href="http://javapubhouse.libsyn.com/rss">Feed</a>]</li> <li><a href="http://enterprisejavanews.com/">Enterprise Java Newscast</a> - Discussion about the latest news in the Enterprise Java space, focuses largely on the release of various tools and libraries. Updated twice monthly. [<a href="http://enterprisejavanews.libsyn.com/rss">Feed</a>]</li> </ul> <h1 id="functional-programming">Functional Programming</h1> <ul> <li><a href="https://corecursive.com/">CoRecursive w/ Adam Bell</a> - Interview series talking with various prominent functional programmers, discussing FP techniques and topics [<a href="https://corecursive.com/feed">Feed</a>]</li> <li><a href="https://soundcloud.com/lambda-cast">LambdaCast</a> - Educational series on functional programming, each episode covering a different aspect of FP (Monads, Functors, Applicatives, etc). Updated occasionally. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:239787249/sounds.rss">Feed</a>]</li> <li><a href="https://www.functionalgeekery.com/">Functional Geekery</a> - Discussion-focused podcast about functional programming topics covering a variety of languages. Updated monthly. [<a href="https://www.functionalgeekery.com/feed/mp3/">Feed</a>]</li> </ul> <h1 id="web-development">Web Development</h1> <ul> <li><a href="http://www.fullstackradio.com/">Full Stack Radio</a> - Heavy UI/JavaScript/Web development focus. Updated twice a month. [<a href="https://rss.simplecast.com/podcasts/279/rss">Feed</a>]</li> <li><a href="http://bikeshed.fm/">The Bike Shed</a> - Discussions on various topics, mainly dealing with Ruby, Rails, and JavaScript. Updated 2-4 times per month. [<a href="https://rss.simplecast.com/podcasts/282/rss">Feed</a>]</li> </ul> <h1 id="computer-science">Computer Science</h1> <ul> <li><a href="https://spectrum.ieee.org/multimedia/podcasts">IEEE Spectrum Podcast</a> - Focused primarily on academic and computer science topics. Updated rarely. [<a href="http://feeds.feedburner.com/ieee/spectrumo">Feed</a>]</li> <li><a href="http://podcasts.ox.ac.uk/">Computer Science</a> - The University of Oxford’s podcast on computer science research. Updated rarely. [<a href="http://mediapub.it.ox.ac.uk/feeds/137514/audio.xml">Feed</a>]</li> </ul> <h1 id="architecture">Architecture</h1> <ul> <li><a href="https://www.stitcher.com/podcast/software-architecture-radio">Software Architecture Radio</a> - Matt Stine’s interview series with prominent engineers and authors, focused entirely on software architecture. Updated rarely. 
[<a href="http://feeds.soundcloud.com/users/soundcloud:users:276322801/sounds.rss">Feed</a>]</li> <li><a href="https://www.codingblocks.net/">Coding Blocks</a> - Discussion series about best practices for engineers, strong focus on architectural concerns. Skews a bit toward .NET discussion but the topics are generally applicable in any language. Updated twice monthly. [<a href="http://feeds.podtrac.com/tBPkjrcL0_m0">Feed</a>]</li> <li><a href="https://www.nofluffjuststuff.com/podcast">No Fluff Just Stuff Podcast</a> - Michael Carducci, a frequent NFJS speaker, interviews various other speakers (usually at NFJS events) about a variety of topics, typically with a focus on software architecture. [<a href="http://nofluff.libsyn.com/rss">Feed</a>]</li> </ul> <h1 id="devops">DevOps</h1> <ul> <li><a href="http://www.devopsmastery.com/">Devops Mastery</a> - Kind of intended as a newbie educational series, helping DevOps newcomers improve. It hasn’t been updated in years but I’m still including it because it’s a basic tutorial series. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:79143337/sounds.rss">Feed</a>]</li> <li><a href="http://www.devopsradio.libsyn.com/podcast">DevOps Radio</a> - Interview series covering various topics related to software delivery. Updated twice monthly. [<a href="http://devopsradio.libsyn.com/rss">Feed</a>]</li> <li><a href="https://www.arresteddevops.com/">Arrested DevOps</a> - Discussion series on good DevOps practices and patterns for effectiveness. Updated twice monthly. [<a href="https://www.arresteddevops.com/episode/index.xml">Feed</a>]</li> </ul> <h1 id="soft-skills">Soft Skills</h1> <ul> <li><a href="https://softskills.audio/">Soft Skills Engineering</a> - Meant for programmers but dealing with non-programming topics relevant to work. How to deal with co-workers, promotions, giving talks, interviewing, and all sorts of other soft skills are covered. It’s kind of a “Dear Abby” but for programmers. Updated weekly. [<a href="http://feeds.feedburner.com/SoftSkillsEngineering">Feed</a>]</li> <li><a href="https://ryanripley.com/agile-for-humans/">Agile for Humans with Ryan Ripley</a> - Focused on the software development process with an obvious slant towards Agile and Scrum. Updated weekly. [<a href="http://feeds.feedburner.com/agileforhumans">Feed</a>]</li> <li><a href="http://mentoringdevelopers.com/">Mentoring Developers</a> - Focused on career development for Software Engineers, focused on more junior or newcomers to the field. Updated monthly. [<a href="http://mentoringdevelopers.com/feed/podcast/">Feed</a>]</li> <li><a href="https://jaymeedwards.com/">Healthy Software Developer</a> - A little “self-help seminar” at times but generally good soft skill advice for engineers. Updated weekly. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:332662728/sounds.rss">Feed</a>]</li> <li><a href="http://giantrobots.fm/">Giant Robots Smashing Into Other Giant Robots</a> - A bit focused on management and business but still a good listen about soft skills in the tech industry. Updated weekly. [<a href="https://rss.simplecast.com/podcasts/271/rss">Feed</a>]</li> </ul> <p>There are lots of other great podcasts out there but even as I went over my OPML export to write this post I realized a few of my favorites hadn’t been updated in ages. It doesn’t give me a lot of hope that this very post will stay relevant for long, but it is what it is.</p> <p>Did I miss your favorite podcast? 
Please leave a comment, I’d love to add some more feeds to my reader.</p> <p>I also left out a lot of common programming podcast categories, such as the various podcasts meant for newcomers to the field or people learning to program. I’ve been programming for nearly two decades so these types of podcasts don’t personally interest me and thus I can’t vouch for any of them, but if there are any you like please leave a comment for anyone who might stumble across this post.</p> A number of people have asked me what programming podcasts I listen to, and they’ve generally been pretty happy with the breadth and volume of my response. I thought it would be a good idea to share all of these here on my blog in case other programmers are searching for some good podcasts. Wed, 02 May 2018 00:00:00 +0000 http://www.rodhilton.com/2018/05/02/programmer-podcasts/ http://www.rodhilton.com/2018/05/02/programmer-podcasts/ Programming #Programming A Branching Strategy Simpler than GitFlow: Three-Flow <p>Of all the conversations I find myself having over and over in this field, I think more than anything else I’ve been a broken record convincing teams <strong>not</strong> to adopt <a href="http://nvie.com/posts/a-successful-git-branching-model/">GitFlow</a>.</p> <p>Vincent Driessen’s post “<a href="http://nvie.com/posts/a-successful-git-branching-model/">A successful Git branching model</a>” – or, as it’s become commonly known for some reason, “GitFlow” – has become the de facto standard for how to successfully adopt git into your team. If you search for <a href="https://encrypted.google.com/search?q=git+branching+strategy">“git branching strategy” on Google</a>, it’s the number one result. Atlassian has even adopted it as one of their <a href="https://www.atlassian.com/git/tutorials/comparing-workflows#gitflow-workflow">primary tutorials</a> for adopting Git.</p> <p>Personally, I hate GitFlow, and I’ve (successfully) convinced many teams to avoid using it and, I believe, saved them <a href="http://endoflineblog.com/gitflow-considered-harmful">tremendous headaches</a> down the road. GitFlow, I believe, leads most teams down the wrong path for how to manage their changes. But since it’s such a popular result, a team with no guidance or technical leadership will simply search for an example of something that works, and the blog post mentions that it’s “successful” right in the title so it’s very attractive. <strong>I’m hoping to possibly change that with this post, by explaining a different, simpler branching strategy that I’ve used in multiple teams with great success</strong>. I’ve seen GitFlow fail spectacularly for teams, but the strategy I outline here has worked very well.</p> <p>I’m dubbing this <strong>Three-Flow</strong> because there are exactly three branches. Not four. Not two. Three.</p> <p>First, a word of warning. This is not a panacea. This will not work for all teams or all kinds of development work. In fact off the top of my head, I don’t believe it would work well for 1) embedded programming 2) shrinkwrap release software or 3) open source projects. <strong>Basically Three-Flow works when:</strong></p> <ol> <li><strong>Everyone committing to a codebase works together.</strong> If not on the same team, at least at the same company. If you’re taking code from external developers via GitHub or something, this won’t work. Everyone making commits is “trusted.”</li> <li><strong>The product can be replaced live with another version without user awareness</strong>. 
In other words, hosted web applications and SaaS offerings.</li> </ol> <h1 id="whats-wrong-with-gitflow">What’s wrong with GitFlow?</h1> <p>In brief, <strong>the primary flaw with GitFlow is feature branches</strong>. Feature branches are the root of all evil, pretty much everything that results from using feature branches is terrible. If you take nothing else away from this post, or hell even if you stop reading entirely, please internalize an utter disgust for feature branches.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/boo_feature_branches.png" /></figure> <p>To be fair, Driessen’s post specifically does say that feature branches “typically exist in developer repos only, not in origin” but the graphics really don’t convey that very well, including a specific image of “origin” which includes a pink feature branch with three commits. Moreover, I’ve encountered many teams that have adopted or are considering adopting GitFlow and none of them have ever noticed that Driessen recommends branches only exist on a developer’s machine. Everyone I’ve ever met that adopts GitFlow has long-running remote feature branches.</p> <p>There’s nothing wrong with making a feature branch on your local machine. It’s a good way to hop between different features you might be working on, or have a clean <code class="language-plaintext highlighter-rouge">master</code> in case you need to make a commit to mainline without pulling in what you’re working on. But I’ll go further than the original GitFlow post and say <strong>feature branches should <em>never</em> be pushed to origin</strong>.</p> <p>When you have long-running feature branches, <a href="http://c2.com/xp/IntegrationHell.html">integration hell is almost inevitable</a>. Two engineers are happily working away making commit after commit to their own respective feature branches, but neither of their branches are seeing the other’s code. Even if they’re regularly pulling off mainline, they’re still only seeing the commits that make it into the main branch, not each others. Developer A merges their code into mainline, then Developer B pulls and merges theirs, but now they have to deal with tons of merge conflicts. Developer B might not be in the best position to understand and resolve those conflicts if they don’t fully understand what Developer A is doing, and depending on how long these branches have been alive, they might have tons of code to resolve.</p> <p><span data-pullquote="A developer's primary form of communication with other developers is source code. Long-running branches are silence. " class="left"></span></p> <p>Long-running feature branches are the exact opposite of what you want. <strong>A developer’s primary form of communication with other developers is source code</strong>. It doesn’t matter how often you have stand-up meetings, when it comes to the central method of communication, <strong>long-running branches represent dead silence</strong>. <a href="https://blog.newrelic.com/2012/11/14/long-running-branches-considered-harmful/">Long-running branches are the worst</a>.</p> <p>Feature branches also scale terribly. You can get away with one developer having a long-running feature branch, but as your team grows and you have more and more engineers in the same codebase, each pair of developers running feature branches is failing to communicate effectively about their work. 
If you have a mere 8 engineers each running their own feature branch, you have \(\frac{8^2}{2} = 32\) different failed communication lines. Add another engineer and it’s 40 missed lines of communication.</p> <h2 id="use-feature-toggles">Use Feature Toggles</h2> <p>Instead of using feature branches, use <a href="https://www.martinfowler.com/articles/feature-toggles.html">feature toggles</a> in your code. Feature toggles are essentially boolean values that allow you to not execute new code that isn’t ready for production while still sharing or possibly even deploying that code. It looks in code exactly as you might expect:</p> <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span><span class="o">(</span><span class="n">newCodeEnabled</span><span class="o">)</span> <span class="o">{</span> <span class="cm">/*new code*/</span> <span class="o">}</span> <span class="k">else</span> <span class="o">{</span> <span class="cm">/*old code*/</span> <span class="o">}</span> </code></pre></div></div> <p>The old code will continue executing until the newCodeEnabled toggle is flipped. These toggles can be implemented through a config file or even some kind of globally accessible boolean, though in my experience the best way is to use an external config like <a href="https://www.consul.io/docs/agent/options.html">Consul</a> or <a href="https://zookeeper.apache.org/">Zookeeper</a> so that features can be toggled on or off without requiring a redeployment. <strong>Product owners and other stakeholders love being able to view a dashboard of toggles and turn features on and off without asking developers.</strong></p> <p>If two developers are working from the same branch but using different feature toggles, the chances for a conflict are far lower. And since they’re working off the same branch, they can pull and push multiple times per day to stay in-sync. At a bare minimum, developers should pull at the start of the day and push at the end of the day, so that no two local repositories are out of sync by more than a workday.</p> <p>Automated tests should be written for the cases where the toggle is both on and off. This basically means that when a new feature is being developed, the existing tests simply need to be adjusted to setUp with the flag off. Then new tests get added with the flag on. The test suite ensures that the “old way” never breaks. Sometimes the execution path through the code can be affected by more than one toggle. If you have two toggles that intersect in some way, you need 4 groups of tests (both off, both on, and both variants of one on and one off). Again this ensures that two developers working in the same area of the code are regularly seeing each other’s changes and integrating constantly. Code coverage tools can easily tell you if you’re missing a potential path through the code.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/toggle.jpg" /></figure> <p>Feature Toggles can also be expanded to be more dynamic. Rather than simply being booleans, you could build your system so that toggles could depend on the status of users, allowing users or groups of users to “opt-in” to a beta program that gives access to bleeding edge features as they’re developed to solicit customer feedback. 
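</p> <p>A user-aware toggle doesn’t have to be much more complicated than the boolean example above. Here’s a minimal sketch of the idea (the <code class="language-plaintext highlighter-rouge">ToggleStore</code> class and the toggle names are made up for illustration; a real implementation would read its state from Consul, Zookeeper, or whatever config store you’re using rather than from in-memory maps):</p> <pre><code class="language-java">import java.util.Map;
import java.util.Set;

// Hypothetical toggle store: globally-enabled flags plus per-user beta opt-ins.
// In a real system these maps would be backed by an external config service.
class ToggleStore {
    private final Map&lt;String, Boolean&gt; flags;      // toggle name -&gt; globally on?
    private final Map&lt;String, Set&lt;String&gt;&gt; optIns; // toggle name -&gt; opted-in user ids

    ToggleStore(Map&lt;String, Boolean&gt; flags, Map&lt;String, Set&lt;String&gt;&gt; optIns) {
        this.flags = flags;
        this.optIns = optIns;
    }

    // The new code path runs if the toggle is globally on,
    // or if this particular user opted in to the beta.
    boolean isOn(String toggle, String userId) {
        if (flags.getOrDefault(toggle, false)) {
            return true;
        }
        return optIns.getOrDefault(toggle, Set.of()).contains(userId);
    }
}
</code></pre> <p>Application code then asks <code class="language-plaintext highlighter-rouge">toggles.isOn("new-checkout", userId)</code> instead of checking a hard-coded boolean, and the same check works whether the flag is flipped globally or only for a handful of beta users.</p> <p>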
Toggles could be dependent on geographic location or even a dice-roll, allowing for A/B testing and canary releases when features are ready to be turned on.</p> <p>When the feature is finished and turned on in production, you can schedule a small cleanup task to delete the old code paths and the toggle itself. Or you can leave the code in place if it’s a feature that may have reason to turn off again in the future - I’ve left feature toggles in place that have really saved the day down the road when some major backend system was experiencing a catastrophic problem and stakeholders wanted to simply turn the feature off temporarily. If you do intend to remove the toggle, it’s a good idea to schedule it into your regular process as soon as you start the feature, lest the team forget when bigger, shinier work comes along.</p> <p>I cannot overstate the value of feature toggles enough. <strong>I can virtually guarantee that if you start using toggles instead of branches for long-developed features, you’ll never look back or want to use feature branches again</strong>. In nearly every case that I gave in to a team member that pushed hard for a feature branch because of this or that reason, it wound up being a massive pain later on that delayed the release of important software. I’ve pretty much always regretted feature branches, and never once regretted making a feature toggle. It takes some getting used to, particularly if you’re accustomed to long-running branches, but the positive impact toggles have on your team is tremendous.</p> <h1 id="introducing-three-flow">Introducing Three-Flow</h1> <p>Alright, now that we’ve gotten feature branching off the table, we can talk about the workflow that I’ve used successfully on multiple teams.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/three-flow.png" /></figure> <p>In this approach, all developers work off <code class="language-plaintext highlighter-rouge">master</code>. If a feature is going to need to be in development for a while, it’s put behind a feature toggle, and still kept on master with all the other code. <strong>All commits to master are rebased</strong>. It’s a good idea to set <a href="http://stevenharman.net/git-pull-with-automatic-rebase">automatic rebase on pulls</a>. If you have a local feature branch for your work, it should be rebased onto master, there should be no trace of the branch in origin.</p> <p>That’s it. That’s where all the main development work happens. You have one branch, the default <code class="language-plaintext highlighter-rouge">master</code> branch. And everyone codes there. Everything else about Three-Flow is concerned with managing releases.</p> <h2 id="releasing">Releasing</h2> <p>When it’s time to do a release (regular cadence or whenever the stakeholders want it, your call), the <code class="language-plaintext highlighter-rouge">master</code> branch is “cut” to the <code class="language-plaintext highlighter-rouge">candidate</code> branch. The same <code class="language-plaintext highlighter-rouge">candidate</code> branch is used over and over again for this purpose.</p> <p>The purpose of <code class="language-plaintext highlighter-rouge">candidate</code> is to allow a QA team to do any kind of regression testing it would like to do. Theoretically, all of the features themselves have been tested as part of accepting that the work is done. But this release candidate branch allows one last check to make sure everything is in order before going to production. 
The <code class="language-plaintext highlighter-rouge">master</code> branch where all the work is done and accepted should be tested in a production-like environment where <strong>the relevant feature toggles are on</strong>. The <code class="language-plaintext highlighter-rouge">candidate</code> branch where work is sanity-checked before release should be tested in a production-like environment where <strong>the relevant feature toggles are off</strong>. In other words, it should run the code the same way that production itself will, with the new toggles defaulted to off.</p> <p>To cut a release candidate, you’d do this:</p> <pre><code class="language-output">$ git checkout candidate #assume candidate already tracks origin/candidate $ git pull #make sure we're up to date locally $ git merge --no-ff origin/master $ git tag candidate-3.2.645 $ git push --follow-tags </code></pre> <p>The reason for using <code class="language-plaintext highlighter-rouge">--no-ff</code> is to force git to create a merge commit (a new commit with two parents). This commit will have one parent that’s the previous HEAD of <code class="language-plaintext highlighter-rouge">candidate</code> and one that’s the current HEAD of <code class="language-plaintext highlighter-rouge">master</code>. This allows you to easily view your git history and see when branches were cut, by whom, and which commits were pulled over.</p> <p>You’ll also noticed we tagged the release. More on that in a bit.</p> <p>If bugs are found in the <code class="language-plaintext highlighter-rouge">candidate</code> branch as part of the testing effort, they are fixed in <code class="language-plaintext highlighter-rouge">candidate</code>, tagged with a new release tag, and then merged down into <code class="language-plaintext highlighter-rouge">master</code>. These merges should also use the <code class="language-plaintext highlighter-rouge">--no-ff</code> parameter, so as to accurately reflect code moving between the two branches.</p> <p>When a release candidate is ready to go out the door, we update the <code class="language-plaintext highlighter-rouge">release</code> branch so that it’s HEAD points to the same commit as the HEAD of the <code class="language-plaintext highlighter-rouge">candidate</code> branch. Since we’re tagging every release we make on the <code class="language-plaintext highlighter-rouge">candidate</code> branch <strong>we can simply push the tag itself to be the new HEAD of <code class="language-plaintext highlighter-rouge">release</code></strong>:</p> <pre><code class="language-output">$ git push --force origin candidate-3.2.647:release </code></pre> <p>The <code class="language-plaintext highlighter-rouge">--force</code> basically means to ignore whatever else is on the origin <code class="language-plaintext highlighter-rouge">release</code> branch and set it’s HEAD to point at the same commit that <code class="language-plaintext highlighter-rouge">candidate-3.2.647</code> points to. Note that this is not a merge - we don’t want to complicate the git history with this, really the only reason we’re even bothering with the <code class="language-plaintext highlighter-rouge">release</code> branch at all is so that we have a branch to make production hotfixes to if need be. 
Yes, this force push means any hotfix work in <code class="language-plaintext highlighter-rouge">release</code> would get overwritten - if you find yourself releasing new candidates to production while there is ongoing hotfix production work, your team has a serious coordination/communication issue that needs to be addressed. Either that or you’re doing way too many production hotfixes and have a major quality problem. <strong>Production hotfixes should be possible but rare</strong>.</p> <p>The reason we do a <code class="language-plaintext highlighter-rouge">push --force</code> rather than a merge is that if you do a merge, it means that the commit at the HEAD of <code class="language-plaintext highlighter-rouge">candidate</code> and the commit at the head of <code class="language-plaintext highlighter-rouge">release</code> may have different sha-1’s, which isn’t what we want. We don’t want to make a <em>new</em> commit for the release, we want <em>exactly</em> what was QA’d and that’s the commit at the HEAD of <code class="language-plaintext highlighter-rouge">candidate</code>. So rather than create a merge, we forcefully tell git to make the tip of <code class="language-plaintext highlighter-rouge">release</code> exactly match that of the release candidate, the HEAD of <code class="language-plaintext highlighter-rouge">candidate</code>.</p> <p>Any production hotfixes that need to happen are made to <code class="language-plaintext highlighter-rouge">release</code> and then merged into <code class="language-plaintext highlighter-rouge">candidate</code> and then into <code class="language-plaintext highlighter-rouge">master</code>, all with <code class="language-plaintext highlighter-rouge">--no-ff</code>. This is quite a bit of git work for a production hotfix (2 distinct merge operations), but production hotfixes should be rare anyway.</p> <p>If you follow this workflow exactly, then when you view your git history as a graph it will look pretty much exactly like the above picture, showing exactly which commits moved between branches.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/threeflow-history.png" /></figure> <p>You’ll notice that the way the above graph does NOT resemble the earlier picture is that you don’t see the dotted lines pushing to <code class="language-plaintext highlighter-rouge">release</code> except the most recent one. That’s because we always do a <code class="language-plaintext highlighter-rouge">--force</code> push, meaning that every time we release to production, we completely ignore what production once was. This is intentional - it doesn’t matter what was on production and when, all that matters is what’s on production <em>right now</em> so we can hotfix it in case of a production emergency. The only time you’ll even see the <code class="language-plaintext highlighter-rouge">release</code> branch at all on this graph is for whatever is currently in production, and whenever hotfixes were made that had to be merged into <code class="language-plaintext highlighter-rouge">candidate</code> and <code class="language-plaintext highlighter-rouge">master</code>. This is exactly what we want: no unnecessary information adding noise to our graph.</p> <h2 id="release-notes">Release Notes</h2> <p>You can easily generate “release notes” for a deployment to production. 
You just need to compare the tag for the current <code class="language-plaintext highlighter-rouge">release</code> branch to the tag for the current <code class="language-plaintext highlighter-rouge">candidate</code> branch.</p> <p>If you’re using tags, you can do this comparison by using the tag names. It’s easy to remind yourself of which tag is in production, because every time we force an update of the <code class="language-plaintext highlighter-rouge">release</code> branch pointer, we use the tag. That means that there’s always exactly one tag that points to the same commit that the HEAD of <code class="language-plaintext highlighter-rouge">release</code> points to. You can find out which tag this is by running:</p> <pre><code class="language-output">$ git describe --tags release
candidate-3.1.248
</code></pre> <p>So if we know that our <code class="language-plaintext highlighter-rouge">candidate</code> branch has been tagged as <code class="language-plaintext highlighter-rouge">candidate-3.2.259</code>, you can get the list of commits that make up the difference between those two tags like so:</p> <pre><code class="language-output">$ git log --oneline candidate-3.1.248..candidate-3.2.259
</code></pre> <p>You could also do this if you didn’t want to mess with tags. The following will always just compare what’s on <code class="language-plaintext highlighter-rouge">release</code> (production) with what’s on <code class="language-plaintext highlighter-rouge">candidate</code> (what’s planned to go to production):</p> <pre><code class="language-output">$ git log --oneline release..candidate
</code></pre> <p>Running these commands will show you every single commit that is in the new candidate that wasn’t in the previous release. At my last gig, we liked to include the ticket numbers for our issue tracker in our commits, which allowed a script to cross-index this list of commits with actual work items in Jira.</p> <h2 id="common-operations">Common Operations</h2> <p>Just to summarize a bit, here are some of the operations you might want to be able to do. All of these examples assume that your local branches are properly set up to track the remote branches, and that those local branches are up to date (see the sketch below if they aren’t set up yet).
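</p> <p>If you don’t have local <code class="language-plaintext highlighter-rouge">candidate</code> and <code class="language-plaintext highlighter-rouge">release</code> branches tracking the remote ones yet, something like this will set them up (just a sketch; a plain <code class="language-plaintext highlighter-rouge">git checkout candidate</code> will usually do the same thing automatically when the branch only exists on origin):</p> <pre><code class="language-output">$ git fetch
$ git checkout --track origin/candidate
$ git checkout --track origin/release
$ git checkout master
</code></pre> <p>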
If you’re not sure, it’s often a good idea to do a <code class="language-plaintext highlighter-rouge">git fetch</code> and then use names like <code class="language-plaintext highlighter-rouge">origin/master</code> instead of <code class="language-plaintext highlighter-rouge">master</code> to ensure you’re using the origin’s version of the branch in case yours is stale.</p> <h3 id="how-do-i-cut-a-release-candidate-off-master">How do I cut a release candidate off master?</h3> <pre><code class="language-output">$ git checkout candidate
$ git pull
$ git merge --no-ff master
$ git tag candidate-3.2.645 #optionally tag the candidate
$ git push --follow-tags
</code></pre> <h3 id="how-do-i-release-a-candidate">How do I release a candidate?</h3> <pre><code class="language-output">$ git push --force origin &lt;tag for the candidate&gt;:release
</code></pre> <p>Alternatively if you aren’t using tags you could just do:</p> <pre><code class="language-output">$ git push --force origin candidate:release
</code></pre> <p>or, if you’re not sure you’re up to date locally:</p> <pre><code class="language-output">$ git fetch
$ git push --force origin origin/candidate:release
</code></pre> <h3 id="how-do-i-find-which-branches-have-a-particular-commit-on-them">How do I find which branches have a particular commit on them?</h3> <p>Often people want to know if a particular code change is currently in production or set to go out to production in the next release. Here’s an easy way to find which of the three branches a commit is on:</p> <pre><code class="language-output">$ git branch -r --contains &lt;sha of commit&gt;
</code></pre> <h3 id="how-do-i-find-which-tag-a-branch-is-pointing-to">How do I find which tag a branch is pointing to?</h3> <p>Or in more accurate terms, for a given branch pointer, how do I find which tag(s) point to the same commit as the branch HEAD?</p> <pre><code class="language-output">$ git describe --tags &lt;branch&gt;
</code></pre> <h3 id="how-do-i-find-out-which-commits-are-going-to-go-out-with-a-release">How do I find out which commits are going to go out with a release?</h3> <pre><code class="language-output">$ git log --oneline release..&lt;tag of release candidate&gt;
</code></pre> <p>You could also do this if you didn’t want to mess with tags:</p> <pre><code class="language-output">$ git log --oneline release..origin/candidate
</code></pre> <h3 id="how-do-i-set-up-the-candidate-and-release-branches-for-the-first-time">How do I set up the candidate and release branches for the first time?</h3> <p>You can create what’s called an ‘orphan’ branch with no commits to it, but you’ll be unable to push it to origin to set up the remote branch until you have some kind of commit.</p> <p>Pretty much every project starts with an initial commit, usually just a readme or something. I recommend just making a branch off that commit and pushing that. What you’re looking for is the first merge commit into <code class="language-plaintext highlighter-rouge">candidate</code> to have two parents so that it shows up in logs correctly. So really, any commit on <code class="language-plaintext highlighter-rouge">candidate</code> will work, may as well choose the first one.</p> <pre><code class="language-output">$ git branch candidate `git log --format=%H --reverse | head -1`
$ git checkout candidate
$ git push -u origin candidate
</code></pre> <p>If you try the approach where you create a fresh orphan commit, you’ll find that the first time you try to merge, git will tell you “refusing to merge unrelated histories”.
You basically need the branches to all share a commit, so it may as well be the first commit. Word of warning though, you might get merge conflicts the very first time you actually cut a release candidate (but probably not).</p> <p>To set up the release branch for the first time, just do a release. As soon as you force push the right commit to the remote <code class="language-plaintext highlighter-rouge">release</code> branch, it will be set. You’ll also want to check out a local copy of the same branch for any hotfixes you may want to do:</p> <pre><code class="language-output">$ git branch release $ git branch release --set-upstream-to=origin/release </code></pre> <h1 id="questions">Questions</h1> <h2 id="isnt-this-just-cactus-model">Isn’t this just cactus model?</h2> <p>You may be wondering if Three-Flow is simply Jussi Judin’s <a href="https://barro.github.io/2016/02/a-succesful-git-branching-model-considered-harmful/">cactus model</a>, an alternative to GitFlow that uses the default <code class="language-plaintext highlighter-rouge">master</code> branch for all development work.</p> <p>For the most part, yes, it is. The key difference is that Judin recommends moving commits between the <code class="language-plaintext highlighter-rouge">master</code> and <code class="language-plaintext highlighter-rouge">release</code> branches via cherry-picks. I very much recommend against that, cherry-picks are a last resort, to only be used when correcting a mistake. I prefer rebasing to merging, and I prefer merging to cherry-picking. I think it’s important to be able to use merge commits to actually see when and what commits were merged, and by whom. Being able to pull up an accurate graph of merges is important. I only use cherry-picking when I put a commit on the wrong branch by mistake.</p> <p>The other main difference is the <code class="language-plaintext highlighter-rouge">candidate</code> branch which I accept as something of a necessary evil. While my goal is always an always-deployable master where all commits automatically go to production, I’ve found that most organizations and teams are not ready or comfortable with that kind of deployment schedule. Most groups like to have some kind of QA buffer time and that’s basically what <code class="language-plaintext highlighter-rouge">candidate</code> provides. The goal of the team should be to remove the need for the <code class="language-plaintext highlighter-rouge">candidate</code> crutch but in the mean time Three-Flow provides a very usable, simple branching model that generally gives teams everything they need to be successful with git.</p> <h2 id="isnt-this-just-gitflow-without-feature-branches">Isn’t this just GitFlow without feature branches?</h2> <p>I have actually explained this branching strategy to GitFlow adopters by telling them it is essentially just GitFlow except that you don’t have feature branches, all development happens on <code class="language-plaintext highlighter-rouge">develop</code> but you rename GitFlow’s <code class="language-plaintext highlighter-rouge">develop</code> to <code class="language-plaintext highlighter-rouge">master</code> and you rename GitFlow’s <code class="language-plaintext highlighter-rouge">master</code> to <code class="language-plaintext highlighter-rouge">release</code>.</p> <figure class="image alignleft captioned"><img src="http://www.rodhilton.com/assets/always3.png" /><figcaption><p class="caption">Always 3 there are. 
No more, no less.</p></figcaption></figure> <p>The motivator behind Three-Flow is simplicity. GitFlow encourages the creation of a multitude of feature branches, release branches, and hotfix branches. As a project goes on, the log can start to look impossibly complex. With Three-Flow, there are no feature branches or hotfix branches. Hotfixes simply happen on the production <code class="language-plaintext highlighter-rouge">release</code> branch. And instead of having multiple release branches, you have a single <code class="language-plaintext highlighter-rouge">candidate</code> branch that you just keep reusing.</p> <p>You don’t need a system of what to name your branches because there are literally exactly three branches in origin: <code class="language-plaintext highlighter-rouge">master</code>, <code class="language-plaintext highlighter-rouge">candidate</code>, <code class="language-plaintext highlighter-rouge">release</code>.</p> <p>Answering the question of “where does my code go?” is very straightforward. Is it a production hotfix? If so, it goes in <code class="language-plaintext highlighter-rouge">release</code>. Is it fixing a bug that was found while QAing the release candidate? If so, it goes in <code class="language-plaintext highlighter-rouge">candidate</code>. Anything else and it goes in <code class="language-plaintext highlighter-rouge">master</code>.</p> <h2 id="what-about-code-reviews">What about code reviews?</h2> <p>Code reviews are just a gate on <code class="language-plaintext highlighter-rouge">master</code>, so you use this same process but instead of committing directly to master, you commit however your code review tool requires. You can do this by creating short-lived feature branches just for the purpose of gating a commit to master, or whatever your code review tool prefers.</p> <h2 id="what-about-a-codebase-with-multiple-artifacts">What about a codebase with multiple artifacts?</h2> <p>A lot of people have a single codebase that builds multiple, independently deployable artifacts. Those individual buildable artifacts need their own separate QA cycles and different artifacts will have different version numbers in production. How does Three-Flow work with such a setup?</p> <p>I’ve actually worked this way very recently. We had a single git repository that built multiple different artifacts that deployed independently. The solution was simple, and each independent artifact only adds two branches to Three-Flow.</p> <p>You still have a single shared codebase in <code class="language-plaintext highlighter-rouge">master</code> and you use feature toggles instead of branches. But let’s say you have two artifacts foo and bar. You simply have a <code class="language-plaintext highlighter-rouge">foo_candidate</code>, <code class="language-plaintext highlighter-rouge">foo_release</code>, <code class="language-plaintext highlighter-rouge">bar_candidate</code>, and <code class="language-plaintext highlighter-rouge">bar_release</code>. When you tag release candidates, you tag in the format <code class="language-plaintext highlighter-rouge">foo-candidate-2.1.423</code> and <code class="language-plaintext highlighter-rouge">bar-candidate-3.2.126</code>.</p> <p>Otherwise the process works exactly the same way. This scales better than you might expect, I was very recently on a large project that had 4 different independently deployable artifacts that came out of a single codebase. 
8 <code class="language-plaintext highlighter-rouge">candidate</code> and <code class="language-plaintext highlighter-rouge">release</code> branches, plus <code class="language-plaintext highlighter-rouge">master</code>. Generally there was a pretty strong mapping between an individual “team” and one of these artifacts, so a team or a group still just worked with 3 branches.</p> <h2 id="is-there-a-way-to-not-have-to-manually-type-so-many-arguments">Is there a way to not have to manually type so many arguments?</h2> <p>One of the weirder aspects of this flow is that pretty much every command I suggest typing into git has additional arguments.</p> <p>Any time you do a <code class="language-plaintext highlighter-rouge">merge</code>, I’m asking you do to a <code class="language-plaintext highlighter-rouge">merge --no-ff</code>. When you cut a release and tag it I suggest you <code class="language-plaintext highlighter-rouge">push</code> using <code class="language-plaintext highlighter-rouge">push --follow-tags</code> so your tag gets up to origin as well.</p> <p>You can actually set these arguments to be defaults. Since all merging in Three-Flow uses <code class="language-plaintext highlighter-rouge">--no-ff</code>, you’re safe to run:</p> <pre><code class="language-output">$ git config --global merge.ff no </code></pre> <p>If you run this then from that point on you can simply run <code class="language-plaintext highlighter-rouge">git merge</code> without the <code class="language-plaintext highlighter-rouge">--no-ff</code> argument.</p> <p>Similarly, you can set <code class="language-plaintext highlighter-rouge">push</code> to always push locally-created tags:</p> <pre><code class="language-output">$ git config --global push.followTags true </code></pre> <p>And I mentioned this up above but it’s a good idea to set your master branch to automatically rebase whenever you pull. You can do this like so:</p> <pre><code class="language-output">$ git config --global branch.master.rebase true </code></pre> <p>You can actually set any new branch to automatically rebase on pulls in case you’re making local feature branches that track master:</p> <pre><code class="language-output">$ git config --global branch.autosetuprebase always </code></pre> <p>You could leave out the <code class="language-plaintext highlighter-rouge">--global</code> from any of these commands so the configuration only applies to the specific git repository you’re working in, as well.</p> <h2 id="cant-i-just-use-merging-for-the-release-branch">Can’t I just use merging for the release branch?</h2> <p>First of all, you can do whatever you want. This is just a strategy that worked for me on multiple different teams and I wanted to spread it around because I think it’s much simpler than GitFlow.</p> <p>But moreover, yes, if you don’t like the idea of doing a <code class="language-plaintext highlighter-rouge">push --force</code> to update <code class="language-plaintext highlighter-rouge">release</code> and losing some historical information, but would rather just do a <code class="language-plaintext highlighter-rouge">merge --no-ff</code>, by all means do it. 
This has the advantage of being fewer things to remember how to do, basically any time you move code between the three branches you’re doing a <code class="language-plaintext highlighter-rouge">merge --no-ff</code>.</p> <p>In fact, an early version of this strategy did just that, <code class="language-plaintext highlighter-rouge">--no-ff</code> merges to <code class="language-plaintext highlighter-rouge">release</code>. It worked out fine, reading the git history was really straightforward. The only thing I don’t like about it is that it’s <em>kind of</em> a fib, in that what goes out to production should be the exact same HEAD of <code class="language-plaintext highlighter-rouge">candidate</code> that went through QA, and doing a merge commit creates a brand new commit on <code class="language-plaintext highlighter-rouge">release</code> that didn’t necessarily get tested. You could, of course, not do a merge commit to <code class="language-plaintext highlighter-rouge">release</code> and only do fast-forwarding commits. But then you sort of lose the history anyway, and there’s always a chance that the branch can’t be fast-forwarded and you need a merge commit anyway. And forget about rebasing to <code class="language-plaintext highlighter-rouge">release</code>, you’re pretty much guaranteed to have to work your way through a ton of merge conflicts, often very similar ones over and over as you individually resolve each commit in the release.</p> <p>For my money, doing the force pushes kind of reinforces that <code class="language-plaintext highlighter-rouge">release</code> isn’t really a branch and shouldn’t be treated like one. It’s really just an updated pointer to production. It’s basically just a series of tags, except that since it’s a branch you can easily make a new commit on it for production hotfixes. To each their own though, there are definitely some simplicity advantages to always doing the same thing with <code class="language-plaintext highlighter-rouge">candidate</code> that you do with <code class="language-plaintext highlighter-rouge">release</code>. But hell, either way is preferable to using GitFlow. Have I mentioned how much I hate GitFlow? It’s like, a <em>bunch</em>.</p> <h1 id="summary">Summary</h1> <p>To summarize the main Three-Flow branching model outlined here:</p> <figure class="image alignright captioned" style="width: 300px;"><img src="http://www.rodhilton.com/assets/triforce-threeflow.png" width="300" /><figcaption><p class="caption">It's dangerous to git alone. Take this.</p></figcaption></figure> <ul> <li>There are three branches in origin: <code class="language-plaintext highlighter-rouge">master</code>, <code class="language-plaintext highlighter-rouge">candidate</code>, <code class="language-plaintext highlighter-rouge">release</code></li> <li>Normal development happens on <code class="language-plaintext highlighter-rouge">master</code>. 
All new commits are rebased.</li> <li>Features that are incomplete are put behind feature toggles, ideally dynamic toggles that can be changed without a redeploy</li> <li>To cut a release, <code class="language-plaintext highlighter-rouge">master</code> is merged into <code class="language-plaintext highlighter-rouge">candidate</code> with a <code class="language-plaintext highlighter-rouge">--no-ff</code> merge commit</li> <li>Any bugs found during a candidate’s QA phase are fixed in <code class="language-plaintext highlighter-rouge">candidate</code> and then merged into <code class="language-plaintext highlighter-rouge">master</code> with a <code class="language-plaintext highlighter-rouge">--no-ff</code> merge commit</li> <li>When a candidate is released to production, it’s <code class="language-plaintext highlighter-rouge">push --force</code>d to the tip of <code class="language-plaintext highlighter-rouge">release</code></li> <li>Any production hotfixes happen in <code class="language-plaintext highlighter-rouge">release</code> and are then merged into <code class="language-plaintext highlighter-rouge">candidate</code> which is then merged into <code class="language-plaintext highlighter-rouge">master</code>.</li> </ul> <p>That’s really all there is to it. Like I say above, there are all kinds of development paradigms that this won’t apply to, it’s largely geared toward web applications. But if you think Three-Flow might work for your organization, I highly recommend giving it a shot before adopting the future headache and incomprehensible git history that is GitFlow.</p> <p><strong>In my opinion, Three-Flow is the quickest and easiest way to get up and running with a sensible branching strategy with minimal rules to follow and the fewest complexities to understand.</strong></p> <p>Tried something similar and loved it? Tried something similar and found an issue that you solved? Think my use of <code class="language-plaintext highlighter-rouge">--force</code> is a blasphemous use of git and I’m the stupidest dumb idiot that ever ate his own boogers? Feel free to leave a comment below.</p> Three-Flow has exactly three branches - no more, no less: master, candidate, release. Sun, 09 Apr 2017 00:00:00 +0000 http://www.rodhilton.com/2017/04/09/a-different-branching-strategy/ http://www.rodhilton.com/2017/04/09/a-different-branching-strategy/ Programming #Programming Software Engineering Guiding Principles - Part 2 <p>Here are five more Guiding Principles I use when making technical decisions as a software engineer. You can also check out <a href="http://www.rodhilton.com/2016/06/15/guidingprinciples-part1/">Part 1</a>.</p> <p>Just as before, this list is really a list of principles I use when making difficult technical decisions or mantras I use to snap myself out of being stuck - it’s really not about just how I try to write good code (SOLID, DRY, etc) although there is a little bit of that as well.</p> <h1 id="perfect-is-the-enemy-of-good">Perfect is the Enemy of Good</h1> <p>When it comes to designing code, I think it’s better to get started as soon as possible and make changes and modifications via refactoring as needed. It’s better to get something up and working quickly, rather than spending time debating in front of whiteboards about the correct way to do things. 
In my experience, engineers in particular have such an affinity for elegance that we can get wrapped around the axle trying to figure out the perfect, most elegant solution.</p> <p>I’m not saying to write shitty code, obviously. It’s still important to follow good design principles like <a href="https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)">SOLID</a>, the <a href="https://en.wikipedia.org/wiki/Law_of_Demeter">Law of Demeter</a>, <a href="https://en.wikipedia.org/wiki/KISS_principle">KISS</a>, <a href="https://en.wikipedia.org/wiki/Defensive_programming">defensive programming</a>, <a href="https://christiantietze.de/posts/2015/09/clean-code/">CLEAN</a>, <a href="https://en.wikipedia.org/wiki/Separation_of_concerns">separation of concerns</a> and so on. It’s just that you don’t have to get every little thing perfect, it’s better to get something that’s imperfect but works built and then refactor to perfection later.</p> <p>Remember <a href="https://en.wikipedia.org/wiki/John_Gall_(author)">Gall’s Law</a>:</p> <blockquote> <p>A complex system that works is invariably found to have evolved from a simple system that worked.</p> </blockquote> <p>It’s important to realize when you or your team have gotten into a state of <strong>analysis paralysis</strong>, which is one of the reasons I like Pair Programming so much - it’s handy to have a second person around to recognize when you’re wrapped up analyzing instead of building. Nobody really asked you to build the world’s greatest, most reusable, most well-designed system on the planet. <strong>The company doesn’t need the perfect solution, it just needs one that’s good enough</strong>.</p> <p>There are lots of ways engineers can get gridlocked doing analysis, and it’s important to recognize all of them.</p> <h2 id="premature-optimization">Premature Optimization</h2> <p>Don Knuth calls Premature Optimization the <a href="http://c2.com/cgi/wiki?PrematureOptimization">root of all evil</a>. It can happen both in code/design, as well as architecture.</p> <p>If you find yourselves talking about caching layers, circuit breakers, or geo redundancy before building even the first version of the software, you might be getting ahead of yourself. <strong>Those things are all just as easy to add later as they are to add now</strong>, so there’s no reason to get wrapped up on these concerns early.</p> <p>Obviously I’m not advocating writing inefficient algorithms when an efficient one is just as easy to implement, but if the code is substantially cleaner with something less efficient, leave well enough alone and just get it working. Even dumbass bubble sort is usually good enough, and it has the advantage that you remember how it works right now without double checking anything on Wikipedia.</p> <h2 id="bike-shedding">Bike Shedding</h2> <p>Otherwise known as the <a href="https://en.wikipedia.org/wiki/Law_of_triviality">Law of Triviality</a>, this is when a disproportionate amount of weight is given to trivial concerns when designing something. 
The term comes from the fact that teams will tend to focus on the minor issues that are easy to understand, such as what color to paint the staff bike shed.</p> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/sheldon-cooper.png" /><figcaption><p class="caption">You're in my spot</p></figcaption></figure> <p>The more time you devote to making a decision, the more you need to periodically ask yourself “does this really matter?” A lot of times, it doesn’t matter to anyone else on the team, it doesn’t matter to your users, and it certainly doesn’t matter to the company. <strong>If it only matters to you, you’re probably being, you know, kind of a dork</strong>.</p> <p>An entire team can bike shed as well. Recognize when your team is bike shedding and stop the conversation, drive it toward the things that matter. If people keep gravitating toward the trivial, it means that there’s a lack of comprehension of the difficult decisions that actually matter. You either need to stop and get everyone on the same page about the challenging stuff, or you have the wrong group of people making the decision.</p> <h2 id="overengineering">Overengineering</h2> <p>Premature reusability. Engineers have a tendency to want to design components to be as generic and reusable as possible; there’s an old joke from Nathaniel Borenstein I’m fond of:</p> <blockquote> <p>No ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure.</p> </blockquote> <blockquote> <p>Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.</p> </blockquote> <p>A really great example of over-engineering is found in Bob Martin’s <a href="https://smile.amazon.com/Software-Development-Principles-Patterns-Practices/dp/0135974445?sa-no-redirect=1">Agile Software Development</a>. In it, Bob Martin and Bob Koss sit down to do the <a href="http://butunclebob.com/ArticleS.UncleBob.TheBowlingGameKata">Bowling Game Kata</a>, a programming exercise where you simply write code to calculate the scores for a bowling game.</p> <p>The two engineers started talking about what classes they were going to have. There would need to be a <code class="language-plaintext highlighter-rouge">Game</code>, which of course would have 10 <code class="language-plaintext highlighter-rouge">Frame</code> instances, each of which would have between 1 and 3 <code class="language-plaintext highlighter-rouge">Throw</code> instances. This seemed natural, like how you might answer a “design the object model for a bowling game” question in an interview.</p> <p>But as they tried to write tests to drive out the behavior of <code class="language-plaintext highlighter-rouge">Frame</code> and <code class="language-plaintext highlighter-rouge">Throw</code> they found that there were no behaviors to those classes. A <code class="language-plaintext highlighter-rouge">Throw</code> is really just an <code class="language-plaintext highlighter-rouge">int</code>. In the end, they wound up with a simple <code class="language-plaintext highlighter-rouge">Game</code> class and nothing else, with a handful of methods on it to say how many pins were hit, and a method to get the score.</p> <figure class="image aligncenter"><a href="http://xkcd.com/974/"><img src="http://imgs.xkcd.com/comics/the_general_problem.png" /></a></figure> <p>Don’t start any large endeavor with a mind on generality and reuse.
Follow the <a href="https://blog.codinghorror.com/rule-of-three/">Rule of Three</a> - make everything designed for single use and naturally you will eventually discover reusable components falling out when refactoring after you’ve done the same or similar things in multiple places.</p> <h2 id="exception---architecture">Exception - Architecture</h2> <p>It’s important to note that there is one exception to this idea: your code <em>design</em> can be just good enough. But your system <em>architecture</em> needs to basically be perfect from the start. This can be extremely difficult to get right, but it’s important, so a little bit of analysis paralysis is somewhat forgivable.</p> <p>When it comes to code design, evolutionary design is the way to go - just build it and evolve it. But for architecture, get the team into a room with a whiteboard and hash out the details before you start building. <strong>Evolutionary design, up-front architecture</strong>.</p> <p>How do you know the difference between design and architecture? One analogy I’m fond of is that architecture is strategy while design is tactics. Doing the right thing vs doing things right. That’s a helpful distinction but I find myself most fond of <a href="http://www.ibm.com/developerworks/library/j-eaed10/">Martin Fowler’s definition</a>:</p> <blockquote> <p>Architecture is the stuff that’s hard to change later. And there should be as little of that stuff as possible.</p> </blockquote> <p>Anything that would be extremely difficult to change later on is something deserving of a substantial amount of upfront analysis. The language you choose for your code is architecture because changing it would require a full rewrite. If you’re using a highly opinionated framework like Rails, Grails, or something that spreads throughout your entire codebase like Spring, that’s architecture.</p> <p>If you go with microservices, lots of decisions that are typically architecture suddenly become design, because you could swap one microservice for another easily, or quickly swap out the language or framework of one service. However, now the contracts between services - which would be easy to refactor if they were all in a single codebase together as simple classes - cease being design and become architecture. And of course the decision of whether to use microservices at all <em>is</em> architecture.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/architecture.jpg" /></figure> <p>The data store you use is likely architecture. It can sometimes be easy to swap out MySQL for Oracle if you’re using a strong database abstraction layer or relying on JPA or ActiveRecord or something similar, but as your data needs grow you’ll quickly find yourself using customized queries or perhaps even stored procedures, and migrating becomes difficult. Even if you choose something like Postgres and try to keep the option open to switch to Oracle or MariaDB, you’re still picking a relational database at all, and switching to a NoSQL store would be extremely difficult so no matter how you slice it, it’s architecture.</p> <p>Public-facing APIs are a strange middle ground. Once you’ve decided on the APIs, they’re impossible to change without affecting your users, so they’re architecture. However, you can introduce a new API version later fairly easily, so it’s not that hard to change your mind, making it sort of design?
Of course, the WAY you version the APIs in general is architecture, because if you provide no facility for versioning early on it becomes difficult to add a new version later.</p> <p>Overall the dichotomy is subjective so you need to use your best judgement, but what’s important is that you don’t spin your wheels making something perfect that could be perfected later if it can be good enough now.</p> <h1 id="if-you-break-my-code-its-my-fault">If You Break My Code, It’s My Fault</h1> <p>I’ve blogged about this one before, under the more provocative title “<a href="http://www.rodhilton.com/2011/10/21/i-broke-your-code-and-its-your-fault/">I Broke Your Code, and It’s Your Fault</a>”. In fact, there was even a <a href="https://www.reddit.com/r/programming/comments/qbg9y/i_broke_your_code_and_its_your_fault/">lengthy reddit discussion</a> about it in which folks tried to decide if I was clinically insane, or just a regular moron.</p> <p>Hyperbolic title aside, I still stand by the original point. Even if someone else does something as annoying as change the interface I was depending on in my code, it shouldn’t be possible for them to so thoroughly break the code I wrote without SOMETHING telling them that they did so. <strong>All it takes is one failed test to say “hold up, you broke shit.”</strong></p> <p>When I push code up to the shared repository, it’s my job to ensure it works, not QAs. But it’s also my job to ensure that a junior engineer or a new hire can’t just break it without something telling him or her it happened. When I write code, I try to imagine, what would happen if some other engineer came in and modified the class I just wrote, maybe didn’t understand why I was doing <code class="language-plaintext highlighter-rouge">-1</code> somewhere, and so they just removed it? Would that be an annoying thing to do? Sure, and I would hope that the other engineer might ask me why I was doing it if I failed to make it obvious from the code itself. But maybe this is years from now and I’m not even at the company anymore, so they remove the <code class="language-plaintext highlighter-rouge">-1</code>, or they think my code sucks so they rewrote the entire function from scratch. The instant they do that, a test I wrote somewhere should fail (hopefully with an explanation of why it needed to be the way it was).</p> <p>By writing my code like this, and creating what reddit argued is <em>too many tests</em>, I am encouraging the other members of my team to embrace <strong>fearless refactoring</strong>. Don’t like how I wrote something? Refactor it, and don’t worry about breaking anything - I wrote enough tests to ensure that you can’t. Is it possible I’ll make a mistake and fail to cover something I should have? Of course it is, but when this happens, the refactor-er in question did me a favor by highlighting a mutation that I missed.</p> <p>Top comment on that thread questions the wisdom of being happy that an app breakage highlights a missing test we can add. The commenter says to try telling the client about your unit test suite while they’re losing customers left and right due to the bug. I guess that’s a fair poi– wait, what? You’re developing applications where breakages and bugs can utterly destroy your company, and you’re <em>not</em> writing a metric ton of tests? That’s some serious Evel Knievel shit right there. Um, Evel Knievel was a stuntman in the 70’s. Er, the 70’s were a decade about 30 years before Spongebob first aired. 
Nevermind.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/safetynet.jpg" /></figure> <p>Look, the safety net of an overabundance of unit tests combined with some high-level smoke tests to ensure that basic functionality is always working should give the entire team the freedom to refactor and rewrite anything they don’t like. If everyone on a team is able to adopt this attitude, the end result is code that is incredibly clean. <strong>If the team isn’t fearlessly refactoring, and they’re afraid to make tiny changes and improvements because something might break somewhere, your team is hamstrung</strong>. Modules start to have a “here be dragons” vibe, with everyone afraid to improve them and so they rot until your entire codebase is rotten and you think you need to rewrite it (we talked about that <a href="http://www.rodhilton.com/2016/06/15/guidingprinciples-part1/#toc-the-team-unqualified-to-refactor-is-unqualified-to-rewrite">already</a> though).</p> <p>I’m not saying it should be impossible to break my code. Changing the interfaces of things I depend on, or literally going in and modifying what I wrote could easily make it behave incorrectly. I can’t stop that. I’m saying it <strong>should be impossible to break it without a test automatically telling you that it happened</strong>.</p> <p>When you actually imagine that another engineer might come in and accidentally (or maliciously) modify your code, your tests get much stronger. You’ll find that your assertions are better when you try to guard against this sort of thing, which is really what unit testing is all about. Lots of people track coverage for tests, but coverage basically just counts lines hit during the testing phase. You could write a suite of unit tests that actually hits every single line of code, giving you 100% code coverage, but makes no assertions whatsoever. Your coverage is high, but your tests are borderline useless in this case. <strong>Raw coverage isn’t what I’m talking about here</strong>.</p> <p>It’s not about how many tests you have or how many lines they cover, it’s about how strong the tests you have are. And approaching your tests with the attitude that it should be impossible to break your code without a test failing is how you make them strong.</p> <h2 id="zealotry">Zealotry</h2> <p>This is probably the <em>strong opinion</em> I hold that comes closest to zealotry for me. As a counterexample, I really <a href="http://www.rodhilton.com/2009/02/21/i-love-pair-programming/">love pair programming</a> but I left the gig where I did it regularly and took a job where the team really didn’t like pairing, and I adjusted fine to not pairing. I usually write my tests first and enjoy the TDD red-green-refactor cycle, but there are times when I suspend this practice and write tests later. There are plenty of things I really love doing that I’m more than happy to stop doing as the situation demands, but I don’t think I can go back to not testing at all, and I might be unwilling to listen to arguments trying to convince me to.</p> <p>At this point in my career, the level of physical discomfort I feel writing code with no tests at all is unbearable. Not too long ago I was extremely busy with one task but was forced to switch gears to implement a small change I didn’t really agree with to an unrelated part of the code. As some kind of juvenile form of protest, I half-assed the code and wrote no tests, just to get it done and off my plate so I could go back to what I was doing.
I pushed it up to the central git repo and felt so uncomfortable with what I had done that I lasted about 60 seconds before going back in and writing some tests to cover the change and explain why in the test case. My rebellion was brief, I am not a badass.</p> <p>I’ve heard of places where bosses will declare that unit tests are a waste of time that slow down development, and I genuinely don’t think I could work in a place like that anymore. Ten years ago and I wouldn’t have cared, but today it just seems like an impossible request that I don’t write tests, like asking me to drink lighter fluid or something. I’ve fallen into such a comfortable cycle of code-a-little, test-a-little that eschewing the process feels completely unnatural and foreign; whiteboard coding interviews seem so bizarre to me now, I’d never write so many lines of code without tests at work. My god, there’s an <code class="language-plaintext highlighter-rouge">if</code> statement in it, that’s two tests!</p> <p>My code design has vastly improved by thinking about testability. Once upon a time I’d have used <code class="language-plaintext highlighter-rouge">Math.random()</code> or <code class="language-plaintext highlighter-rouge">System.currentTimeMillis()</code> or <code class="language-plaintext highlighter-rouge">new FileReader("whatever.txt")</code> without a second thought, but viewing code through the lens of testability made me realize that all of those things are subtle integration dependencies on the underlying system. Figuring out how to write unit tests for code that depends on random number generators, a clock, or the filesystem has forced me to consider things as candidates for dependency injection that I’d never have considered without those tests. Even if I were to delete those tests afterwards, the code is still cleaner and better for having been designed with them in mind.</p> <h1 id="if-you-hate-it-do-it-more">If You Hate It, Do It More</h1> <p>This one is easy to say, but very hard in practice to commit to. Basically, whenever I find myself dragging my feet on something I don’t want to do, I need to sit down and ask myself why I hate doing it. Chances are, when I get to the root cause of my disdain or anxiety, I find that it’s because something is extremely inefficient or error-prone.</p> <p>Hate performing deployments? Why? There’s a good chance it’s because it involves a bunch of manual steps, handbuilding artifacts and manually uploading them somewhere, then shelling into multiple boxes and executing commands. The desire to get away from anything unpleasant is very strong, but it’s these situations that would benefit the most from doubling down and doing it more often.</p> <p>If you’re deploying every quarter because it’s such a pain, you need to start deploying every month. If you still hate it, every week. If you still hate it, every day. At some point you’ll hit a point where you say enough is enough, and if you’re going to deploy this crap every day then it needs to be easier. And that’s when you start developing deployment pipelines and writing automated scripts. <strong>The more you do something you hate, the better you’ll get at doing it</strong>, if only to keep your sanity.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/hate.jpg" /></figure> <p>Hate provisioning machines? Start adding and removing boxes from clusters on a regular basis. At first it will be difficult and annoying - <em>that’s good</em>, that’s what will make you better. 
In no time you’ll be using OpenStack or AWS, augmenting setup with Puppet or Chef, or maybe even containerizing your entire process with Docker. Your infrastructure will be better for it, everything that you hate doing is likely a weak spot in your development.</p> <p><strong>Hating something is your brain’s way of telling you “this sucks,” but instead of responding by hating it, respond by taming it.</strong> The more you do it, the easier it is to figure out which parts suck the most, and how you can improve them.</p> <p>One of my favorite examples of this is Netflix’s <a href="http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html">Chaos Monkey</a> approach. Dealing with failure was such a negative experience for Netflix that they started doing it all the time, so often that they built software that would randomly fail-out nodes, clusters, or even entire regions. It forced Netflix to revisit how their software works, and handle failure better. What came out the other end was a vastly superior product. And also “Daredevil”.</p> <p>This principle is tough because it’s a lot like cleaning an incredibly messy room or a trainwreck of a garage. Things start pretty bad, but the real issue is that <strong>things have to get worse before they can get better</strong>. Only by embracing the things you hate doing the most do you force your own hand, resulting in something that can do the horrible job you hate automatically, on-demand, and quickly.</p> <h2 id="meetings">Meetings</h2> <p>Yes, this principle applies to basically every aspect of your job. This one is particularly tough for me but: if you hate meetings, have more of them. Start having daily meetings if you need to. In so doing, you and the rest of the team will discover exactly what it is you hate about meetings so much. The only way to really figure out EXACTLY what you hate about meetings is to expose yourself to them so often that it becomes immediately apparent what doesn’t work about them for you.</p> <p>Once you’ve identified what meeting dysfunctions make you despise them so much, it’s easier to fix those things and make meetings more enjoyable. Honestly, I hate meetings too but I need to ask myself: geeze, why? Should it really be so unpleasant to meet and chat with other engineers I respect and enjoy working with? Are we really such misanthropic jerks that we can’t enjoy exchanging ideas? And don’t say that the reason you hate meetings is because they prevent you from doing <a href="http://www.rodhilton.com/2012/10/05/getting-real-work-done/">Real Work</a>, I’ve already talked about how dumb that is.</p> <p>After you and your team realize what doesn’t work about meetings, you can take steps to address them until meetings aren’t something you despise. And once you don’t hate it, the inverse of the rule applies: <strong>if you like it, you can survive doing it less</strong>. Dial your meeting schedule back down once the thing you hate is <em>not meeting</em>.</p> <h1 id="be-the-worst-person-in-the-band">Be the Worst Person in the Band</h1> <p>I got this from Chad Fowler’s “<a href="https://amazon.com/Passionate-Programmer-Remarkable-Development-Pragmatic-ebook/dp/B00AYQNR5U/">The Passionate Programmer</a>” who in turn took it from jazz guitarist Pat Metheny, who said:</p> <blockquote> <p>Always be the worst guy in every band you’re in.</p> </blockquote> <p>This idea has resonated with me ever since. Is it uncomfortable to be the worst person on the team? Yeah, it sure is. 
And it’s this discomfort that will drive you to be better. When you’re the best person in the band, you walk around with tons of confidence but you aren’t learning anything and you aren’t improving, because nothing is driving you to. When you’re the worst, you have to step it up.</p> <p>One of the great things about this career is that it’s absolutely impossible to ever know all of it. It’s growing and new tools and ideas are being added at a rate faster than you can possibly learn them. I can see this being stressful for some people, but it’s my favorite thing about it. There’s always, <strong>always</strong> more stuff to learn and improve. It’s like being a bookworm and walking into a library of infinite size.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/yoko.png" /></figure> <p>Nothing makes me want to learn more and be better than being surrounded by people who are better than me. I’ve worked plenty of jobs where I was the worst guy in the band, and plenty where I was the best, and I always come out of the ones where I was the worst guy in the band feeling like I just spent the entire time leveling up like crazy. Being the best fills you with confidence, which is nice on an emotional level, but it’s nowhere near as satisfying as coming out the other end of a job a vastly improved person.</p> <p>I’ve modified this slightly to <strong>be the second worst guy in the band</strong>. Being the truly worst can make you feel useless, like you’re not making any valuable contribution. Plus it actually helps to be able to mentor someone, one of the most effective ways to learn something is by teaching. In any case, definitely don’t be the best person in the band.</p> <p>Another way I’ve heard this phrased comes from Scott Bain as quoted in <a href="https://smile.amazon.com/Beyond-Legacy-Code-Practices-Software/dp/1680500791">Beyond Legacy Code</a>:</p> <blockquote> <p>Always strive to be mentoring someone and to be mentored by someone.</p> </blockquote> <p>You can subscribe to all the blogs, read all the books, and attend all the conferences, but nothing will help you learn and keep up with the ever-changing world of software development like working every day with someone better than you. The more people that you’re working with that are better than you, the stronger this effect.</p> <h1 id="your-first-loyalty-is-to-your-users">Your First Loyalty is to Your Users</h1> <p>This one might be a little controversial, and proudly proclaiming it on my blog might make me unemployable. But at the end of the day, I as a software engineer answer to an authority greater than my product owner, my boss, my VP, my CTO, or anyone else who signs my paychecks: I owe my users quality software. If I wouldn’t be willing to attach my personal cell phone number to the feature I’m developing, I shouldn’t write it.</p> <figure class="image alignleft captioned"><img src="http://www.rodhilton.com/assets/tron2.jpg" /><figcaption><p class="caption">I fight for the users</p></figcaption></figure> <p>I have, on more than one occasion, gotten into a heated debate with a product owner or even a supervisor about a feature I was asked to implement. Often, this stems from situations where the people who are USING the product aren’t the ones PAYING for it, and the client’s higher-ups are writing the checks for features that their underlings using the product might dislike. 
I’ve found myself usually able to win these arguments by helping the product owners understand how unhappy their users will be, and pointing out that happy users will eventually leave their current company and become a sales lead at their next gig, but the most heated and intense arguments I’ve been involved in at work always stemmed from me advocating on behalf of the voiceless users who would end up on the receiving end of antagonistic features.</p> <p>Unlike most of the other principles on this list, this one won’t result in better quality codebases or more SOLID or testable designs - it actually affects the product I build at a business level. I will never do anything half-assed, never lie or mislead my users, never take advantage of them, and never intentionally create a negative experience for them because it will line my or someone else’s wallet. This is especially true when my end-users are not engineers themselves - <strong>they have no power or control in this software-centric world, so taking advantage of the power imbalance is particularly unethical</strong>.</p> <p>I’m not arguing that everything you build needs to make the world a better place at some cosmic level, or that you need to be carbon neutral in every facet of your life or anything like that. I understand that sometimes you need to pay the bills. But what I’m saying is to never, ever forget that at the end of the day some poor schmuck is going to be using the thing you’re building, and this person has people who care about him or her as much as you care about your loved ones. Imagine your mother or husband or best friend using your software - do you feel good about yourself? If not, don’t build it.</p> <p>Don’t write software that <a href="https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal">tricks emissions tests</a> just because your asshole boss told you to. Don’t write <a href="https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal">copy protection schemes that phone home with a user’s private data</a> just because your CEO thinks he’s entitled. Don’t develop code that <a href="http://www.cnet.com/news/e-tailers-snagged-in-marketing-scam-blame-customers/">opts users into monthly charges if they are dumb enough to trust you when using your product</a>, there’s no such thing as a “stupid tax”, <strong>your stupidest users need your advocacy the most.</strong></p> <p>Just remember, someday there might be a scandal and a court case that involves engineers being held accountable for the features they built, and “I was just following orders” may not be enough to save you. Be proud of what you create. It’s not enough to assume <a href="https://groups.google.com/forum/#!msg/comp.lang.c++/rYCO5yn4lXw/oITtSkZOtoUJ">the guy who ends up maintaining your code will be a violent psychopath who knows where you live</a> - assume that your poor users are as well.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/uncle-ben.jpg" /></figure> <p>I think there’s a tendency for developers to “just do what they’re told.” The users and their experience are the concern of the product owners, marketing types, salespeople, and other stakeholders - the developers just build the software according to the requirements, right? In all honesty, I wish that this was a safe mindset to adopt - I’d rather concern myself only with the code and my fellow developers who have to work with it, and leave the features and user experiences up to other people.
But time and time again, I’ve found that for whatever reason the folks in those positions lose sight of the user experience and request antagonistic features. At the end of the day, the engineer is where the rubber meets the road - we’re the gatekeepers on what actually gets created, so <strong>we’re the last line of defense before something goes out the door that will make the lives of users worse</strong>. Product Owners and marketers can draw boxes and do photoshop mockup designs all they want, but the engineers are the only ones with the <em>power</em> to actually build the stuff users will be interacting with, and as the sole wielders of this power, we have the <em>responsibility</em> to consider those users even when others don’t.</p> <h1 id="conclusion">Conclusion</h1> <p>I feel like there are more things I wind up saying a lot, but one of the most challenging parts of writing up this list was even stepping back enough to realize which things could be written down. When you live by certain ideals long enough, they become so ingrained that it’s hard to even remember what the principles are. Most of the ones on this list I realized only because I’ve been called out by other people for saying them so often.</p> <p>Anything missing? Any principles that you live by as an engineer? Leave a comment, I’m curious what other people see as their <strong>Software Engineering Golden Rules</strong>.</p> Here are five more Guiding Principles I use when making technical decisions as a software engineer. You can also check out Part 1. Mon, 20 Jun 2016 00:00:00 +0000 http://www.rodhilton.com/2016/06/20/guidingprinciples-part2/ http://www.rodhilton.com/2016/06/20/guidingprinciples-part2/ Programming work career principles #Programming #Work #Career #Principles Software Engineering Guiding Principles - Part 1 <p>I find that I repeat myself often at work. There are a handful of things I say so often when discussing decisions that I’ve been called out for it on occasion for acting like a broken record.</p> <p>But the reason I keep repeating these phrases is that I think they inform a great deal of my decision-making. They are, in effect, my guiding principles when developing software professionally.</p> <p>I thought it might be fun to write a few of these things down because I think that they’re worth sharing - I feel like these principles have steered me in the right direction time and time again. Obviously, there are exceptions to these and there are times when they should be ignored (after all, not being a zealot is one of the principles) but I think they will generally take an engineer down the right path.</p> <h1 id="have-strong-opinions-weakly-held">Have Strong Opinions, Weakly Held</h1> <p>I think the phrase I’ve heard more than any other in my life is “tell us how you really feel!” which is I guess people’s way of telling me I’ve made them uncomfortable by expressing an opinion too aggressively. It’s true, I can be very strongly opinionated, and I’ve gotten into more than my fair share of, oh, let’s call them “passionate discussions” in the workplace. I’m never insulting or personal, but I have strong opinions on how to do things.</p> <p>That said, I think it’s important to always be open to having my mind changed. If anything, I think I’m TOO easy to convince to change my mind on something, often it takes only one strong counterpoint to completely demolish an opinion I’ve held firm to for years. 
My opinions are informed by years and years of experience, but that experience doesn’t always apply in every situation, so it’s important to be willing to adjust in light of new information or facts.</p> <p>Apparently this phrase “strong opinions, weakly held” comes from Stanford Professor Bob Sutton. I think it’s a good way to approach every opinion really. I’ve switched between polar opposite positions on a number of issues, including political and philosophical issues that I won’t get into on this blog, but I think I do a good job of allowing my convictions of experience to be suspended to make way for alternative arguments. <strong>I never assume I’m objectively right just because I care</strong>.</p> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/zealot.jpg" /><figcaption><p class="caption">Unless you hunger for battle, don't be a zealot</p></figcaption></figure> <p>It’s important that the thing that makes an opinion weakly held is a strong, rational, logical argument for the alternative position. I won’t back down on something I think is important because of how passionately another person disagrees, or how upset it makes them that they’ve met opposition. This is what makes the opinion strong: I genuinely care about believing the largest possible number of true and correct things, so the only way to dislodge a strong opinion is with true and correct things that work to counter it.</p> <h2 id="dont-be-a-jerk">Don’t Be A Jerk</h2> <p>I cringed when I watched Season 3, Episode 6 of my favorite show Silicon Valley, as the main character Richard felt so strongly about Tabs over Spaces that he alienated everyone in his life over it. These debates are so incredibly pointless to me, I do not understand how people waste so much time caring about them. <strong>Strong opinions are not the same as zealotry</strong>, zealotry is company and team poison. Strong opinions only matter if the things they’re about matter. Having extremely strong opinions about tabs vs spaces, or emacs vs vim makes you borderline un-hireable to me, bringing zealots onto your team violates the <a href="https://smile.amazon.com/Asshole-Rule-Civilized-Workplace-Surviving/dp/0446698202">No Asshole Rule</a> (though for the record, spaces and vim, #sorrynotsorry).</p> <p>Additionally, it’s fine to have strong opinions but if you find yourself belittling or mocking other people in order to stand by them, they probably aren’t that strong. Your positions on technical matters should stand on their own weight, without needing to knock people down. Don’t be one of those people that walks around acting like a jerk and then justifying it by saying you have strong opinions. The best engineers I’ve worked with have consistently been skilled at <strong>not only having well-reasoned strong opinions, but communicating those opinions respectfully to others.</strong></p> <p><span data-pullquote="It's better to have a hole in your team than an asshole. " class="left"></span></p> <p>Being a technical wizard doesn’t give someone the right to be a pompous ass to everyone else. I’m a strong advocate of taking people who are, at a personal level, insufferable, and firing them for being a poor cultural fit, regardless of how much they know about this or that technology. It’s better to have a hole in your team than an asshole.</p> <p>I started this list with this one in particular because it’s important. The rest of this list is, essentially, a list of strongly held opinions I maintain. 
But it’s important that even these opinions, having reached Guiding Principle level, are subject to change in the light of strong counterarguments, or subject to suspension in light of unique circumstances.</p> <h1 id="the-team-unqualified-to-refactor-is-unqualified-to-rewrite">The Team Unqualified to Refactor is Unqualified to Rewrite</h1> <p>I strongly, strongly believe that a full-on code rewrite is nearly always the wrong thing to do. Either you pull everyone off the current iteration of the product to do the rewrite, which means your main product languishes, or you pull some people off to do the rewrite, meaning the rewrite team has to always be catching up with the ever-growing main product.</p> <p>From a simple project management standpoint, this is a disaster. Want to know how long the rewrite will take? Well, in the former case, you’re working with a team that’s dealing with new technology and new development, so there’s no way to apply any previously recorded team velocity as a prediction of future performance. Moreover, you don’t actually have any sense of the scope of the project, because the requirements are basically “everything the app does now”, which will include weird corner cases that have long since been forgotten. So you have an unknown scope and an unknown team velocity, and you’re trying to make a prediction of when this work will be completed? So development is going to stop on the main product line for an indeterminate amount of time. And this is the BEST case scenario, the one where everyone can focus on doing the rewrite.</p> <p>In the latter case, it’s even more unpredictable - you still have the unknown scope issue, but it’s worse because you also have to include, in the scope, getting to parity with whatever else is built while the rewrite is being worked on. If the rewrite would take 3 months, you have 3 months worth of new features on the main product to catch up to. If it would take 6 months, you have 6 months of features to catch up on. And since you don’t know how long it will take just to reach current parity, you can’t predict how far in the hole you’re going to be when it’s “done”, which means it adds ANOTHER layer of unknown time into the mix. Maybe adding those 6 months of features takes you 5 months, so when you’re done you’ve got another 5 months to catch up on. That 5 months of work takes you 3 months to complete, so you have another 3. You’re basically asymptotically approaching done. And remember, the velocity of the “main product” team will be affected by the loss of resources who peel off to do the rewrite, so you have little sense of the velocity of not one, but both teams. If you know your car’s speed, you can predict when it will pass a landmark - but you can’t possibly know when it will pass another moving car if you don’t also know that car’s speed perfectly. 
If you know neither car’s speed, you’re utterly done for.</p> <figure class="image alignleft captioned"><a href="https://www.flickr.com/photos/8047705@N02/5463789169/in/photolist-9jPmax-9DDYgg-nuZtxE-qhFho-ejtid-6c7xeY-ze5cZ-nmKFqr-b1AcSK-9FVdCx-b1AhTZ-b1A75H-dZkGN1-nB5JZn-qhFau-2DKnb-f4N574-cBqwpb-dD7wZD-5cTVaL-zCEF8-9F679e-ogasoM-aDd8E1-9bBdFG-4LeSwt-aDd8s9-J1y1LF-aD9gKD-Curwec-8MGTB8-9cP1vZ-dfCV95-HHBNMQ-oqYWpW-CUmf1-6Skbcz-5SptQa-5qD4Md-4mtSKC-eftGLt-8P76u6-oHB4w9-F8KKsW-7pZA7u-9XRFMM-tPAFC-7xYyVJ-6YmJXf-9BVWnR"><img src="http://www.rodhilton.com/assets/sorry.jpg" /></a><figcaption><p class="caption">Go back to start</p></figcaption></figure> <p>Moreover, from an engineering standpoint this is a terrible idea. Everyone likes doing greenfield work because it’s new and exciting, but you have to ask, why do the engineers want to avoid maintaining and refactoring the existing product? Is the codebase such a spaghetti mess that it’s too difficult to add anything, so the team wants to try again from scratch? <strong>Who the hell do you think made that dumpster fire in the first place?</strong> Why on earth would that same team suddenly do it right the second time around? Especially when under the pressure of “we have to get caught up” and the time-pressure of the company’s primary software products being frozen or at least slowed while the team develops it? It’s even MORE likely that corners will be cut and quality will suffer, not less likely.</p> <p>Refactoring the codebase is almost always the right way to go. Take the awful parts that you want to rewrite and slowly but surely refactor them into the clean codebase you want. It might take overall longer to be “done” with the effort, but the entire time it’s happening the main product is still in active development without the “two cars racing” situation. Refactoring code is, though slower, also easier to do than rewriting it from scratch, because you’re able to do it in small steps with (hopefully) the support of a huge test suite to ensure you don’t break anything. <strong>Since refactoring is easier than rewriting, any team that says “it’s too hard” to the idea of refactoring the existing codebase instead of rewriting it is inherently not good enough to do the rewrite.</strong> The end result will actually be worse.</p> <h2 id="exceptions">Exceptions</h2> <p>There are a couple noteworthy exceptions to this. One, when the reason for the rewrite is a complete change in technology, specifically the language of implementation. If you’re working with Java and want to rewrite in Scala or Clojure, the team should be able to refactor piece by piece since it all compiles to the same bytecode. However, if the team needs to move from a dead technology such as ColdFusion to something else like .NET, a full rewrite is the only way to go. This may also apply in the case of using a prototyping technology to develop the first iteration of a product, only to discover that there’s no way to make the system scale, such as in the case of Twitter’s abandonment of <a href="http://www.gmarwaha.com/blog/2011/04/11/twitter-moves-from-rails-to-java/">Rails in favor of Scala</a>. Not every company has the resources to <a href="http://readwrite.com/2010/02/02/update_facebook_rewrites_php_runtime_with_project/">develop a new PHP runtime</a> just to avoid rewriting their codebase in something other than PHP, sometimes you have to bite the bullet and pick different technology.</p> <p>Another exception is when you find yourself in an “over the wall” situation. 
Perhaps a team of contractors or consultants or offshore engineers was hired to develop the first iteration of a project, and then the codebase was tossed over the wall to another team to maintain. In this case, the new team may in fact be qualified to both refactor AND rewrite the codebase, and may simply decide the codebase as-is is too much of a mess to bother with and do a rewrite. In this instance, I still would encourage exploring every possible opportunity to refactor first, but believe me when I say I’ve been on the receiving end of these codebase bombs enough to fully appreciate that sometimes you just need to rewrite the whole thing.</p> <p>One more exception: if your “product” is mostly just a collection of microservices and you’re talking about rewriting some of them, that’s another story. In the land of microservices, rewriting a service essentially <em>is</em> refactoring, and presumably you have a collection of integration-style tests against each microservice, so a rewrite can be done relatively quickly and relatively safely. Even if you want to rewrite all of the services, you’re able to do it one at a time - this is one of the big advantages of microservice architectures.</p> <h1 id="choose-boring-technology">Choose Boring Technology</h1> <p>I really can’t say this any better than Dan McKinley’s original post <a href="http://mcfunley.com/choose-boring-technology">Choose Boring Technology</a>. In it, McKinley argues that every team or company should start out with three innovation tokens. You can spend these tokens whenever and however you please, but they don’t replenish quickly. Every time you pick an exciting or buzzwordy or cutting edge technology instead of an old standard, you spend a token.</p> <p>Relational Databases are boring. Java is boring. jQuery is boring. Apache is boring. Linux is boring. Tomcat is boring. Choose something “cool” instead of something boring, and you’ve spent an innovation token. Boring technology is boring because it’s <em>known</em>, not because it’s <em>bad</em>. Its failure modes are understood, and it probably has a host of libraries and support tools that make it easier to live with in the long term.</p> <p>There’s nothing wrong with Java, tons of scalable applications have been built on Java, and “it’s boring” isn’t a good enough reason to choose something else. If your team truly feels like Scala or Clojure or Erlang or whatever is the right tool for the job, by all means use it, but that’s one innovation token spent. Pick MongoDB over MySQL or Oracle and you’ve got one left. Any time you COULD use technology you’re already using (“our other codebase is .NET”) but decide to pick something new instead, you spend a token.</p> <p>Boring Technology is easy to pick up, easy to research, easy to debug, and frankly easy to staff for. I’m sure the engineering team is happy to pad their resumes with cool buzzwords while simultaneously making themselves irreplaceable, but is that really the best thing for the product and the company?
When boring technology fails you, there are stacks of books and internet forums available to assist you - there’s nothing worse than the feeling of excitement you get when you search for your error message and find that someone else has had the EXACT same problem as you before, only to be followed by the crushing blow of zero replies.</p> <figure class="image aligncenter"><a href="https://speakerdeck.com/mcfunley/choose-boring-technology"><img src="http://www.rodhilton.com/assets/boringtechnology.png" /></a></figure> <p>I’ve worked plenty of jobs where the team was building plain old Java Web Applications using Spring, backed by MySQL or Oracle databases. You know what? Those products worked just fine. Did the teams have the <em>most</em> fun in the world writing that code? No, probably not, but we got the job done and the products performed quite well (and were easy to fix when they didn’t). A buddy of mine is fond of watching engineers pick and choose cool technologies out of the pool of the latest-and-greatest, only to remind us that he worked on a 911 call routing application written in Java with a MySQL database, and it ran just fine saving tons of lives.</p> <p><span data-pullquote="It's not about how much fun I have. " class="right"></span></p> <p>At my current gig, we decided to build a 150,000-line codebase using Scala. Scala seemed like the right tool for the job, given the particular constraints we had about scalability and throughput in the system. I like Scala a lot, and there’s no doubt that we’ve made tremendous productivity gains by utilizing features exclusive to Scala, but if I’m truly honest with myself did we actually make an overall <em>net</em> productivity gain? When you factor in time lost trying to understand confusing code, time lost by the compiler doing a <a href="https://wiki.scala-lang.org/display/SIW/Overview+of+Compiler+Phases">twenty-pass compilation</a> (holy shit), and time lost by having to manually perform refactorings that our IDEs couldn’t automate due to weak tooling support, I’m not actually sure we came out ahead. Especially given Java 8’s functional programming features, I’m not sure I’d bother picking Scala over Java 8 today, as much fun as I have working with it. It’s not about how much fun I have.</p> <p>Ultimately, it’s really not about me or how much I enjoy working with particular tools and technologies. My job isn’t to have a blast, hell it’s not even really to “write code” - my job is to solve business problems, and it so happens the best tool I’m most competent using for that is code. It’s important to stay up to speed on the latest and greatest technologies so that you as an engineer have the knowledge to know when it’s time to spend an innovation token, but honestly I think most of that effort should be relegated to conference attendance, reading, and personal github accounts. Don’t make company decisions based on how many buzzwords you can add to your resume.</p> <h2 id="inventing-languages">Inventing Languages</h2> <p>I’d like to add that “writing your own programming language” should be worth four innovation tokens all on its own. If you develop an in-house programming language, you’d better have a staggeringly good reason. Good programming languages are hard to write, and unless you have a number of Computer Science PhDs with specializations in Programming Language Design and Implementation on the team, chances are all you’re actually doing is writing an overly complex DSL. 
The kind of thing whose compiler/transpiler/transliterator fails with “syntax error somewhere” in the event of a mistyped character, rather than a helpful diagnostic and a line number.</p> <p>Don’t create your own programming language. Your language will be weak, your tools will be poor, and language support within other tools will be nonexistent. You probably aren’t going to properly staff the design and support of the language you’ve created. Unless you have an entire team of people devoted exclusively to maintaining that language and writing Eclipse plugins for it or whatnot, your technical debt is so crater-like that you can’t even tell you’re standing in a hole because it extends past the horizon. Whatever huge productivity gains you think your new language is offering your team, they’ll be canceled out and then some.</p> <p><strong>99 times out of 100, a new language isn’t what you want to build, but a library or a framework is</strong>. By all means, develop those in house if need be (but staff their development). Unless you’re developing a language as part of your core business, like Apple developing Swift, don’t do it.</p> <h1 id="will-you-understand-this-at-3am">Will You Understand This at 3AM?</h1> <p>Frequently John Carmack is cited as an example of an eccentric genius, the kind of guy who is way ahead of his time. I have to admit, I’m also in awe of a great deal of what he’s done with code. Take this square root function he wrote for Quake III arena:</p> <div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">float</span> <span class="nf">Q_rsqrt</span><span class="p">(</span> <span class="kt">float</span> <span class="n">number</span> <span class="p">)</span> <span class="p">{</span> <span class="kt">long</span> <span class="n">i</span><span class="p">;</span> <span class="kt">float</span> <span class="n">x2</span><span class="p">,</span> <span class="n">y</span><span class="p">;</span> <span class="k">const</span> <span class="kt">float</span> <span class="n">threehalfs</span> <span class="o">=</span> <span class="mi">1</span><span class="p">.</span><span class="mi">5</span><span class="n">F</span><span class="p">;</span> <span class="n">x2</span> <span class="o">=</span> <span class="n">number</span> <span class="o">*</span> <span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="n">F</span><span class="p">;</span> <span class="n">y</span> <span class="o">=</span> <span class="n">number</span><span class="p">;</span> <span class="n">i</span> <span class="o">=</span> <span class="o">*</span> <span class="p">(</span> <span class="kt">long</span> <span class="o">*</span> <span class="p">)</span> <span class="o">&amp;</span><span class="n">y</span><span class="p">;</span> <span class="c1">// evil floating point bit level hacking</span> <span class="n">i</span> <span class="o">=</span> <span class="mh">0x5f3759df</span> <span class="o">-</span> <span class="p">(</span> <span class="n">i</span> <span class="o">&gt;&gt;</span> <span class="mi">1</span> <span class="p">);</span> <span class="c1">// what the fuck? 
</span> <span class="n">y</span> <span class="o">=</span> <span class="o">*</span> <span class="p">(</span> <span class="kt">float</span> <span class="o">*</span> <span class="p">)</span> <span class="o">&amp;</span><span class="n">i</span><span class="p">;</span> <span class="n">y</span> <span class="o">=</span> <span class="n">y</span> <span class="o">*</span> <span class="p">(</span> <span class="n">threehalfs</span> <span class="o">-</span> <span class="p">(</span> <span class="n">x2</span> <span class="o">*</span> <span class="n">y</span> <span class="o">*</span> <span class="n">y</span> <span class="p">)</span> <span class="p">);</span> <span class="c1">// 1st iteration</span> <span class="c1">// y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed</span> <span class="k">return</span> <span class="n">y</span><span class="p">;</span> <span class="p">}</span> </code></pre></div></div> <p>But notice line 10, <code class="language-plaintext highlighter-rouge">i = 0x5f3759df - ( i &gt;&gt; 1 );</code>? It’s easy to find, because it’s elucidated with the helpful <code class="language-plaintext highlighter-rouge">what the fuck?</code> comment. There’s no doubt that this code is extremely clever, and it’s beyond question that it’s extremely fast. It also requires an entire <a href="https://en.wikipedia.org/wiki/Fast_inverse_square_root">2000-word Wikipedia article</a> to understand.</p> <p>In fact, Carmack himself wasn’t even the creator of this bit of wizardry, it came from Terje Mathisen, an assembly programmer who had contributed it to id Software previously. And in fact, he likely got it from another developer, who had gotten it from someone else. This is why the comment <code class="language-plaintext highlighter-rouge">what the fuck?</code> is right there - nobody understood it. And yet there it was, pasted into the Quake III engine code because it seemed to work and it was fast. Obviously this worked out for id, and <a href="https://www.youtube.com/watch?v=PcbpIntnG8c">Quake III is awesome</a>, but it probably wasn’t the wisest idea to stake their company’s product on code that nobody understood.</p> <p>Was it clever? Absolutely. <strong>But <a href="https://simpleprogrammer.com/2015/03/16/11-rules-all-programmers-should-live-by/">clever is the enemy of clear</a>.</strong></p> <p>I try not to ever write comments in my code. Comments should not be used to explain how something works, that should be apparent from the code itself. And if that means adding a few temporary variables so that their names can be helpful (or inspected while debugging), or having some comically long method names, so be it. Often people say that comments can be used to explain “why” something works instead, but frankly I find that a few unit tests for the code in question will do a better job of explaining the why than a comment ever could - at the very least, take the comment you’d write explaining why and make it the name of the test. <strong>Code is for <em>what</em>, tests are for <em>why</em>. Comments are for jokes.</strong></p> <p>Obviously it’s difficult not to be proud of yourself when you’ve gotten some long method down to a one-liner (even if it is one incredibly long line) or invented some massively clever solution to a problem. And indeed, sometimes these clever tricks really are necessary to get the required performance out of a system (as in the Quake III square root example). 
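<p>To put the difference in concrete terms, here’s a deliberately tiny, made-up example - the same check written the “clever” way and then again with the trick unpacked into named temporaries:</p> <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Illustrative only - an invented utility class, not from any real codebase.
public class BitMath {

    // The "clever" version: one line, correct, and reads like a riddle at 3AM.
    public static boolean isPowerOfTwoClever(int n) {
        return n &gt; 0 &amp;&amp; (n &amp; (n - 1)) == 0;
    }

    // The same check with the trick unpacked into named steps, so every
    // intermediate value has a name you can read (or inspect in a debugger).
    public static boolean isPowerOfTwoClear(int n) {
        if (n &lt;= 0) {
            return false; // zero and negatives are never powers of two
        }
        int lowestSetBitCleared = n &amp; (n - 1); // clears the lowest 1-bit of n
        boolean exactlyOneBitWasSet = (lowestSetBitCleared == 0);
        return exactlyOneBitWasSet;
    }
}
</code></pre></div></div> <p>The first version saves a few lines and feels great to write; the second one can be read half-asleep. The hard part is knowing when the clever version has actually earned its keep.</p>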
That’s why I’ve found this heuristic so handy (hattip to <a href="http://neidetcher.com/">Demian Neidetcher</a>):</p> <p><strong>If your cell phone rings at 3AM because this code causes a production outage a year from now, will you be able to understand and reason about the code well enough to fix it?</strong></p> <p>Imagine that your job is basically on the line here: you’re now in a conference call with your boss, your boss’s boss, your boss’s boss’s boss, and the CTO. Hell, maybe the CEO is on, talking about the millions of dollars in lost revenue every minute the product is offline. Your heart is racing from being startled awake, and your eyes are barely able to focus enough to read your laptop screen. Do you <em>really</em> want this to be what comes into focus in the middle of the night?</p> <div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">(</span><span class="n">n</span><span class="k">:</span> <span class="kt">Int</span><span class="o">)</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="mi">2</span> <span class="n">to</span> <span class="n">n</span><span class="o">)</span> <span class="o">|&gt;</span> <span class="o">(</span> <span class="n">r</span> <span class="k">=&gt;</span> <span class="nv">r</span><span class="o">.</span><span class="py">foldLeft</span><span class="o">(</span><span class="nv">r</span><span class="o">.</span><span class="py">toSet</span><span class="o">)((</span><span class="n">ps</span><span class="o">,</span> <span class="n">x</span><span class="o">)</span> <span class="k">=&gt;</span> <span class="nf">if</span> <span class="o">(</span><span class="nf">ps</span><span class="o">(</span><span class="n">x</span><span class="o">))</span> <span class="n">ps</span> <span class="o">--</span> <span class="o">(</span><span class="n">x</span> <span class="o">*</span> <span class="n">x</span> <span class="n">to</span> <span class="n">n</span> <span class="n">by</span> <span class="n">x</span><span class="o">)</span> <span class="k">else</span> <span class="n">ps</span><span class="o">)</span> <span class="o">)</span> </code></pre></div></div> <p>Yes it’s clever, yes it’s fast, congratulations on how smart you are. But your company code repository isn’t the place to show off your l33t coding ski11z, do that shit in your personal github account. You’re not being paid to fluff your e-peen, you’re being paid to solve the company’s business problems, and that means writing something that can be understood by the other people they hired. Code’s primary purpose is to be read by other human beings (<a href="https://mitpress.mit.edu/sicp/front/node3.html">and only incidentally for machines to execute</a>), otherwise we’d all be writing directly in machine language. So if this future version of yourself won’t understand the code just from being tired, what chance does the dumbest person on your team have of understanding it? Stop showing off, your job (and maybe even your employer’s future) may someday depend on it.</p> <h1 id="deliver-working-software-early-and-often">Deliver Working Software Early and Often</h1> <p>I realize this is just a rewording of a standard part of the <a href="http://www.agilemanifesto.org/">Agile Manifesto</a>, and I could just as easily say “Be Agile!” here.
But I think the truth is Agile has come to mean a lot of different things to a lot of different people, and has become a term so overloaded and hijacked that it’s effectively become <a href="https://pragdave.me/blog/2014/03/04/time-to-kill-agile/">useless as a phrase</a>.</p> <p>I like most of the ideas of the Agile Manifesto, but I think the most important thing to take away from it is the unparalleled value of getting working software into the hands of users as quickly and frequently as possible. I absolutely detest when features are held back so that they can be released in a “big bang” to really wow and excite users (hey Product Owners, your users really don’t care as much as you think, you’re just building a thing they’re forced to use to accomplish something). As long as a feature actually works end to end, get it into the hands of users and solicit feedback right away; every day you keep working code behind a gate is a day you give your competitors to steal users away from you. It’s also a day that you are effectively lying to your users - the most important people to your software - about what your product is capable of doing.</p> <p>I despise long-running feature branches in version control as well, almost any time you want to make a branch I think it’s better to make a feature flag that people (specifically, product owners) can turn on and off at will. Long-running branches are incredibly susceptible to <a href="https://en.wikipedia.org/wiki/Ninety-ninety_rule">the 90/90 rule</a>. And if two subteams wind up creating simultaneous long-running branches off the same mainline trunk, pack it in, you’re done for.</p> <p>Every “big bang” release I’ve been a (reluctant) part of has ended in some form of failure. People think that the software is mostly done and then the effort spins its wheels at the end, trying to “harden” the release and remove bugs. Or the software is finally delivered only to discover that <a href="https://en.wikipedia.org/wiki/Pareto_principle">80% of the users are only using 20% of the features</a>, meaning that a more targeted, earlier release of those top 20% features would have been a far better use of engineering time and resources. The other 80% is now just cruft in the codebase, making it more difficult to add features later on, and nobody is using it.</p> <h2 id="plans-are-the-opposite-of-working-software">Plans are The Opposite of Working Software</h2> <p>I think a corollary to this rule is, don’t sell your users on non-working software. I really hate the tendency for “marketing” to <em>need</em> delivery dates on software features so that they can start selling the features now, a situation I’ve seen at company after company. Don’t try to sell users on features you plan on delivering, even if you’re nearly certain about when those features will be done (but, hint, you’re probably less certain than you think). That’s selling vaporware, anything can change between now and then causing those features to be shelved or to not work properly. 
Instead, deliver working software early and often, and let the marketing folks sell users on what features are actually <em>done</em>, because more stuff will actually <em>be</em> done due to the team not wasting tons of time coming up with estimates (<a href="https://www.happybearsoftware.com/all-estimates-are-still-lies">read: lies</a>).</p> <center> <blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Just start referring to “estimates” as lies.<br /><br />“how long will that take?”<br />“well, if I had to lie, a week?”</p>&mdash; Trek Glowacki (@trek) <a href="https://twitter.com/trek/status/636286667087851520">August 25, 2015</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> </center> <p>Obviously sometimes there are occasions where people need some sense of how long something will take, most notably when the company is deciding between two different features to implement and they’re performing an analysis based on their cost (though in my experience, rarely does this happen and usually both features are requested anyway). But for the most part, using some roadmap or a plan to inform the company on how to sell their products is a mistake - give engineers the time to properly implement features well, and then when the features are done sell people on them. And remember, <a href="http://www.bloomberg.com/news/articles/2016-05-18/this-5-billion-software-company-has-no-sales-staff">good software sells itself</a>.</p> <h1 id="part-2">Part 2…</h1> <p>I split this list into two posts for really no good reason aside from length. If you want more, check out <a href="http://www.rodhilton.com/2016/06/20/guidingprinciples-part2/">Part 2</a>.</p> I find that I repeat myself often at work. There are a handful of things I say so often when discussing decisions that I’ve been called out for it on occasion for acting like a broken record. Wed, 15 Jun 2016 00:00:00 +0000 http://www.rodhilton.com/2016/06/15/guidingprinciples-part1/ http://www.rodhilton.com/2016/06/15/guidingprinciples-part1/ Programming work career principles #Programming #Work #Career #Principles Star Wars Machete Order: Update and FAQ <p>Wow, this <a href="http://www.rodhilton.com/2011/11/11/the-star-wars-saga-suggested-viewing-order/">Machete Order</a> thing got big! After the post first “went viral” and got mentioned on <a href="http://www.wired.com/2012/02/machete-order-star-wars">Wired.com</a>, I started getting around 2,000 visitors to it per day, which I thought was a lot. But then in the months before <em>Star Wars Episode VII: The Force Awakens</em> was released, it blew up like Alderaan, peaking at 50,000 visitors DAILY. This year, over 1.5 million unique users visited the page. <a href="http://www.google.com/trends/explore?hl=en-US&amp;q=machete+order,+cure+for+cancer,+lindsay+lohan+naked&amp;cmpt=q&amp;tz=Etc/GMT%2B5&amp;tz=Etc/GMT%2B5&amp;content=1">It’s been nuts</a>.</p> <p>So let me start out by thanking everyone for liking and spreading the original post - I’m truly floored by how well-received the post was. Considering I wrote a nearly 5,000-word essay on Star Wars, I’m pretty amazed that it was only a handful of times someone told me I was a loser neckbeard who needs to move out of my parents’ basement and get a girlfriend (I’m married with a kid by the way). People only called for my public execution a couple times. 
On the internet, that’s the equivalent of winning an Oscar, so thanks everyone!</p> <figure class="image aligncenter captioned"><img src="http://www.rodhilton.com/assets/machete_order_popularity.png" /><figcaption><p class="caption">Holy shit!</p></figcaption></figure> <p>In all seriousness, I’ve had thousands of people tell me I “fixed” Star Wars and made the saga more enjoyable for them. I think this is an unnatural amount of praise - after all, I’m just a guy who watched some movies in the wrong order and skipped one, then wrote down why. I didn’t create fanedits or anything truly difficult like that. But at the same time, the reason I published the post in the first place was that I felt Machete Order “fixed” Star Wars for me personally, allowing me to use the relevant parts of the Prequels to make Return of the Jedi a better movie, so it’s really awesome that so many other people felt similarly. <strong>All joking aside, thank you.</strong></p> <p>Since it’s been about 4 years since the original <a href="http://www.rodhilton.com/2011/11/11/the-star-wars-saga-suggested-viewing-order.html">Machete Order</a> post, and now that Episode VII is out, <strong>I thought I’d post a small update answering a lot of the questions I’ve been asked</strong> and responding to the most common criticisms of Machete Order. <strong>There will be no spoilers of Episode VII here</strong>, though I will be talking about it a bit and I can’t predict what people will post in the comments, so if you haven’t seen it yet, make like a Tauntaun and split.</p> <!--more--> <h1 id="but-episode-i-has-maul">But Episode I has Maul!</h1> <p><em><strong>“Are you really advocating I never watch Episode I or show it to anyone?”</strong></em></p> <p>Man, no. By far the most common complaint is that I am advocating never watching Episode I, and that’s a shame because it has the best podrace/duel/song/whatever. So let me be perfectly clear, I am not advising anyone to pull their Episode I disc out of their box set and throw it in the garbage. By all means, watch Episode I. Hell, I think Episode I is probably a better movie than Episode II is.</p> <p>The point of Machete Order is not, and has never been, ignoring Episode I because it’s bad. It’s been about skipping it because it’s not relevant to Luke’s journey. Episodes II and III are, because we see how his father falls to the Dark Side, and we see elements of his path that are mirroring his father’s.</p> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/fates.jpg" /></figure> <p><strong>By all means, if you like Episode I, watch it.</strong> What I’m advocating though, is watching it sort of like an Anthology film - remember that we’re going to be getting Han Solo origin movies and Boba Fett spinoffs and Rogue One films, and so on, until Disney stops making money off Star Wars. These movies are all going to take place at different times, between different Episodes, or before all of them. If you enjoy or want to share Episode I, I say view it as an Anthology movie, sort of like a prequel to the entire series.</p> <p>In other words, when you’re watching “The Main Saga”, like maybe if you’re doing a Marathon or you’re introducing someone to Star Wars for the first time, watch in Machete Order: IV, V, II, III, VI. 
When you’re done and that “book” is closed, you can pull in whatever “Anthology” stuff you enjoy, such as the Clone Wars TV shows or movies, the Han Solo spinoff, and Episode I.</p> <p>But for some kind of contiguous viewing experience, I think Episode I should be skipped, because it provides mostly backstory to the Republic itself and political goings-on. This makes it an interesting prequel to the entire saga, but a useless distraction from Luke’s journey.</p> <h1 id="but-episode-i-has-backstory">But Episode I has backstory!</h1> <p><em><strong>“Aren’t parts of Episode I crucial pieces to the story?”</strong></em></p> <p>No, they aren’t. They might be crucial pieces to the overall Star Wars story, but not to Luke’s story, which is the whole point of Machete Order: re-centering the main saga narratively on Luke.</p> <p>Yes Sheldon, <a href="http://www.youtube.com/watch?v=keSFjjhUyVA">Chancellor Valorum is relevant</a> to understanding Palpatine’s rise to power. Yes, Qui-Gon’s belief that Anakin is the chosen one, combined with his untimely demise, is very directly relevant to understanding Anakin’s fall. Those things make for interesting backstory - but they are <strong>not relevant to Luke’s journey</strong>.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/amidalabeforesenate.jpg" /></figure> <p>People who point this out act like it’s sacrilege to (temporarily, see above!) skip Episode I because it fleshes out the Star Wars universe in various ways. So they might advocate Episodes I, II, III, IV, V, VI, VII, in order. But imagine that Disney releases an Episode 0, all about how Qui-Gon ignored some other ancient Jedi prophecy, and as a result his entire family died or something. This would provide a great understanding of why Qui-Gon is so insistent on training Anakin, and why he passes that burden to Obi-Wan. If someone were to suggest skipping Episode 0, by the logic of Machete Order detractors this would be impossible, because it’s critical to understanding Qui-Gon’s motivations. But skipping it would simply give us the regular Episode Order that we have now, which is what they’re arguing for. This could go back forever: the exact order being advocated as “correct” is suddenly missing a critical component, because it skips the hypothetical “Episode -1” and “Episode -2”.</p> <p>In other words, we don’t really need to know why Qui-Gon is so intent on Anakin being trained or why he believes so strongly in a prophecy that the rest of the council doesn’t seem to care much about. “He just does” is a perfectly fine answer for now, and it would be a perfectly fine answer if Episode 0 existed too. Similarly, we don’t really need to know all of the machinations that led to Anakin embracing the dark side; “he just does” is perfectly suitable, and in fact I argue that “he lacks proper training” is a far less sympathetic answer than “it’s very seductive”, which is what we’re left with when skipping Episode I.</p> <p>All of these movies make references to past events that we don’t ever see on screen. That’s what these big “worldbuilding” movies are all about, and why there’s a whole business for books and comics and video games to support them. We don’t <em>need</em> to see Anakin’s mother becoming a slave (not even in a movie), just like we don’t <em>need</em> to know exactly why Nute Gunray hates Padme so much in Episode II. 
It’s all backstory that fleshes things out a bit, but it’s not critical; your mind fills in the gaps, makes educated guesses, and so on.</p> <p>Bear in mind, people happily enjoyed Star Wars without ANY of the prequels for sixteen years, and nothing that happened in the original trilogy left some kind of gaping unanswered question in the minds of the audience. So really, since the whole point of Machete Order is refocusing the story on Luke, claiming that any part of the prequels is truly <strong>necessary</strong> is a bit of a hard sell. I argue that Episodes II and III make Luke’s story more enjoyable to watch in VI, but <em>crucial</em>? As in, unable to be understood without them? Nah.</p> <h1 id="but-the-prequels-arent-that-bad">But the prequels aren’t that bad!</h1> <p><em><strong>“I grew up with the prequels and they’re not as bad as you think! You’re blinded by nostalgia for the originals!”</strong></em></p> <p>I had no idea there was such a huge population of Prequel fans: people who were born in the ’90s, grew up watching the prequel trilogy, and love those movies. Many people even claim Episode I is their favorite, or that their favorite character is Jar-Jar. These people are not trolls; they genuinely love these movies. In fact, they claim that the only reason others and I dislike the prequels is that our own nostalgia for the original trilogy blinds us to their flaws.</p> <p>First, a bit of an admission: I am not a huge Star Wars <a href="https://www.washingtonpost.com/lifestyle/in-what-order-should-you-watch-the-star-wars-movies/2015/12/09/25e96e88-9cf8-11e5-a3c5-c77f2cc5a43c_story.html">“superfan”</a>; I’m just a movie geek. If I were some kind of rabid Star Wars fanboy, I would imagine I’d consider it borderline blasphemous to advocate skipping an entire film in the Gospel of Star Wars. But as a movie nerd, I’m more than happy to make whatever adjustments I think make for a better film-watching experience, because Star Wars is just a bunch of movies to me. I skip Godfather III and The Incredible Hulk too. They’re just movies.</p> <p>So, here’s my big secret: <em>I did not grow up watching Star Wars</em>. In fact, whenever I saw clips or images from the movies, I thought they looked boring (it looked like they mostly took place in the desert), and I skipped them. I liked parody movies, so I watched Spaceballs instead (a bunch). It was not until I was a senior in high school that my older sister discovered I still hadn’t seen any Star Wars movies, and insisted I watch them. This was in 1999. To reiterate: <strong>I saw Episodes IV, V, VI, and I all for the first time, the same year, when I was seventeen.</strong></p> <figure class="image alignright captioned"><img src="http://www.rodhilton.com/assets/spaceballs.jpg" /><figcaption><p class="caption">My Star Wars</p></figcaption></figure> <p>As a result, I can confidently say that I am not blinded by nostalgia for the original trilogy - they played no role in my childhood. I saw Episode I almost immediately after seeing the original trilogy, and I feel totally justified in saying that every single one of the prequel trilogy films is vastly inferior to the original trilogy entries. I think my opinion here is pretty much objective - in fact I think the younger crowd talking about the greatness of the prequels are the ones blinded by their nostalgia.</p> <p>Further, the very first versions of the original trilogy I saw were the Special Editions, because that’s what was available on VHS at the local video store at the time. 
Han never shot first for me. A cartoon Jabba always talked to Han after Greedo, Jabba’s palace has always had an extended dance number, and the entire galaxy (not just Ewoks) always celebrated the fall of the Empire, at least for me. I didn’t see the “Despecialized” versions until years and years later, and so I can once again confidently say, with total objectivity, that they are better than the special editions. The improved special effects for Cloud City and some matte improvements are welcome, but otherwise the Special Editions make the movies worse.</p> <p>Look, you can like or even love the prequels, and I totally understand why you might if you grew up watching them. But really, they are dreadfully bad movies, as far as movies go. Frankly I also think Return of the Jedi isn’t a very good movie either, it’s a mediocre movie that’s elevated by having stellar <em>moments</em>. But all three of them are parsecs better than all of the prequels (yes, even III, “the good one”).</p> <p>It doesn’t make the prequels genuinely good movies just because you liked them when you were a kid. Kids are completely capable of loving terrible movies. Kids are stupid. When I was a kid, I thought the two best movies in the world were Back to the Future and Superman III. Turns out, one of them is genuinely good, and one of them is actually dog shit.</p> <p>I am officially completely dismissing outright any criticism that my dislike for the prequels is because of my nostalgic childhood affection for the originals. I have no such childhood affection, and the prequels are dreck. Sorry.</p> <h1 id="what-about-force-lightning">What About Force Lightning?</h1> <p><em><strong>“Doesn’t Machete Order ruin the surprise that Emperor Palpatine can shoot lightning?”</strong></em></p> <p>Yep, sure does. This was something I hadn’t realized before, and was pointed out to me by a commenter. But indeed, if you’re watching the original trilogy, the first time Palpatine starts electrocuting Luke, it’s quite a shock (har har).</p> <p>With Machete Order, this surprise happens when Count Dooku just casually does it in Episode II. It’s a real shame because it doesn’t have the emotional or narrative impact here. I have no real defense for this, and I actually now consider it Machete Order’s greatest flaw.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/forcelightning.jpg" /></figure> <p>I kind of always thought the lightning wasn’t a “Sith power” so much as something that Palpatine could do because he’s so incredibly fucking evil. But no, the prequels make it clear this is just one of the video game powers you get by embracing the darkside, and they just do it willy nilly all over the place. Apparently you can just absorb it with a lightsaber if you have one handy, or without one if you’re Yoda (hint to Luke, don’t throw your lightsaber away, it has a +2 against Force Lightning!)</p> <p>It’s even kind of annoying that this is typically referred to as “force lightning” now, like it’s some kind of standard-issue thing you learn in Graduate Level Sith Academy before you get your diploma. I think it was better when it was just “that evil scary crazy lightning shit The Emperor does out of nowhere.” But alas, the prequels ruined this (have I mentioned that they suck?) and Machete Order is unable to fix it.</p> <p>The only way to preserve this twist is to simply move Episode VI two movies earlier, which is effectively just Release Order (IV, V, VI, I, II, III). 
I like the lightning surprise a lot but I think overall it’s worth giving it up in order to make the final confrontation between the Emperor, Vader, and Luke more enjoyable by watching II and III first.</p> <p>The best defense I can offer is that there’s basically no way to preserve this twist without moving the “Luke and Leia are twins” surprise back to Episode VI. And as I’ve pointed out elsewhere, it actually works far better at the end of III, when the audience has no idea they are related, but does know who they are (by watching IV and V before it). So in a sense, you kind of have to choose if you want an effective twin twist or an effective lightning twist, and I personally choose the twins.</p> <h1 id="where-do-episode-vii-and-rogue-one-fit">Where Do Episode VII and Rogue One fit?</h1> <p><em><strong>“Since Rogue One is basically a prequel to IV, should Machete Order start with it? Where do the new Episodes go? What about the Star Wars Story entries?”</strong></em></p> <p>Every time a new Star Wars movie comes out, I get a bunch of tweets and e-mails asking where it fits in Machete Order. It’s flattering people care so much, but my answer is probably going to always be the same. So I’m going to try and answer it once and for all.</p> <figure class="image aligncenter"><img src="http://www.rodhilton.com/assets/machete_order_final.png" /></figure> <p>The Force Awakens, The Last Jedi, and all of the new numbered Episodes are a chronological continuation of the story. If nothing else, they can be seen as both a fresh start for new characters, and as an epilogue to Luke’s journey. They are all in both episode order and chronological order, so there’s no reason to play musical chairs with them. I don’t see any narrative benefit to watching them out of order at all, so <strong>watch all new numbered Episodes in order after Machete Order, no matter how many they make</strong> (hint: they’ll keep making them until they stop making money).</p> <p>The “A Star Wars Story” films are a bit different, since they seem to take place at all sorts of different points in time (though, as of this writing, all of them take place between III and IV). Rogue One is particularly interesting since it literally takes place seconds before Episode IV, so a lot of people are suggesting Machete Order actually <em>start</em> with it.</p> <p>In my opinion, it doesn’t matter that Rogue One takes place right before A New Hope. <strong>The purpose of Machete Order was and always will be to refocus the story of the Original and Prequel Trilogies to be about Luke’s journey</strong>. Episodes II and III aren’t included for all their mythos and world-building, they’re included because Anakin’s fall is directly relevant to Luke’s path.</p> <p>Lots of people are claiming Rogue One is “necessary” now because it helps explain a lot of A New Hope. I disagree. The original Star Wars (Episode IV) is a timeless piece of groundbreaking cinema, and it’s been beloved by generations for nearly 40 years without Rogue One. <strong>I don’t know how much less “necessary” a film could get than having 40 years of fans being unbothered by its nonexistence</strong>. It is true that Rogue One is essentially a two-hour retcon of a 2-meter-wide “plothole”, but the film is structured as a retcon, not as a new introduction to the series. Some have suggested Rogue One should be the first film in the viewing order and I don’t see it at all. That’s like suggesting you read “Rosencrantz and Guildenstern Are Dead” before “Hamlet”. 
Rogue One doesn’t work as an introduction; it does none of the worldbuilding that A New Hope does (or hell, even that The Phantom Menace does). Frankly, the movie’s most glaring flaw is that the first 45 minutes or so are incredibly rushed and disjointed - the film’s own characters aren’t given proper introductions, let alone the entire galaxy. Characters in Rogue One talk about The Force without a single line explaining what it is. Darth Vader’s introduction is abysmal if it’s the first time an audience is seeing him, and his first scene ends with a dorky pun. No, Rogue One as the first movie doesn’t work for me; I cannot recommend strongly enough against showing the Rogue One entry first to someone who has never seen Star Wars. These Anthology films are meant to be viewed in the margins of the main Episode series; that’s where they belong.</p> <p>The main objection to what I’m saying seems to be that Rogue One should be viewed before Episode IV because it chronologically takes place before it. If there’s one thing that should be pretty obvious about Machete Order from the outset, I would think it’s the fact that I don’t care when things take place chronologically. I’d argue that this is really Machete Order’s defining characteristic, so I’m not sure where the “but chronological!” crowd is coming from here. What I care about is what works narratively, not chronologically. Lots of movies are told out of sequence because they work better narratively that way. That’s what Machete Order is all about: telling the story in a way that’s not chronological but more narratively satisfying.</p> <p>All of these “A Star Wars Story” entries are going to basically work in any order, after viewing the main Episodic content. The Han Solo movie, Boba Fett movie, Obi-Wan movie, Yoda movie, or whatever else will work better when viewed after the main Episodes than it would before the Original Trilogy. This is why <strong>I recommend viewing all the other Star Wars stuff, optionally, after the numbered Episodes</strong>. If the Episodes are up to Episode XII by the time someone wants to watch Star Wars, do Machete Order for the Original/Prequel Trilogies, then Episodes VII through XII, then any/all other Star Wars content, in any order. It’s in this category of “other Star Wars stuff” that I’d put any TV series, the Clone Wars cartoon, the Holiday Special, Rogue One, any Star Wars Anthology films and, yes, Episode I.</p> <p>So when one of these Star Wars movies comes out, this is my final answer. Machete Order, then Episodes VII through whatever, then anything else in any order.</p> <h1 id="is-machete-order-still-relevant">Is Machete Order Still Relevant?</h1> <p><em><strong>“Disney is releasing a new Star Wars movie every year - does Machete Order even still matter?”</strong></em></p> <p>Honestly, probably not. I still think that, if you’re going to watch the Original Trilogy and the Prequel Trilogy, the best way to watch them is to skip Episode I and watch in Machete Order. However, <strong>in the Disney era of Star Wars, I’m not entirely sure that viewing the Original and Prequel trilogies even matters anymore</strong>.</p> <figure class="image alignleft"><img src="http://www.rodhilton.com/assets/crawl.png" /></figure> <p>I know that this is sacrilege and it makes me sad too because I think the Original Trilogy is great, but you have to sort of look down the lens of time for a bit and realize that, at some point, there will be 50 or so Star Wars movies. 
There may well be theatrically released Star Wars movies that you don’t get to watch because you’re dead. <strong>When the 50th Star Wars film is released in theaters, will someone have to watch all 49 previous films to watch it?</strong> Remember, these movies are for kids, so you’re talking about sitting an 8-year-old down to watch over 100 hours of film and who-knows-how-many hours of television, just to go see a silly movie about laser swords and space ships.</p> <p>As of this writing, the only Episode we have after the Original and Prequel Trilogies is The Force Awakens. And yeah, that movie has Han Solo, Luke, Leia, C3P0, R2D2, references to Vader, and so on. With only 6 other Episodes (5 with Machete Order), it’s not unreasonable to sit down and marathon the other films before watching The Force Awakens. But once the Sequel Trilogy is completed and we’re at Episode IX, will the other trilogies be necessary viewing? I honestly don’t think so - I think <strong>The Force Awakens can be watched as the very first Star Wars movie a person sees, and it works just fine</strong>. Everything from previous films is either established well enough in The Force Awakens, or treated like a mysterious legend. The truth is, pretty much any of these movies can be watched alone; that’s what the opening crawl is for. And yes, Episode VIII will likely have Luke training Rey or something like that, so I would argue that Episodes VII-IX are an extension of Luke’s story and thus should be viewed after a Machete Order viewing of the other trilogies. But I have no doubt that Luke and Leia will both be dead by the end of Episode IX, so by the time Episode X is released, will someone need to watch the other trilogies? Won’t those stories be about Finn, Rey, or possibly <em>their</em> descendants, or yet another new set of characters?</p> <p>So if you’re going for a full Marathon of Star Wars, <strong>Machete Order is the way to go when covering the Original and Prequel trilogies</strong>. Or if someone loved The Force Awakens and wanted the backstory, Machete Order all the way. But I think the Original and Prequel Trilogies are going to become increasingly irrelevant as time goes on. One of the main criticisms of The Force Awakens is that it pulls so much material from the original trilogy that it seems like fanservice. I think that’s missing the forest for the trees - The Force Awakens is re-using elements from the OT because it’s a quasi-reboot. It’s intentionally giving us another Death Star, a Vader-esque character, a Luke-esque protagonist, a trench assault on a giant base, and a retread story about a secret file carried by a droid for a group of rebels trying to destroy an empire. It’s doing all that <strong>so that people who watch The Force Awakens without watching any previous Star Wars movie can enjoy those elements</strong>. The truth is, going forward, the Star Wars films you personally love will just seem boring and stupid to kids growing up on the Disney era. The Episode XIX, XX, XXI “trilogy” will be so far removed from the Original Trilogy that I promise your grandkids aren’t going to give a damn about it. Hell, I’d be shocked if they even kept numbering these suckers after 12; everything will just be “A Star Wars Story” entries.</p> <h1 id="other-stuff">Other Stuff</h1> <p>Those are all the questions I get regularly. I think I’ll update this one post with new questions I get in the future, so that my poor little Software Engineering blog doesn’t turn into Star Wars Central or something. 
If you have other criticisms of Machete Order or other questions, feel free to leave a comment. I’ve gotten over 1,000 comments on the original post, and I read them all.</p> <p>And again, thank you to everyone who made Machete Order blow up all over the place. I’ve been on the radio multiple times and <a href="http://www.npr.org/2014/03/20/291977042/theres-more-than-one-way-to-watch-star-wars">NPR</a>, and had articles that mention me by name published in <a href="http://www.nydailynews.com/entertainment/movies/star-wars-fans-debate-movie-marathon-viewing-order-article-1.2454281">New York Daily News</a>, <a href="https://www.washingtonpost.com/lifestyle/in-what-order-should-you-watch-the-star-wars-movies/2015/12/09/25e96e88-9cf8-11e5-a3c5-c77f2cc5a43c_story.html">Washington Post</a>, and <a href="http://www.cnn.com/2015/12/08/entertainment/star-wars-machete-order/">CNN</a>. The order has been mentioned on <a href="https://www.youtube.com/watch?v=effD1u4oCRE">King of the Nerds</a>, <a href="https://www.youtube.com/watch?v=keSFjjhUyVA">The Big Bang Theory</a> and <a href="https://www.youtube.com/watch?v=XP0F1eKJZ3s">Late Night with Seth Meyers</a> by one of my favorite comedians, Patton Oswalt. As far as 15 minutes of fame go, it’s been a real blast, and I have everyone who saw the post and shared it to thank.</p> <p>May the Force be with you, always.</p> Mon, 28 Dec 2015 00:00:00 +0000 http://www.rodhilton.com/2015/12/28/machete-order-update-and-faq/ #Life #Nerd #Starwars #Movies #Popular Top 10 Career-Changing Programming Books <p>When I graduated with a Computer Science degree ten years ago, I was excited to dive into the world of professional programming. I had done well in school, and I thought I was completely ready to be employed doing my dream job: writing code. What I discovered in my very first interview, however, was that I was massively underprepared to be an actual professional programmer. I knew all about data structures and algorithms, but nothing about how actual professional, “enterprise” software was written. I was lucky to find a job at a place willing to take a chance on me, and proceeded to learn as much as I could as quickly as I could to make up for my deficiencies. This involved reading a LOT of books.</p> <p>Here I reflect on my 10-year experience programming professionally and all of the books I’ve read in that time, and offer up the ten that had the most profound impact on my career. Note that these are not the “10 best” programming books. I do feel all of these books are very good, but that’s not the only reason I’m selecting them here; I’m mentioning them because I felt that I was a profoundly different person after reading each than I was beforehand. Each of these books forced me to think differently about my profession, and I believe they helped mold me into the programmer I am today.</p> <p>None of these books are language books. 
I may feel like learning to program in, say, Scala, had a profound impact on how I work professionally, but the enlightening thing was Scala itself, not the book I used to help me learn it. Similarly, I’d say that learning to use Git had a significant impact on how I view version control, but it was Git that had the impact on me, not the book that I used to teach myself the tool. The books on this list are here for the content they dumped into my brain, not just a particular technology they taught me, even if a technology had a profound impact on me.</p> <p>So, without further ado…</p> <h1 id="top-10">Top “10”</h1> <h2 id="the-pragmatic-programmer"><a href="http://www.amazon.com/The-Pragmatic-Programmer-Journeyman-Master/dp/020161622X">The Pragmatic Programmer</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/The-Pragmatic-Programmer-Journeyman-Master/dp/020161622X"><img src="http://www.rodhilton.com/assets/pragprog-238x300.jpg" /></a></figure> <p>I know, I know. Every list you’ve ever seen on the internet includes this book. I’m sorry, I wish I could be more original, but this book really is an eye-opener. <em>The Pragmatic Programmer</em> contains 46 tips for software professionals that are simply indispensable. As the name implies, the book avoids falling into any kind of religious wars with its tips; it’s simply about pragmatism.</p> <p>If you were to read only one book on this list, this is the one to read. It never goes terribly deep into anything, but it has great breadth, covering the basics that will take a recent college grad and transform him or her into someone employable, who can be a useful member of a team.</p> <p>Many programmers got into the field because they liked hacking on code in their spare time, writing scripts to automate tasks or otherwise save time. There is a set of skills one develops just to sling code that makes a computer perform specific tasks, and that exact same skillset is needed by many, many employers. But there are many people who see programming professionally as simply an extension of their hobby, and do things the same way whether they are hacking at home or at work. <em>The Pragmatic Programmer</em> permanently altered how I view programming: it’s not just extending my hobby of coding and getting people to pay me for it; there’s a fundamental line between professional coding and hobbyist coding, and I am able to see that line and operate differently depending on what side of it I’m on thanks to <em>The Pragmatic Programmer</em>.</p> <p>How groundbreaking is this book? Groundbreaking enough that it launched an entire publishing company. It’s a big deal; if you’ve somehow managed not to read it yet, go do so.</p> <p><strong>What it changed:</strong> How I view “programming” as a job instead of a hobby I get paid for.</p> <h2 id="continuous-delivery"><a href="http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912">Continuous Delivery</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912"><img src="http://www.rodhilton.com/assets/continuousDelivery-227x300.jpg" /></a></figure> <p>Releasing software is one of the most stressful parts of the job. I can’t tell you how many times in my career I’ve been part of a botched launch, or up until the wee hours of the morning on a conference call trying to get software into the hands of customers. 
When do we branch, what goes in what branch, how do we build the artifacts, what process do we walk through to get them where they need to go? It can be one of the most complex, error-prone, and difficult parts of professional programming.</p> <p><em>Continuous Delivery</em> means to do away with all of that difficulty. It describes a mindset, toolset, and methodology for completely turning releases on their head. Instead of doing them less frequently because they are difficult, do them more frequently so they’re forced to be easier. In fact, don’t just do them more frequently, do them <strong>all the time</strong>. <em>Continuous Delivery</em> describes, with real-world practical examples, how to version control all configuration, how to test integration points, how to handle branching and branch content, how to safely rollback, how to deploy with no downtime, how to do continuous testing, and how to automate everything from checkin to release.</p> <p>In a lot of ways the book describes a pie-in-the-sky ideal. It’s difficult to achieve truly continuous delivery, though GitHub, Flickr, and many other companies seem to have done so. But as the old adage goes, aim for the moon, even if you miss you’ll end up among the stars. Wait, that adage is insane, stars are further away than the moon. Who came up with that phrase? Where was I? Oh right, even if you don’t ever reach the true ideal, every step you made toward it makes deployments at your company that much better. I’ve worked in various environments where the principles of this book have been applied at different levels, and I can personally attest that there is a near-perfect linear relationship between how much you adhere to the advice in this book, and how smoothly releases go.</p> <p>I worked in an environment operating at about a 70%-level of adherence to the philosophy outlined in this book, and it was heaven. When I left that job, my new employer was at approximately 0%, and it was complete misery. I set about implementing the ideas of the book and even a 10%-level of adherence was like a fifty-ton boulder being removed from my back. It worked so well that it was like a blinding light of epiphany for co-workers, and we wound up hiring someone whose sole job it was to help get us further along. Today we’re at about 50%, and it’s easily five times better than it was at 10%, and infinitely better than at 0%. Still hoping to get to 100%, obviously, but there’s no doubt that every aspect of the book makes releases smoother and less stressful. I simply don’t think I could ever work any other way ever again, it’s like finding out you’ve been coding with a blindfold on for years.</p> <p><strong>What it changed:</strong> How I release software and bake releasability into my code.</p> <h2 id="clean-code--the-clean-coder"><a href="http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882">Clean Code</a> / <a href="http://www.amazon.com/The-Clean-Coder-Professional-Programmers/dp/0137081073">The Clean Coder</a></h2> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/cleanduology-300x191.jpg" /></figure> <p>Look at this, only a few items into my list and I’ve already cheated by including two books as a single entry. Yes, <em>Clean Code</em> and <em>The Clean Coder</em> are two separate books, but honestly they’re both very short, and very similar. Both books are about how a programmer should conduct him or herself professionally, they simply cover different aspects. 
Professional software developers communicate with their coworkers in two ways: through code and through everything else. <em>Clean Code</em> is about how you communicate with your co-workers (fellow programmers) through code itself, and <em>The Clean Coder</em> is about how you communicate verbally, or through e-mail.</p> <p>Both of these books, written by “Uncle” Bob Martin, really could easily be a single book with two large sections. Bob’s philosophy toward professional software development is honest and direct; some would even say blunt. He makes no bones about it: fail to communicate in the way he describes, and you’re bordering on professional negligence. It seems harsh but, frankly, I’m convinced. Call me a believer.</p> <p>I definitely treat my code differently in light of his suggestions from <em>Clean Code</em>. It may seem strange that I categorize <em>Clean Code</em> as a book about communication, given that it’s all about how to write code. But in the words of Abelson and Sussman, “Programs should be written for people to read, and only incidentally for machines to execute.” Machines will run code whether it’s “clean” or not, but your coworkers will only be able to understand and work with your code if it’s clean. <em>Clean Code</em> is about how to structure your code for others to read, or even for the future version of yourself to read.</p> <p>Even more than <em>Clean Code</em>, <em>The Clean Coder</em> had a profound impact on me. It drastically altered how I talk to bosses, product owners, project managers, marketers, salespeople, and other non-programmers. It advocates taking ownership of your screwups, being honest about abilities and deadlines, and being up-front about costs. Not every co-worker you encounter will appreciate the approach outlined in <em>The Clean Coder</em>, but ultimately your customers will, because your products will be better for it.</p> <p><strong>What they changed:</strong> How I conduct myself professionally.</p> <h2 id="release-it"><a href="http://www.amazon.com/Release-It-Production-Ready-Pragmatic-Programmers/dp/0978739213">Release It!</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/Release-It-Production-Ready-Pragmatic-Programmers/dp/0978739213"><img src="http://www.rodhilton.com/assets/releaseit-250x300.jpg" /></a></figure> <p>A product’s life doesn’t begin when you first create the source code repository, or write the first line of code, or even finish the first story. It begins as soon as it’s launched into production, into the hands of real users. Everything before that is just bits, just plain text files on disks. So in a lot of ways, it’s astonishing how much of the thought we put into code concerns the period of time before it’s really born.</p> <p><em>Release It!</em> places its emphasis on the real life of a program. It’s all about monitoring, health checking, logging, and ensuring that applications remain operational. It’s about baking in concern for capacity and stability from the start, and what needs to be done to keep a program operating even when there are outages, or broken integrations, or massive spikes in load. Most of all, it’s about <strong>assuming</strong> that code will fail, backend servers will die, databases will time out, and everything your software depends on will eventually go to hell.</p>
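<p>To make that “assume everything fails” mindset a bit more concrete, here is a minimal sketch of the circuit breaker, one of the stability patterns the book is best known for. This is my own toy illustration rather than code from the book, and the interface, names, and thresholds are all made up for the example: after enough consecutive failures, calls to a flaky dependency are short-circuited to a fallback for a cooldown period, so a dead integration point can’t tie up every thread in your own service.</p> <pre><code class="language-java">
import java.time.Duration;
import java.time.Instant;

// Toy circuit breaker, for illustration only: not production-ready, not thread-safe.
class CircuitBreaker {

    // Hypothetical stand-in for "a call to some remote dependency".
    interface RemoteCall {
        String run() throws Exception;
    }

    private final int failureThreshold;
    private final Duration cooldown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    CircuitBreaker(int failureThreshold, Duration cooldown) {
        this.failureThreshold = failureThreshold;
        this.cooldown = cooldown;
    }

    String call(RemoteCall dependency, String fallback) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(cooldown))) {
                return fallback; // breaker is open: fail fast instead of waiting on a dead dependency
            }
            openedAt = null; // cooldown elapsed: go "half-open" and let one call through to probe
        }
        try {
            String result = dependency.run();
            consecutiveFailures = 0; // a success closes the breaker again
            return result;
        } catch (Exception e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // too many failures in a row: trip the breaker
            }
            return fallback;
        }
    }
}
</code></pre> <p>The book goes much further than this little sketch, covering timeouts, bulkheads, and the rest of the stability patterns, but even the toy version captures the attitude: the failure case is designed for first.</p>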
<p>It’s a completely different approach to software development, and it’s completely eye-opening.</p> <p>Not to be too pejorative, but if you do enterprise application development, you probably shouldn’t write another line of code before you read this book. I consider pretty much everything I’ve written before it to be inadequate for real production use, even all the stuff currently in production. It covers patterns and anti-patterns to support (or subvert) stability as well as capacity, and the section of the book covering these topics is simply excellent. But then it goes beyond that to also discuss operational enablement. Even if you’re not into DevOps, and don’t really want to be involved in DevOps work, this book gives you the tools and tips to handle the aspects of DevOps that are the purview of pure developers.</p> <p><em>Release It!</em>’s tactics will make you your operations team’s favorite person, and greatly help cover you and your team’s ass in the inevitable case of catastrophic failure somewhere. The patterns sections alone are worth the price of admission here, and the fact that the book is chock full of even more useful content beyond them is kind of stunning.</p> <p><strong>What it changed:</strong> What I consider to be “production-ready”, and how I view Operations.</p> <h2 id="head-first-design-patterns--patterns-of-enterprise-application-architecture"><a href="http://www.amazon.com/First-Design-Patterns-Elisabeth-Freeman/dp/0596007124">Head First Design Patterns</a> / <a href="http://www.amazon.com/Patterns-Enterprise-Application-Architecture-Martin/dp/0321127420">Patterns of Enterprise Application Architecture</a></h2> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/patterns-300x189.jpg" /></figure> <p>No list like this would be complete without a book about design patterns. But where’s the famous “Gang of Four” book, you ask? Not on this list, that’s where. Honestly, GoF was a pretty groundbreaking book at the time, but I personally think the presentation of the information it contains is awful. I believe everything presented in GoF is presented better in <em>Head First Design Patterns</em>. I know that not everyone is crazy about the Head First series, and even I find the structure and layout of the book grating at times, but I think the diagrams and visuals are light years better than those of GoF.</p> <p>I also think Head First does a better job of providing <em>contextual</em> examples. While GoF provides sample code implementing the pattern, I feel that <em>Head First Design Patterns</em> provides a more valuable context for its examples, with more explanation about what the code is doing and what it’s for. This helps readers understand <em>when</em> to use specific patterns, which I feel is the most important thing to learn when learning patterns. Too often, people read their first design patterns book and immediately decide to implement as many as they can. This is the wrong approach to take with patterns, and I think Head First’s contextualization and strong visuals make it easier for readers to avoid this mistake. <a href="http://www.codinghorror.com/blog/2005/09/head-first-design-patterns.html">Jeff Atwood disagrees</a> and I can see his point, but I think overall this book is better in this regard than the classic GoF.</p> <p><em>Patterns of Enterprise Application Architecture</em> is the GoF book, but at the level of architecture rather than code. 
Like GoF, it is extremely dry, and somewhat difficult to get through cover to cover, working better as a reference book than a reading book. It does a very good job, however, of managing to still provide ample context, describing when you’d want to use (or avoid) a particular pattern. I can’t tell you how many times I’ve referenced this book.</p> <p>Patterns provide great “templates” to use when solving common problems. They need to be reached for with great care to avoid overuse, but when utilized appropriately they can give developers a great deal of confidence in the time-tested designs they outline. Additionally, they provide a shared vocabulary among developers that greatly aids communication about complex topics. Describing the exact kind of hamburger you want to a Burger King employee is difficult when you have to describe every single element of the meal, but it’s much easier when you can simply say “number 5” and you both know exactly what is being ordered.</p> <p><strong>What they changed:</strong> How I design and discuss my software, both at the code and architecture level.</p> <h2 id="working-effectively-with-legacy-code"><a href="http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052">Working Effectively with Legacy Code</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052"><img src="http://www.rodhilton.com/assets/legacycode-226x300.jpg" /></a></figure> <p>My first job out of college was replacing a developer who had left the company, as the sole responsible engineer on a massive and extremely complex codebase. Working in this codebase was terrifying: any change I made had the potential to break almost anything, and there was no way to test any changes without pushing a jar to the production system and watching it go. I checked over every change I made about a thousand times, and hand-constructed little <code class="language-plaintext highlighter-rouge">public static void main</code> classes just to instantiate classes, invoke methods, and then hand-check results. I had never heard of unit tests at this point (evidently, neither had my predecessor), so everything was handled with kid gloves.</p> <p>It wasn’t until two jobs later that I actually read <em>Working Effectively with Legacy Code</em>, which describes exactly how to deal with systems like these. The book explains how to take yourself from having no confidence in the codebase or your changes, to having complete confidence in them. It’s not simply about how to effectively manage yourself in the hole you’ve found yourself in, but exactly the tactics you can use to dig yourself out of the hole. It’s organized extremely well, indexed largely by actual complaints you might have about an inherited codebase. If I’d read this book earlier, my first job experience would have been much less stressful, and much more rewarding.</p> <p>One important thing to realize is that “Legacy Code” doesn’t refer exclusively to million-line COBOL codebases. As soon as code is written and deployed somewhere, it’s legacy code from that point forward. Every codebase you’ve worked on that you didn’t write yourself as a greenfield project is a legacy codebase, and the methodology of the book will help. Once upon a time in my career, inheriting another developer’s codebase was frightening for me, and I’d often react (as so many developers do) by immediately wanting to do a full-scale rewrite of any codebase that’s too complex for me to manage. 
Thanks to this book, I have no problem inheriting code written by others, even if they’re no longer around.</p> <p>Moving to a new job is less intimidating to me now, and I often spend the first few months of my time somewhere new simply getting the scaffolding in place to make changes confidently later on, increasing unit test coverage and breaking code into smaller and more isolated chunks. The full-scale rewrite is no longer the first tool I reach for in my toolbelt; it’s the last one, and I feel confident that I can refactor nearly any codebase into something I’m comfortable working on.</p> <p><strong>What it changed:</strong> How I feel about inherited codebases, and how I manage my confidence working with them.</p> <h2 id="refactoring--xunit-test-patterns"><a href="http://www.amazon.com/Refactoring-Improving-Design-Existing-Code/dp/0201485672">Refactoring</a> / <a href="http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/dp/0131495054/">xUnit Test Patterns</a></h2> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/refactoring_xunit-300x196.jpg" /></figure> <p>I think most recent college graduates, myself included at the time, are “cowboy” coders. I used to have all the changes in my head, and just tried to get them fed from my brain into the compiler as quickly as I could, before I forgot all the stuff I wanted to do. Today, I cringe when I think about how many characters of code I’d type between actually running or testing my software; “waiting for the compiler is just going to slow me down, let me get all the code written first and then I’ll debug it!”</p> <p>Learning the technique of refactoring, in which you change the structure of code without changing the behavior, forces a mental split. You realize that “coding” is really two jobs, and that structure and behavior should be altered and tested independently, never at the same time. Martin Fowler’s <em>Refactoring</em> is a collection of structure-but-not-behavior changes that really provides the toolset for a lot of other books on this list. <em>Refactoring</em> is so important that, depending on what language you work with, you may not even think you have to actually read it: your IDE probably supports many of the operations it describes out of the box. Nonetheless, it is a critical read, as it puts the reader in the mindset to understand the two hats they must wear as a coder, and how to intentionally change from “coding” to “refactoring”.</p> <p>Of course, refactoring goes hand-in-hand with unit testing. There are hundreds of books covering unit tests and test-driven development, but none of them that I’ve seen break things down as well as <em>xUnit Test Patterns</em>. The book covers everything a programmer needs to become a unit testing badass: how to work with mocks and stubs, how to recognize problem smells in tests, how to refactor tests, and tons more. It’s not about a specific technology or tool; it’s about unit testing best practices in general, and my attitude toward testing and the kinds of tests I write are much improved because of it.</p> <p>Refactoring and testing are essential tools in the programmer’s toolchest, and these two books cover all of the mechanics one needs to master them. <em>Refactoring</em> focuses on improving the structure of your code, <em>xUnit Test Patterns</em> focuses on improving the structure of your tests, and your code and tests form a symbiotic bond of code quality.</p>
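<p>As a tiny illustration of that two-hats split (my own made-up example, not one taken from either book): with a unit test pinning the behavior down, you can put on the refactoring hat and change the structure, such as extracting a method, while the test proves the behavior stayed put.</p> <pre><code class="language-java">
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceTest {

    // Refactoring hat: the Extract Method change below must leave this test green.
    // Coding hat: only when we deliberately change behavior should this test change.
    @Test
    public void appliesTenPercentDiscountAtOneHundredOrMore() {
        assertEquals(90.0, Invoice.total(100.0, 1), 0.001);
    }

    static class Invoice {
        // Originally one method did the math inline; after extracting applyDiscount()
        // the structure is different but the observable behavior is identical.
        static double total(double unitPrice, int quantity) {
            double subtotal = unitPrice * quantity;
            return applyDiscount(subtotal);
        }

        private static double applyDiscount(double subtotal) {
            if (subtotal >= 100.0) {
                return subtotal * 0.9; // 10% discount on orders of 100 or more
            }
            return subtotal;
        }
    }
}
</code></pre> <p>It looks trivial at this scale, but the discipline of never changing structure and behavior in the same step is exactly what makes the larger refactorings in Fowler’s catalog safe to perform.</p>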
<p>These two books are, in a lot of ways, two sides of a very important coin.</p> <p><strong>What they changed:</strong> How I approach altering existing code, and how I ensure I’ve done so correctly.</p> <h2 id="the-passionate-programmer--land-the-tech-job-you-love"><a href="http://www.amazon.com/The-Passionate-Programmer-Remarkable-Development/dp/1934356344">The Passionate Programmer</a> / <a href="http://www.amazon.com/Land-Tech-Love-Pragmatic-Life/dp/1934356263">Land the Tech Job You Love</a></h2> <figure class="image alignright"><img src="http://www.rodhilton.com/assets/career_books-300x225.jpg" /></figure> <p>Okay, I get it, I’m terrible at making these lists, and clearly should have just done a “Top 15” or something. In any case, landing that first job out of college is tough, but eventually the day comes when it’s time to move on. <em>The Passionate Programmer</em> is largely about how to find the right kind of job for you, what to look for in tech companies, and how to manage the direction of your career. It’s pretty high level, but full of extraordinarily important advice to ensure you find yourself at companies that fit you and that you fit into well. <em>Land the Tech Job You Love</em> is more about the mechanics of this process: how to write a resume, how to interview, how to negotiate a salary, and the like. This is another situation where two books are really so closely related that they’d be better as a single larger book.</p> <p>These books helped give me confidence to understand the process of hunting for and getting a job as a programmer. They completely shifted my mentality, from being the unqualified person begging a company to give me a job, to being a competent and capable engineer simply searching for a mutually beneficial fit. They changed how I view the job hunt, and how I conduct myself in interviews. After reading these books, I completely scrapped my entire resume and created a new one from scratch.</p> <p>In a lot of ways, these books inspired me to create this very blog, or at least adjust what I used it for. I view my various online profiles as part of my “brand” and I think my viewpoint shift in this regard informs a great deal of what I post here, on Twitter, and elsewhere. Yes, even all the inappropriate swearing (companies should probably know what they’re getting into with me).</p> <p>I have a lot of confidence about my career now, and I don’t live in fear of losing my job or being unable to find a new one. I think about my career differently, as a very planned and deliberate thing, not just a series of jobs. It makes me excited about my future as a programmer, rather than concerned and fearful, which is a liberating sensation.</p> <p><strong>What they changed:</strong> How I view and manage my career.</p> <h2 id="apprenticeship-patterns"><a href="http://www.amazon.com/Apprenticeship-Patterns-Guidance-Aspiring-Craftsman/dp/0596518382/">Apprenticeship Patterns</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/Apprenticeship-Patterns-Guidance-Aspiring-Craftsman/dp/0596518382/"><img src="http://www.rodhilton.com/assets/apprenticeship-patterns-228x300.jpg" /></a></figure> <p><em>Apprenticeship Patterns</em> isn’t really a patterns book as the name implies, but its content has been kind of shoehorned into the format, I assume to increase sales. Ignoring that flaw, <em>Apprenticeship Patterns</em> is the best book on Software Craftsmanship I’ve read, and I’ve read quite a few. 
I actually recommend it above <a href="http://www.amazon.com/Software-Craftsmanship-The-New-Imperative/dp/0201733862">Pete McBreen’s Software Craftsmanship</a>, because it covers pretty much everything useful from that book, but excises some of the more unrealistic or naive bits, as well as the extremely long and pointless section about salary. <em>Apprenticeship Patterns</em> is a bugfix release for <em>Software Craftsmanship</em>.</p> <p>This book was the one that made me really see the value in the Software Craftsmanship movement, and truly embrace it. I’ve written elsewhere about why I like the Software Craftsman title, but this was the book that convinced me to consider myself part of that crowd. Software Craftsmanship isn’t just about what customers can expect from you; it’s about what your fellow developers can expect from you, and what you should expect from yourself. It’s not just about writing clean code; it’s about having a clean career, if that makes any sense.</p> <p>I now place a much greater emphasis on my fellow engineers than I used to, and I care more about the team as a whole. In a lot of ways, this book takes the practices and techniques of many other books on this list and codifies them into an over-arching set of guiding principles. Software Craftsmanship as a movement can get a little culty at times, but I generally consider myself part of that cult, and I largely have this book to blame. The night time is the right time.</p> <p>What’s especially great about this book is that it’s been licensed under Creative Commons, and is now <a href="http://chimera.labs.oreilly.com/books/1234000001813/index.html">completely free on the web</a>! Cool!</p> <p><strong>What it changed:</strong> How I view my responsibilities as a professional, and what I consider my true title.</p> <h2 id="the-art-of-agile-development"><a href="http://www.amazon.com/Art-Agile-Development-James-Shore/dp/0596527675">The Art of Agile Development</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/Art-Agile-Development-James-Shore/dp/0596527675"><img src="http://www.rodhilton.com/assets/theartofagiledevelopment-228x300.jpg" /></a></figure> <p>The first job I had out of college was pure chaos. No process, no estimation, no planning, nothing. Generally someone from marketing would stop by a programmer’s cubicle and inform them that they had just sold a few thousand dollars’ worth of seats based on a feature that didn’t exist yet, so how long would it take to implement it? Being my first post-college job, I was in “sponge mode,” so I simply thought this was how it worked in the real world. It wasn’t until my next job that I was introduced to Agile Development methodologies by way of Scrum, which was like manna from heaven. I was hooked.</p> <p>My job after that was at a company that wasn’t just into Agile as a methodology; their core business was actually developing agile tools for other software shops to use. The entire company lived and breathed agile, so knowing agile was the same as understanding the core company domain. It would have been impossible to do my job without understanding agile, in more ways than one. So when I first took the job, I decided I needed to read an Agile book to make sure I knew my stuff. Based on the title, I picked up <em>The Art of Agile Development</em>. What I didn’t realize at the time was that there were a lot of different agile methodologies, and in fact this book wasn’t about Scrum; it was about XP.</p> <p>I became a die-hard XP programmer without even realizing it. 
My first exposure to “XP Programming” was a failed experiment in college that ruined it for me, I never would have knowingly bought a book on XP. But The Art of Agile Development changed how I do my job, it changed the processes I like to use when working with managers and other developers, and the practices I like to adhere to myself, such as Test-Driven Development, Spiking, Evolutionary Design, and the like. What’s ironic is that I read this book to work at an agile company, only to find most of them disliked XP, and considered themselves Scrum only.</p> <p>What’s nice about XP is that it’s pretty individualistic. You can employ XP principles as a developer while working within Scrum, Kanban, Crystal, Lean, or whatever else. In fact, that’s exactly what wound up happening: a small contingent of developers at this company including myself began working in a more XP-style within the confines of the company’s Scrum processes, and our successes wound up infecting larger and larger groups of people until pretty much the entire engineering team was working similarly. When the company switched from Scrum to Kanban, it had little effect on how we worked.</p> <p>Today, my preferred way of working is with XP-style practices within a Kanban-style process, and an enormous part of that is because of this book. I wish I had a Kanban book to recommend as well to round this part of my list out, but 100% of my Kanban experience was gained on the job, with no books of any kind. What’s more, having worked for three years at a company where agile was something bordering on a religion, I’m pretty burned out on the topic in general, so other process-centric books on my “to-read” list have found themselves migrated towards the bottom. Nonetheless, in all of my reading, The Art of Agile Development was easily the most influential book on how I like to work. This one is pretty subjective, as I’m pretty sure <strong>any</strong> good XP book would have had the same effect, but this was the one that did it for me, so I had to include it here.</p> <p><strong>What it changed:</strong> How I like to work in terms of processes and practices.</p> <h2 id="update-domain-driven-design-distilled"><a href="http://www.amazon.com/Domain-Driven-Design-Distilled-Vaughn-Vernon/dp/0134434420">Update: Domain-Driven Design Distilled</a></h2> <figure class="image alignright"><a href="http://www.amazon.com/Domain-Driven-Design-Distilled-Vaughn-Vernon/dp/0134434420"><img src="http://www.rodhilton.com/assets/domaindrivendesign.jpg" /></a></figure> <p>When I first posted this list, I gave an honorable mention to <a href="http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215">Domain-Driven Design</a> (DDD) even though I had never read it. My rationale was that, I had a hunch that once I finally read a book about Domain-Driven Design, it would have a career changing impact on me, but unfortunately I didn’t like the style (or size) of the book itself, so I gave the honorable mention. Basically I wanted a book that gave me an overview of Domain-Driven Design without being a 560-page reference book. I wanted the “Head First Design Patterns” to the original Domain-Driven Design’s “GoF”. 
Something to make the material easier to digest more quickly.</p> <p>I also said I had hoped <a href="http://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577/">Implementing Domain-Driven Design</a> (IDDD) would be that book (so much so that I pre-ordered it), but it seemed to assume the reader had already read the original DDD book, and it was even longer than the original DDD book, clocking in at 656 pages. That’s over 1,200 pages about domain modeling; it just seemed like massive overkill.</p> <p>Finally, Domain-Driven Design Distilled, by the same author as IDDD, was released. At a mere 147 pages, DDD Distilled has the stated goal of getting the information into the brains of as many people as possible without overwhelming them: give the basics and an overview of DDD in an approachable manner, and then allow readers to go deeper with DDD or IDDD later on. This was precisely what I was looking for, and the book absolutely delivered - I really loved it.</p> <p>I kind of always had this hunch that Domain-Driven Design was something of a buzzword fad, that it likely described something I was already doing regularly and that the book and the approach likely just lent formality and terminology to common-sense activities. After all, the biggest thing I see referenced seems to be this Ubiquitous Language stuff, which I think just means using the same nouns for stuff as the domain experts, which I try to do anyway, so I’m sure I’m already doing everything in the book, right? Nope. I was flat wrong, which is why I consider this book a must-read for engineers who do a lot of greenfield work, domain modeling, and architecture.</p> <p>Early on, the author provides a sort of toy example that will stay with us for the duration of the book, designing the domain for a Scrum management product. I’ve actually worked a job where I did this very thing, so this resonated fairly strongly. The book suggests that, if engineers are left to their own devices, they’ll design around code generality to reduce duplication, so there might be like a ScrumElement that could be a Product or a BacklogItem, and there’s like a generic ScrumElementContainer which could be either a Sprint or a Release. I’m just reading this section like, yeah, that’s exactly what I would do… in fact I did that. Is that bad? But the rest of the book explains exactly why that’s bad, and exactly how to do it better. Chapter after chapter, the book showed me the ways in which my approach to domain modeling was disastrously bad and how much better it could be. It also explained how, with this alternative approach, my domain would lend itself more easily to modular system design along service-oriented boundaries.</p> <p>In short, this book is excellent and completely changed how I think about and model domain objects at work. The book can sometimes be light on detail; I often found myself wanting more information, or stronger examples of exactly how something should work, but at the end of the day that’s the purpose of this book - a short introduction that encourages the reader to dive in deeper with Domain-Driven Design or Implementing Domain-Driven Design. As such, I can’t really complain about the general lightness of this book, as it’s the primary reason it was such an easily-digestible 147 pages.</p> <p>Overall, this book is a must-read; I wish it had existed years ago.</p>
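<p>To make that contrast concrete, here is a rough sketch of the two modeling instincts in code, using the names from that Scrum example. This is my own illustration of the idea, not code from the book, and the methods are invented for the sake of the example:</p> <pre><code class="language-java">
// The "generality first" instinct: one catch-all type per shape of data,
// with strings deciding what a thing really is.
class ScrumElement {
    String kind; // "product" or "backlogItem" -- the domain concept hides inside a string
    String name;
}

class ScrumElementContainer {
    String kind;             // "sprint" or "release"
    ScrumElement[] contents; // nothing stops a release from containing a product
}

// The approach the book pushes you toward: every term from the ubiquitous language
// becomes its own type, and domain behavior lives on the type it belongs to.
class Product { }

class Sprint { }

class Release { }

class BacklogItem {
    void commitTo(Sprint sprint) { /* only backlog items get committed to sprints */ }
    void scheduleFor(Release release) { /* and only backlog items get scheduled for releases */ }
}
</code></pre> <p>The second version is what the Ubiquitous Language business buys you: the class and method names use the same nouns and verbs the domain experts use, and the compiler now enforces relationships that the generic version left to convention.</p>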
<p>Overall, this book is a must-read; I wish it had existed years ago. I think back to all the times that a group of coworkers and I would gather in front of a whiteboard and model domain objects together without a single domain expert in the room. It makes me slap my head at how idiotic my approach has been for over a decade - all the ways I let database and technical concerns dictate the design of domain objects rather than the business’s needs. I can never look at this regularly-performed process the same way, which is why it joins my list of Career-Changing Books.</p> <p><strong>What it changed:</strong> How I approach domain modeling; how I design service and module separations.</p> <h1 id="honorable-mentions">Honorable Mentions</h1> <p>There are a number of books that I didn’t include in the above list, but that nonetheless had a large impact on my career. This, of course, despite the fact that I completely cheated in my Top 10 and included more than ten books.</p> <ul> <li><a href="http://www.amazon.com/Presentation-Patterns-Techniques-Crafting-Presentations/dp/0321820800"><strong>Presentation Patterns</strong></a> - Only an honorable mention because it’s not <em>really</em> about software development per se, but it’s totally altered how I do presentations.</li> <li><a href="http://www.amazon.com/Pragmatic-Thinking-Learning-Refactor-Programmers/dp/1934356050"><strong>Pragmatic Thinking and Learning</strong></a> - Learn more about your brain than you ever realized you needed to know. Though not specifically about programming, it’s a very programmer-centric view of the mind, and of how you can best work with your own mind and improve your ability to think and learn.</li> <li><a href="http://www.amazon.com/Effective-Java-Edition-Joshua-Bloch/dp/0321356683"><strong>Effective Java</strong></a> - I said I wasn’t going to include any technology-specific books, but I can’t help but mention <em>Effective Java</em> somewhere. I was programming in Java for years before reading this book, but afterwards I felt like a Java master. I almost never work with pure Java anymore, instead largely using other JVM-compatible languages, but the Java I wrote before reading <em>Effective Java</em> looks very different from the Java I wrote afterwards, and I definitely prefer the latter.</li> </ul> <p>So that’s my complete list. I obviously have many, many more books to read, and I look forward to writing another list like this one in the future after being profoundly changed for the better some more.</p> <p>Have some books you want to add? Feel like telling me one of my favorite books is inferior to one of yours? Want to yell at me for not including <a href="http://www-cs-faculty.stanford.edu/~uno/taocp.html"><em>The Art of Computer Programming</em></a> (come on, you never read that shit and you know it)? Leave a comment!</p>