<p>Absolutely No Machete Juggling: Rod Hilton&#39;s rants about software development, technology, and sometimes Star Wars (http://www.nomachetejuggling.com)</p> <h1>Programming Podcasts: A Roundup</h1> <p>A number of people have asked me what programming podcasts I listen to, and they&#39;ve generally been pretty happy with the breadth and volume of my response. I thought it would be a good idea to share all of these here on my blog in case other programmers are searching for some good podcasts.</p> <p>I generally dislike blog posts like this one because I&#39;ve discovered so many of them myself over the years, only to find that most of the links are broken, defunct, or link to podcasts that are no longer updated. But on the other hand, I haven&#39;t posted anything in over a year and this post is super easy to write so, you know, yay for low-effort content.</p> <p>As for this selection, I tend to like detailed discussions and interviews for my tech podcasts, and I&#39;ve found that I actually like listening to interviews while coding more than I like listening to music. I also tend to work with functional programming languages on the JVM with a focus on backend development, scalability, and architecture, so my selections here will bias towards those topics.</p> <!-- more --> <h1>Tech Industry</h1> <ul> <li><a href="https://www.recode.net/recode-decode-podcast-kara-swisher">Recode Decode</a> - Kara Swisher is an experienced journalist who covers various topics in the tech industry; episodes are typically long interviews with noteworthy tech personalities. Updated every other day. [<a href="http://feeds.feedburner.com/Recode-Decode">Feed</a>]</li> <li><a href="https://itunes.apple.com/us/podcast/id1355212895">Techmeme Ride Home</a> - A great replacement for the Crunch Report if you were into that, Techmeme&#39;s Ride Home is a daily summary of the biggest news in tech.
It&#39;s a great way to stay up to speed on what&#39;s going on in the industry, and episodes are generally pretty short and good for a quick drive. Updated every weekday. [<a href="http://feeds.feedburner.com/TechmemeRideHome">Feed</a>]</li> <li><a href="https://www.hanselminutes.com/">The Hanselminutes Podcast</a> - Scott Hanselman interviews notable figures from across the tech industry. Updated weekly. [<a href="https://rss.simplecast.com/podcasts/4669/rss">Feed</a>]</li> </ul> <h1>General Software Development</h1> <ul> <li><a href="https://www.oreilly.com/topics/oreilly-programming-podcast">O&#39;Reilly Programming Podcast</a> - O&#39;Reilly&#39;s interview series, frequently featuring authors of new O&#39;Reilly books as part of promotion, dealing with a variety of programming and architecture topics. Updated twice a month. [<a href="http://feeds.podtrac.com/2P68PDQSg03Y">Feed</a>]</li> <li><a href="https://www.programmingthrowdown.com/">Programming Throwdown</a> - Each episode typically features a thorough discussion of a specific topic or technology, often with book suggestions. Updated monthly. [<a href="http://feeds.feedburner.com/ProgrammingThrowdown">Feed</a>]</li> <li><a href="https://softwareengineeringdaily.com/">Software Engineering Daily</a> - Interview series with software engineers covering a variety of topics. Updated daily. [<a href="http://softwareengineeringdaily.com/category/podcast/feed/">Feed</a>]</li> <li><a href="http://www.se-radio.net/">Software Engineering Radio</a> - A bit academically focused, run by people from the IEEE Software technical magazine. Updated a couple of times per month. [<a href="http://feeds.feedburner.com/se-radio">Feed</a>]</li> <li><a href="https://nodogmapodcast.bryanhogan.net/">no dogma podcast</a> - Discussions and sometimes interviews on various topics, casting a very wide net; sometimes extremely technical dives into a technology, sometimes a higher-level industry discussion. Updated twice monthly.
[<a href="http://feeds.feedburner.com/NoDogmaPodcast">Feed</a>]</li> <li><a href="http://herdingcode.com/">Herding Code</a> - Various development topics covered, usually skews towards .NET. Updated every other month. [<a href="http://feeds.feedburner.com/herdingcode">Feed</a>]</li> <li><a href="https://www.infoq.com/the-infoq-podcast">The InfoQ Podcast</a> - Complete mishmash of various software development topics, high-level to low-level. Updated 2-4 times per month. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:215740450/sounds.rss">Feed</a>]</li> <li><a href="http://coder.show/">Coder Radio</a> - Wide variety of topics related to software engineering with great hosts. Updated weekly. [<a href="http://coder.show/rss">Feed</a>]</li> </ul> <h1>Java Development</h1> <ul> <li><a href="http://www.javapubhouse.com/">Java Pub House</a> - Very deep dives into Java topics, tools, and technologies. Updated monthly. [<a href="http://javapubhouse.libsyn.com/rss">Feed</a>]</li> <li><a href="http://enterprisejavanews.com/">Enterprise Java Newscast</a> - Discussion about the latest news in the Enterprise Java space, focused largely on the release of various tools and libraries. Updated twice monthly. [<a href="http://enterprisejavanews.libsyn.com/rss">Feed</a>]</li> </ul> <h1>Functional Programming</h1> <ul> <li><a href="https://corecursive.com/">CoRecursive w/ Adam Bell</a> - Interview series with various prominent functional programmers, discussing FP techniques and topics. [<a href="https://corecursive.com/feed">Feed</a>]</li> <li><a href="https://soundcloud.com/lambda-cast">LambdaCast</a> - Educational series on functional programming, each episode covering a different aspect of FP (Monads, Functors, Applicatives, etc.). Updated occasionally.
[<a href="http://feeds.soundcloud.com/users/soundcloud:users:239787249/sounds.rss">Feed</a>]</li> <li><a href="https://www.functionalgeekery.com/">Functional Geekery</a> - Discussion-focused podcast about functional programming topics covering a variety of languages. Updated monthly. [<a href="https://www.functionalgeekery.com/feed/mp3/">Feed</a>]</li> </ul> <h1>Web Development</h1> <ul> <li><a href="http://www.fullstackradio.com/">Full Stack Radio</a> - Heavy UI/JavaScript/Web development focus. Updated twice a month. [<a href="https://rss.simplecast.com/podcasts/279/rss">Feed</a>]</li> <li><a href="http://bikeshed.fm/">The Bike Shed</a> - Discussions on various topics, mainly dealing with Ruby, Rails, and JavaScript. Updated 2-4 times per month. [<a href="https://rss.simplecast.com/podcasts/282/rss">Feed</a>]</li> </ul> <h1>Computer Science</h1> <ul> <li><a href="https://spectrum.ieee.org/multimedia/podcasts">IEEE Spectrum Podcast</a> - Focused primarily on academic and computer science topics. Updated rarely. [<a href="http://feeds.feedburner.com/ieee/spectrumo">Feed</a>]</li> <li><a href="http://podcasts.ox.ac.uk/">Computer Science</a> - The University of Oxford&#39;s podcast on computer science research. Updated rarely. [<a href="http://mediapub.it.ox.ac.uk/feeds/137514/audio.xml">Feed</a>]</li> </ul> <h1>Architecture</h1> <ul> <li><a href="https://www.stitcher.com/podcast/software-architecture-radio">Software Architecture Radio</a> - Matt Stine&#39;s interview series with prominent engineers and authors, focused entirely on software architecture. Updated rarely. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:276322801/sounds.rss">Feed</a>]</li> <li><a href="https://www.codingblocks.net/">Coding Blocks</a> - Discussion series about best practices for engineers, strong focus on architectural concerns. Skews a bit toward .NET discussion but the topics are generally applicable in any language. Updated twice monthly. 
[<a href="http://feeds.podtrac.com/tBPkjrcL0_m0">Feed</a>]</li> <li><a href="https://www.nofluffjuststuff.com/podcast">No Fluff Just Stuff Podcast</a> - Michael Carducci, a frequent NFJS speaker, interviews various other speakers (usually at NFJS events) about a variety of topics, typically with a focus on software architecture. [<a href="http://nofluff.libsyn.com/rss">Feed</a>]</li> </ul> <h1>DevOps</h1> <ul> <li><a href="http://www.devopsmastery.com/">Devops Mastery</a> - Kind of intended as a newbie educational series, helping DevOps newcomers improve. It hasn&#39;t been updated in years but I&#39;m still including it because it&#39;s a basic tutorial series. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:79143337/sounds.rss">Feed</a>]</li> <li><a href="http://www.devopsradio.libsyn.com/podcast">DevOps Radio</a> - Interview series covering various topics related to software delivery. Updated twice monthly. [<a href="http://devopsradio.libsyn.com/rss">Feed</a>]</li> <li><a href="https://www.arresteddevops.com/">Arrested DevOps</a> - Discussion series on good DevOps practices and patterns for effectiveness. Updated twice monthly. [<a href="https://www.arresteddevops.com/episode/index.xml">Feed</a>]</li> </ul> <h1>Soft Skills</h1> <ul> <li><a href="https://softskills.audio/">Soft Skills Engineering</a> - Meant for programmers but dealing with non-programming topics relevant to work. How to deal with co-workers, promotions, giving talks, interviewing, and all sorts of other soft skills are covered. It&#39;s kind of a &quot;Dear Abby&quot; but for programmers. Updated weekly. [<a href="http://feeds.feedburner.com/SoftSkillsEngineering">Feed</a>]</li> <li><a href="https://ryanripley.com/agile-for-humans/">Agile for Humans with Ryan Ripley</a> - Focused on the software development process with an obvious slant towards Agile and Scrum. Updated weekly. 
[<a href="http://feeds.feedburner.com/agileforhumans">Feed</a>]</li> <li><a href="http://mentoringdevelopers.com/">Mentoring Developers</a> - Focused on career development for software engineers, aimed at juniors and newcomers to the field. Updated monthly. [<a href="http://mentoringdevelopers.com/feed/podcast/">Feed</a>]</li> <li><a href="https://jaymeedwards.com/">Healthy Software Developer</a> - A little &quot;self-help seminar&quot; at times, but generally good soft-skill advice for engineers. Updated weekly. [<a href="http://feeds.soundcloud.com/users/soundcloud:users:332662728/sounds.rss">Feed</a>]</li> <li><a href="http://giantrobots.fm/">Giant Robots Smashing Into Other Giant Robots</a> - A bit focused on management and business, but still a good listen about soft skills in the tech industry. Updated weekly. [<a href="https://rss.simplecast.com/podcasts/271/rss">Feed</a>]</li> </ul> <p>There are lots of other great podcasts out there, but even as I went over my OPML export to write this post I realized a few of my favorites hadn&#39;t been updated in ages. It doesn&#39;t give me a lot of hope that this very post will stay relevant for long, but it is what it is.</p> <p>Did I miss your favorite podcast? Please leave a comment; I&#39;d love to add some more feeds to my reader.</p> <p>I also left out a lot of common programming podcast categories, such as the various podcasts meant for newcomers to the field or people learning to program.
I&#39;ve been programming for nearly two decades, so these types of podcasts don&#39;t personally interest me and thus I can&#39;t vouch for any of them, but if there are any you like please leave a comment for anyone who might stumble across this post.</p> <p>Published Wed, 02 May 2018: http://www.nomachetejuggling.com/2018/05/02/programmer-podcasts/</p> <h1>A Branching Strategy Simpler than GitFlow: Three-Flow</h1> <p>Of all the conversations I find myself having over and over in this field, I think more than anything else I&#39;ve been a broken record convincing teams <strong>not</strong> to adopt <a href="http://nvie.com/posts/a-successful-git-branching-model/">GitFlow</a>.</p> <p>Vincent Driessen&#39;s post &quot;<a href="http://nvie.com/posts/a-successful-git-branching-model/">A successful Git branching model</a>&quot; -- or, as it&#39;s become commonly known for some reason, &quot;GitFlow&quot; -- has become the de facto standard for how to successfully adopt git into your team. If you search for <a href="https://encrypted.google.com/search?q=git+branching+strategy">&quot;git branching strategy&quot; on Google</a>, it&#39;s the number one result. Atlassian has even adopted it as one of their <a href="https://www.atlassian.com/git/tutorials/comparing-workflows#gitflow-workflow">primary tutorials</a> for adopting Git.</p> <p>Personally, I hate GitFlow, and I&#39;ve (successfully) convinced many teams to avoid using it and, I believe, saved them <a href="http://endoflineblog.com/gitflow-considered-harmful">tremendous headaches</a> down the road. GitFlow, I believe, leads most teams down the wrong path for how to manage their changes. But since it&#39;s such a popular result, a team with no guidance or technical leadership will simply search for an example of something that works, and the blog post mentions that it&#39;s &quot;successful&quot; right in the title, so it&#39;s very attractive.
<strong>I&#39;m hoping to possibly change that with this post, by explaining a different, simpler branching strategy that I&#39;ve used in multiple teams with great success</strong>. I&#39;ve seen GitFlow fail spectacularly for teams, but the strategy I outline here has worked very well.</p> <p>I&#39;m dubbing this <strong>Three-Flow</strong> because there are exactly three branches. Not four. Not two. Three.</p> <p>First, a word of warning. This is not a panacea. This will not work for all teams or all kinds of development work. In fact, off the top of my head, I don&#39;t believe it would work well for 1) embedded programming, 2) shrinkwrap release software, or 3) open source projects. <strong>Basically Three-Flow works when:</strong></p> <ol> <li><strong>Everyone committing to a codebase works together.</strong> If not on the same team, at least at the same company. If you&#39;re taking code from external developers via GitHub or something, this won&#39;t work. Everyone making commits is &quot;trusted.&quot;</li> <li><strong>The product can be replaced live with another version without user awareness</strong>. In other words, hosted web applications and SaaS offerings.</li> </ol> <h1>What&#39;s wrong with GitFlow?</h1> <p>In brief, <strong>the primary flaw with GitFlow is feature branches</strong>. Feature branches are the root of all evil; pretty much everything that results from using feature branches is terrible.
If you take nothing else away from this post, or hell, even if you stop reading entirely, please internalize an utter disgust for feature branches.</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/boo_feature_branches.png" width='600' height='795'/></figure></div> <p>To be fair, Driessen&#39;s post specifically does say that feature branches &quot;typically exist in developer repos only, not in origin&quot;, but the graphics really don&#39;t convey that very well, including a specific image of &quot;origin&quot; which includes a pink feature branch with three commits. Moreover, I&#39;ve encountered many teams that have adopted or are considering adopting GitFlow, and none of them have ever noticed that Driessen recommends feature branches only exist on a developer&#39;s machine. Everyone I&#39;ve ever met that adopts GitFlow has long-running remote feature branches.</p> <p>There&#39;s nothing wrong with making a feature branch on your local machine. It&#39;s a good way to hop between different features you might be working on, or to have a clean <code>master</code> in case you need to make a commit to mainline without pulling in what you&#39;re working on. But I&#39;ll go further than the original GitFlow post and say <strong>feature branches should <em>never</em> be pushed to origin</strong>.</p> <p>When you have long-running feature branches, <a href="http://c2.com/xp/IntegrationHell.html">integration hell is almost inevitable</a>. Two engineers are happily working away making commit after commit to their own respective feature branches, but neither of their branches sees the other&#39;s code. Even if they&#39;re regularly pulling off mainline, they&#39;re still only seeing the commits that make it into the main branch, not each other&#39;s. Developer A merges their code into mainline, then Developer B pulls and merges theirs, but now they have to deal with tons of merge conflicts.
Developer B might not be in the best position to understand and resolve those conflicts if they don&#39;t fully understand what Developer A is doing, and depending on how long these branches have been alive, they might have tons of code to resolve.</p> <p><span data-pullquote="A developer's primary form of communication with other developers is source code. Long-running branches are silence. " class="left"></span></p> <p>Long-running feature branches are the exact opposite of what you want. <strong>A developer&#39;s primary form of communication with other developers is source code</strong>. It doesn&#39;t matter how often you have stand-up meetings; when it comes to the central method of communication, <strong>long-running branches represent dead silence</strong>. <a href="https://blog.newrelic.com/2012/11/14/long-running-branches-considered-harmful/">Long-running branches are the worst</a>.</p> <p>Feature branches also scale terribly. You can get away with one developer having a long-running feature branch, but as your team grows and you have more and more engineers in the same codebase, each pair of developers running feature branches is failing to communicate effectively about their work. If you have a mere 8 engineers each running their own feature branch, you have $$\frac{8 \times 7}{2} = 28$$ different failed communication lines. Add another engineer and it&#39;s 36 missed lines of communication.</p> <h2>Use Feature Toggles</h2> <p>Instead of using feature branches, use <a href="https://www.martinfowler.com/articles/feature-toggles.html">feature toggles</a> in your code. Feature toggles are essentially boolean values that allow you to not execute new code that isn&#39;t ready for production while still sharing or possibly even deploying that code.
It looks in code exactly as you might expect:</p> <div class="highlight"><pre><code class="language-java" data-lang="java">if (newCodeEnabled) {
    // new code
} else {
    // old code
}
</code></pre></div> <p>The old code will continue executing until the <code>newCodeEnabled</code> toggle is flipped. These toggles can be implemented through a config file or even some kind of globally accessible boolean, though in my experience the best way is to use an external config like <a href="https://www.consul.io/docs/agent/options.html">Consul</a> or <a href="https://zookeeper.apache.org/">Zookeeper</a> so that features can be toggled on or off without requiring a redeployment. <strong>Product owners and other stakeholders love being able to view a dashboard of toggles and turn features on and off without asking developers.</strong></p> <p>If two developers are working from the same branch but using different feature toggles, the chances for a conflict are far lower. And since they&#39;re working off the same branch, they can pull and push multiple times per day to stay in sync. At a bare minimum, developers should pull at the start of the day and push at the end of the day, so that no two local repositories are out of sync by more than a workday.</p> <p>Automated tests should be written for the cases where the toggle is both on and off. This basically means that when a new feature is being developed, the existing tests simply need their setup adjusted to run with the flag off. Then new tests get added with the flag on. The test suite ensures that the &quot;old way&quot; never breaks. Sometimes the execution path through the code can be affected by more than one toggle.
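</p>

<p>As a minimal sketch of that testing discipline (the pricing example, the names, and the 10% discount are all invented for illustration), both toggle states get exercised:</p>

```java
// Hypothetical example: new pricing behavior lives behind a toggle,
// and checks cover BOTH toggle states so the "old way" never breaks.
public class ToggleStates {
    static long priceCents(long baseCents, boolean newPricingEnabled) {
        if (newPricingEnabled) {
            return baseCents * 90 / 100; // new code path: 10% discount
        } else {
            return baseCents;            // old code path: unchanged behavior
        }
    }

    public static void main(String[] args) {
        // Toggle off: existing behavior is preserved.
        if (priceCents(1000, false) != 1000) throw new AssertionError("old path broken");
        // Toggle on: the new path is tested too.
        if (priceCents(1000, true) != 900) throw new AssertionError("new path broken");
        System.out.println("both toggle states covered");
    }
}
```

<p>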
If you have two toggles that intersect in some way, you need 4 groups of tests (both off, both on, and both variants of one on and one off). Again, this ensures that two developers working in the same area of the code are regularly seeing each other&#39;s changes and integrating constantly. Code coverage tools can easily tell you if you&#39;re missing a potential path through the code.</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/toggle.jpg" width='300' height='223'/></figure></div> <p>Feature toggles can also be expanded to be more dynamic. Rather than simply being booleans, you could build your system so that toggles depend on the status of users, allowing users or groups of users to &quot;opt in&quot; to a beta program that gives access to bleeding-edge features as they&#39;re developed, to solicit customer feedback. Toggles could be dependent on geographic location or even a dice roll, allowing for A/B testing and canary releases when features are ready to be turned on.</p> <p>When the feature is finished and turned on in production, you can schedule a small cleanup task to delete the old code paths and the toggle itself. Or you can leave the code in place if it&#39;s a feature that may have reason to turn off again in the future - I&#39;ve left feature toggles in place that really saved the day down the road, when some major backend system was experiencing a catastrophic problem and stakeholders wanted to simply turn the feature off temporarily. If you do intend to remove the toggle, it&#39;s a good idea to schedule it into your regular process as soon as you start the feature, lest the team forget when bigger, shinier work comes along.</p> <p>I cannot overstate the value of feature toggles. <strong>I can virtually guarantee that if you start using toggles instead of branches for long-developed features, you&#39;ll never look back or want to use feature branches again</strong>.
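</p>

<p>The dynamic toggles described above can be sketched concretely. Assuming an opt-in beta list plus a deterministic percentage rollout (the class and method names here are hypothetical, not from any particular library):</p>

```java
import java.util.Set;

// Hypothetical sketch of a dynamic toggle: the decision depends on the
// user rather than on a single global boolean.
public class DynamicToggle {
    private final Set<String> betaUsers;   // explicit opt-in list
    private final int rolloutPercent;      // 0..100, the "dice roll"

    public DynamicToggle(Set<String> betaUsers, int rolloutPercent) {
        this.betaUsers = betaUsers;
        this.rolloutPercent = rolloutPercent;
    }

    public boolean isEnabledFor(String userId) {
        if (betaUsers.contains(userId)) {
            return true; // opted-in beta users always get the feature
        }
        // Deterministic per-user bucket, so a given user keeps the same
        // experience across requests instead of flickering on and off.
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < rolloutPercent;
    }

    public static void main(String[] args) {
        DynamicToggle toggle = new DynamicToggle(Set.of("alice"), 25);
        System.out.println("alice enabled: " + toggle.isEnabledFor("alice")); // true: opted in
    }
}
```

<p>Hashing the user ID, rather than rolling fresh dice on every request, is what makes canary releases stable: bumping <code>rolloutPercent</code> from 25 to 50 only ever adds users to the feature, never flips existing ones off.</p>

<p>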
In nearly every case where I gave in to a team member who pushed hard for a feature branch for one reason or another, it wound up being a massive pain later on that delayed the release of important software. I&#39;ve pretty much always regretted feature branches, and never once regretted making a feature toggle. It takes some getting used to, particularly if you&#39;re accustomed to long-running branches, but the positive impact toggles have on your team is tremendous.</p> <h1>Introducing Three-Flow</h1> <p>Alright, now that we&#39;ve gotten feature branching off the table, we can talk about the workflow that I&#39;ve used successfully on multiple teams.</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/three-flow.png" width='577' height='747'/></figure></div> <p>In this approach, all developers work off <code>master</code>. If a feature is going to need to be in development for a while, it&#39;s put behind a feature toggle, and still kept on <code>master</code> with all the other code. <strong>All commits to master are rebased</strong>. It&#39;s a good idea to set <a href="http://stevenharman.net/git-pull-with-automatic-rebase">automatic rebase on pulls</a>. If you have a local feature branch for your work, it should be rebased onto <code>master</code>; there should be no trace of the branch in origin.</p> <p>That&#39;s it. That&#39;s where all the main development work happens. You have one branch, the default <code>master</code> branch. And everyone codes there. Everything else about Three-Flow is concerned with managing releases.</p> <h2>Releasing</h2> <p>When it&#39;s time to do a release (regular cadence or whenever the stakeholders want it, your call), the <code>master</code> branch is &quot;cut&quot; to the <code>candidate</code> branch.
The same <code>candidate</code> branch is used over and over again for this purpose.</p> <p>The purpose of <code>candidate</code> is to allow a QA team to do any kind of regression testing it would like to do. Theoretically, all of the features themselves have been tested as part of accepting that the work is done. But this release candidate branch allows one last check to make sure everything is in order before going to production. The <code>master</code> branch, where all the work is done and accepted, should be tested in a production-like environment where <strong>the relevant feature toggles are on</strong>. The <code>candidate</code> branch, where work is sanity-checked before release, should be tested in a production-like environment where <strong>the relevant feature toggles are off</strong>. In other words, it should run the code the same way that production itself will, with the new toggles defaulted to off.</p> <p>To cut a release candidate, you&#39;d do this:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git checkout candidate   # assume candidate already tracks origin/candidate
$ git pull                 # make sure we're up to date locally
$ git merge --no-ff origin/master
$ git tag candidate-3.2.645
$ git push --follow-tags
</code></pre></div> <p>The reason for using <code>--no-ff</code> is to force git to create a merge commit (a new commit with two parents). This commit will have one parent that&#39;s the previous HEAD of <code>candidate</code> and one that&#39;s the current HEAD of <code>master</code>. This allows you to easily view your git history and see when branches were cut, by whom, and which commits were pulled over.</p> <p>You&#39;ll also notice we tagged the release. More on that in a bit.</p> <p>If bugs are found in the <code>candidate</code> branch as part of the testing effort, they are fixed in <code>candidate</code>, tagged with a new release tag, and then merged down into <code>master</code>.
These merges should also use the <code>--no-ff</code> parameter, so as to accurately reflect code moving between the two branches.</p> <p>When a release candidate is ready to go out the door, we update the <code>release</code> branch so that its HEAD points to the same commit as the HEAD of the <code>candidate</code> branch. Since we&#39;re tagging every release we make on the <code>candidate</code> branch, <strong>we can simply push the tag itself to be the new HEAD of <code>release</code></strong>:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git push --force origin candidate-3.2.647:release
</code></pre></div> <p>The <code>--force</code> basically means to ignore whatever else is on the origin <code>release</code> branch and set its HEAD to point at the same commit that <code>candidate-3.2.647</code> points to. Note that this is not a merge - we don&#39;t want to complicate the git history with this; really, the only reason we&#39;re even bothering with the <code>release</code> branch at all is so that we have a branch to make production hotfixes to if need be. Yes, this force push means any hotfix work in <code>release</code> would get overwritten - if you find yourself releasing new candidates to production while there is ongoing hotfix production work, your team has a serious coordination/communication issue that needs to be addressed. Either that, or you&#39;re doing way too many production hotfixes and have a major quality problem. <strong>Production hotfixes should be possible but rare</strong>.</p> <p>The reason we do a <code>push --force</code> rather than a merge is that if you do a merge, it means that the commit at the HEAD of <code>candidate</code> and the commit at the HEAD of <code>release</code> may have different SHA-1s, which isn&#39;t what we want.
We don&#39;t want to make a <em>new</em> commit for the release; we want <em>exactly</em> what was QA&#39;d, and that&#39;s the commit at the HEAD of <code>candidate</code>. So rather than create a merge, we forcefully tell git to make the tip of <code>release</code> exactly match that of the release candidate, the HEAD of <code>candidate</code>.</p> <p>Any production hotfixes that need to happen are made to <code>release</code> and then merged into <code>candidate</code> and then into <code>master</code>, all with <code>--no-ff</code>. This is quite a bit of git work for a production hotfix (2 distinct merge operations), but production hotfixes should be rare anyway.</p> <p>If you follow this workflow exactly, then when you view your git history as a graph it will look pretty much exactly like the above picture, showing exactly which commits moved between branches.</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/threeflow-history.png" width='600' height='472'/></figure></div> <p>You&#39;ll notice that the one way the above graph does NOT resemble the earlier picture is that you don&#39;t see the dotted lines pushing to <code>release</code> except the most recent one. That&#39;s because we always do a <code>--force</code> push, meaning that every time we release to production, we completely ignore what production once was. This is intentional - it doesn&#39;t matter what was on production and when; all that matters is what&#39;s on production <em>right now</em>, so we can hotfix it in case of a production emergency. The only time you&#39;ll even see the <code>release</code> branch at all on this graph is for whatever is currently in production, and whenever hotfixes were made that had to be merged into <code>candidate</code> and <code>master</code>.
This is exactly what we want: no unnecessary information adding noise to our graph.</p> <h2>Release Notes</h2> <p>You can easily generate &quot;release notes&quot; for a deployment to production. You just need to compare the tag for the current <code>release</code> branch to the tag for the current <code>candidate</code> branch.</p> <p>If you&#39;re using tags, you can do this comparison by using the tag names. It&#39;s easy to remind yourself of which tag is in production, because every time we force an update of the <code>release</code> branch pointer, we use the tag. That means that there&#39;s always exactly one tag that points to the same commit that the HEAD of <code>release</code> points to. You can find out which tag this is by running:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git describe --tags release
candidate-3.1.248
</code></pre></div> <p>So if we know that our <code>candidate</code> branch has been tagged as <code>candidate-3.2.259</code>, you can get the list of commits that make up the difference between those two tags like so:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git log --oneline candidate-3.1.248..candidate-3.2.259
</code></pre></div> <p>You could also do this if you didn&#39;t want to mess with tags. The following will always just compare what&#39;s on <code>release</code> (production) with what&#39;s on <code>candidate</code> (what&#39;s planned to go to production):</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git log --oneline release..candidate
</code></pre></div> <p>Running these commands will show you every single commit that is in the new candidate that wasn&#39;t in the previous release.
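</p>

<p>If commit subjects embed issue-tracker IDs, that commit list can be cross-indexed with the tracker automatically. A hedged sketch in Java (the Jira-style ticket pattern and the sample subjects are invented for illustration):</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract "PROJ-123"-style ticket IDs from the
// subject lines produced by `git log --oneline old..new`.
public class TicketExtractor {
    private static final Pattern TICKET = Pattern.compile("\\b[A-Z][A-Z0-9]+-\\d+\\b");

    static List<String> ticketIds(List<String> commitSubjects) {
        List<String> ids = new ArrayList<>();
        for (String subject : commitSubjects) {
            Matcher m = TICKET.matcher(subject);
            while (m.find()) {
                ids.add(m.group()); // one subject may mention several tickets
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        List<String> log = List.of(
            "a1b2c3d PROJ-42 add rate limiting",
            "d4e5f6a fix typo in README");
        System.out.println(ticketIds(log)); // only PROJ-42 is found
    }
}
```

<p>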
At my last gig, we liked to include the ticket numbers for our issue tracker in our commits, which allowed a script to cross-index this list of commits with actual work items in Jira.</p> <h2>Common Operations</h2> <p>Just to summarize a bit, here are some of the operations you might want to be able to do. All of these examples assume that your local branches are properly set up to track the remote branches, and that those local branches are up to date. If you&#39;re not sure, it&#39;s often a good idea to do a <code>git fetch</code> and then use names like <code>origin/master</code> instead of <code>master</code> to ensure you&#39;re using the origin&#39;s version of the branch in case yours is stale.</p> <h3>How do I cut a release candidate off master?</h3> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git checkout candidate
$ git pull
$ git merge --no-ff master
$ git tag candidate-3.2.645   # optionally tag the candidate
$ git push --follow-tags
</code></pre></div> <h3>How do I release a candidate?</h3> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git push --force origin &lt;tag for the candidate&gt;:release
</code></pre></div> <p>Alternatively, if you aren&#39;t using tags, you could just do:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git push --force origin candidate:release
</code></pre></div> <p>or, if you&#39;re not sure you&#39;re up to date locally:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git fetch
$ git push --force origin origin/candidate:release
</code></pre></div> <h3>How do I find which branches have a particular commit on them?</h3> <p>Often people want to know if a particular code change is currently in production or set to go out to production in the next release.
Here&#39;s an easy way to find which of the three branches a commit is on.</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git branch -r --contains &lt;sha of commit&gt;</code></pre></div> <h3>How do I find which tag a branch is pointing to?</h3> <p>Or in more accurate terms: for a given branch pointer, how do I find which tag(s) point to the same commit as the branch HEAD?</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git describe --tags &lt;branch&gt;</code></pre></div> <h3>How do I find out which commits are going to go out with a release?</h3> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git log --oneline release..&lt;tag of release candidate&gt;</code></pre></div> <p>You could also do this if you didn&#39;t want to mess with tags:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git log --oneline release..origin/candidate</code></pre></div> <h3>How do I set up the candidate and release branches for the first time?</h3> <p>You can create what&#39;s called an &#39;orphan&#39; branch with no commits to it, but you&#39;ll be unable to push it to origin to set up the remote branch until you have some kind of commit.</p> <p>Pretty much every project starts with an initial commit, usually just a readme or something. I recommend just making a branch off that commit and pushing that. What you&#39;re looking for is for the first merge commit into <code>candidate</code> to have two parents so that it shows up in logs correctly. 
So really, any commit on <code>candidate</code> will work; may as well choose the first one.</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git branch candidate $(git log --format=%H --reverse | head -1)
$ git checkout candidate
$ git push -u origin candidate</code></pre></div> <p>If you try the approach where you create a fresh orphan commit, you&#39;ll find that the first time you try to merge, git will tell you &quot;refusing to merge unrelated histories&quot;. You basically need the branches to all share a commit, so it may as well be the first commit. Word of warning though, you might get merge conflicts the very first time you actually cut a release candidate (but probably not).</p> <p>To set up the release branch for the first time, just do a release. As soon as you force push the right commit to the remote <code>release</code> branch, it will be set. You&#39;ll also want to check out a local copy of the same branch for any hotfixes you may want to do:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git branch release
$ git branch --set-upstream-to=origin/release release</code></pre></div> <h1>Questions</h1> <h2>Isn&#39;t this just the cactus model?</h2> <p>You may be wondering if Three-Flow is simply Jussi Judin&#39;s <a href="https://barro.github.io/2016/02/a-succesful-git-branching-model-considered-harmful/">cactus model</a>, an alternative to GitFlow that uses the default <code>master</code> branch for all development work.</p> <p>For the most part, yes, it is. The key difference is that Judin recommends moving commits between the <code>master</code> and <code>release</code> branches via cherry-picks. I very much recommend against that; cherry-picks are a last resort, only to be used when correcting a mistake. I prefer rebasing to merging, and I prefer merging to cherry-picking. I think it&#39;s important to be able to use merge commits to actually see when and what commits were merged, and by whom. 
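</p> <p>For example, here&#39;s one way to pull up that merge graph for all three branches (just an illustrative invocation):</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git log --graph --oneline --decorate master candidate release</code></pre></div> <p>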
Being able to pull up an accurate graph of merges is important. I only use cherry-picking when I put a commit on the wrong branch by mistake.</p> <p>The other main difference is the <code>candidate</code> branch, which I accept as something of a necessary evil. While my goal is always an always-deployable master where all commits automatically go to production, I&#39;ve found that most organizations and teams are not ready for or comfortable with that kind of deployment schedule. Most groups like to have some kind of QA buffer time, and that&#39;s basically what <code>candidate</code> provides. The goal of the team should be to remove the need for the <code>candidate</code> crutch, but in the meantime Three-Flow provides a very usable, simple branching model that generally gives teams everything they need to be successful with git.</p> <h2>Isn&#39;t this just GitFlow without feature branches?</h2> <p>I have actually explained this branching strategy to GitFlow adopters by telling them it is essentially just GitFlow, except that you don&#39;t have feature branches (all development happens on <code>develop</code>), you rename GitFlow&#39;s <code>develop</code> to <code>master</code>, and you rename GitFlow&#39;s <code>master</code> to <code>release</code>.</p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/always3.png" width='300' height='175'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>Always 3 there are. No more, no less.</p></figcaption></figure></div> <p>The motivator behind Three-Flow is simplicity. GitFlow encourages the creation of a multitude of feature branches, release branches, and hotfix branches. As a project goes on, the log can start to look impossibly complex. With Three-Flow, there are no feature branches or hotfix branches. Hotfixes simply happen on the production <code>release</code> branch. 
And instead of having multiple release branches, you have a single <code>candidate</code> branch that you just keep reusing.</p> <p>You don&#39;t need a naming scheme for your branches, because there are literally exactly three branches in origin: <code>master</code>, <code>candidate</code>, and <code>release</code>.</p> <p>Answering the question of &quot;where does my code go?&quot; is very straightforward. Is it a production hotfix? If so, it goes in <code>release</code>. Is it fixing a bug that was found while QAing the release candidate? If so, it goes in <code>candidate</code>. Anything else goes in <code>master</code>.</p> <h2>What about code reviews?</h2> <p>If you have a system where you do code reviews before commits can get into the mainline, I recommend basically adding another branch, maybe <code>review</code> or even <code>develop</code> to borrow a term from GitFlow. All regular development goes on there, and code is reviewed and then cherry-picked into <code>master</code>. Of course, it can be tricky to keep track of which commits have and have not been reviewed. </p> <p>I don&#39;t entirely love this approach; frankly, I think &quot;all code must be code reviewed&quot; might put you in the camp where Three-Flow won&#39;t work for your team, and you&#39;d be better off doing a GitHub-style pull request model like most open source projects. I&#39;ve heard <a href="https://www.gerritcodereview.com/">Gerrit</a> might be a good solution; every other &quot;GitFlow sucks&quot; post I&#39;ve ever read usually mentions it, but I must confess I&#39;ve never used it myself.</p> <h2>What about a codebase with multiple artifacts?</h2> <p>A lot of people have a single codebase that builds multiple, independently deployable artifacts. Those individual buildable artifacts need their own separate QA cycles, and different artifacts will have different version numbers in production. 
How does Three-Flow work with such a setup?</p> <p>I&#39;ve actually worked this way very recently. We had a single git repository that built multiple different artifacts that deployed independently. The solution was simple: each independent artifact only adds two branches to Three-Flow.</p> <p>You still have a single shared codebase in <code>master</code> and you use feature toggles instead of branches. But let&#39;s say you have two artifacts, foo and bar. You simply have <code>foo_candidate</code>, <code>foo_release</code>, <code>bar_candidate</code>, and <code>bar_release</code> branches. When you tag release candidates, you tag in the format <code>foo-candidate-2.1.423</code> and <code>bar-candidate-3.2.126</code>. </p> <p>Otherwise the process works exactly the same way. This scales better than you might expect; I was very recently on a large project that had 4 different independently deployable artifacts that came out of a single codebase: 8 <code>candidate</code> and <code>release</code> branches, plus <code>master</code>. Generally there was a pretty strong mapping between an individual &quot;team&quot; and one of these artifacts, so a team or a group still just worked with 3 branches.</p> <h2>Is there a way to not have to manually type so many arguments?</h2> <p>One of the weirder aspects of this flow is that pretty much every command I suggest typing into git has additional arguments.</p> <p>Any time you do a <code>merge</code>, I&#39;m asking you to do a <code>merge --no-ff</code>. When you cut a release and tag it, I suggest you <code>push</code> using <code>push --follow-tags</code> so your tag gets up to origin as well.</p> <p>You can actually set these arguments to be defaults. 
Since all merging in Three-Flow uses <code>--no-ff</code>, you&#39;re safe to run:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git config --global merge.ff no</code></pre></div> <p>If you run this, then from that point on you can simply run <code>git merge</code> without the <code>--no-ff</code> argument.</p> <p>Similarly, you can set <code>push</code> to always push locally-created tags:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git config --global push.followTags true</code></pre></div> <p>And I mentioned this up above, but it&#39;s a good idea to set your master branch to automatically rebase whenever you pull. You can do this like so:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git config --global branch.master.rebase true</code></pre></div> <p>You can actually set any new branch to automatically rebase on pulls, in case you&#39;re making local feature branches that track master:</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git config --global branch.autosetuprebase always</code></pre></div> <p>You could also leave out the <code>--global</code> from any of these commands so the configuration only applies to the specific git repository you&#39;re working in.</p> <h2>Can&#39;t I just use merging for the release branch?</h2> <p>First of all, you can do whatever you want. This is just a strategy that worked for me on multiple different teams, and I wanted to spread it around because I think it&#39;s much simpler than GitFlow.</p> <p>But moreover, yes: if you don&#39;t like the idea of doing a <code>push --force</code> to update <code>release</code> and losing some historical information, but would rather just do a <code>merge --no-ff</code>, by all means do it. 
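</p> <p>In that variant, cutting a release becomes one more ordinary merge. A sketch of what that looks like (not the flow&#39;s default, per above):</p> <div class="highlight"><pre><code class="language-output" data-lang="output">$ git checkout release
$ git merge --no-ff candidate
$ git push</code></pre></div> <p>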
This has the advantage of being fewer things to remember how to do: basically, any time you move code between the three branches, you&#39;re doing a <code>merge --no-ff</code>. </p> <p>In fact, an early version of this strategy did just that: <code>--no-ff</code> merges to <code>release</code>. It worked out fine, and reading the git history was really straightforward. The only thing I don&#39;t like about it is that it&#39;s <em>kind of</em> a fib, in that what goes out to production should be the exact same HEAD of <code>candidate</code> that went through QA, and doing a merge commit creates a brand new commit on <code>release</code> that didn&#39;t necessarily get tested. You could, of course, not do a merge commit to <code>release</code> and only do fast-forward merges. But then you sort of lose the history anyway, and there&#39;s always a chance that the branch can&#39;t be fast-forwarded and you need a merge commit anyway. And forget about rebasing onto <code>release</code>; you&#39;re pretty much guaranteed to have to work your way through a ton of merge conflicts, often very similar ones over and over as you individually resolve each commit in the release.</p> <p>For my money, doing the force pushes kind of reinforces that <code>release</code> isn&#39;t really a branch and shouldn&#39;t be treated like one. It&#39;s really just an updated pointer to production. It&#39;s basically just a series of tags, except that since it&#39;s a branch you can easily make a new commit on it for production hotfixes. To each their own though; there are definitely some simplicity advantages to always doing the same thing with <code>candidate</code> that you do with <code>release</code>. But hell, either way is preferable to using GitFlow. Have I mentioned how much I hate GitFlow? 
It&#39;s like, a <em>bunch</em>.</p> <h1>Summary</h1> <p>To summarize the main Three-Flow branching model outlined here:</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/triforce-threeflow.png" width='300' height='200'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>It's dangerous to git alone. Take this.</p></figcaption></figure></div> <ul> <li>There are three branches in origin: <code>master</code>, <code>candidate</code>, <code>release</code></li> <li>Normal development happens on <code>master</code>. All new commits are rebased.</li> <li>Features that are incomplete are put behind feature toggles, ideally dynamic toggles that can be changed without a redeploy</li> <li>To cut a release, <code>master</code> is merged into <code>candidate</code> with a <code>--no-ff</code> merge commit</li> <li>Any bugs found during a candidate&#39;s QA phase are fixed in <code>candidate</code> and then merged into <code>master</code> with a <code>--no-ff</code> merge commit</li> <li>When a candidate is released to production, it&#39;s <code>push --force</code>d to the tip of <code>release</code></li> <li>Any production hotfixes happen in <code>release</code> and are then merged into <code>candidate</code> which is then merged into <code>master</code>.</li> </ul> <p>That&#39;s really all there is to it. Like I say above, there are all kinds of development paradigms that this won&#39;t apply to, it&#39;s largely geared toward web applications. But if you think Three-Flow might work for your organization, I highly recommend giving it a shot before adopting the future headache and incomprehensible git history that is GitFlow. </p> <p><strong>In my opinion, Three-Flow is the quickest and easiest way to get up and running with a sensible branching strategy with minimal rules to follow and the fewest complexities to understand.</strong></p> <p>Tried something similar and loved it? 
Tried something similar and found an issue that you solved? Think my use of <code>--force</code> is a blasphemous use of git and I&#39;m the stupidest dumb idiot that ever ate his own boogers? Feel free to leave a comment below.</p> Sun, 09 Apr 2017 00:00:00 +0000 http://www.nomachetejuggling.com/2017/04/09/a-different-branching-strategy/ http://www.nomachetejuggling.com/2017/04/09/a-different-branching-strategy/ Software Engineering Guiding Principles - Part 2 <p>Here are five more Guiding Principles I use when making technical decisions as a software engineer. You can also check out <a href="http://www.nomachetejuggling.com/2016/06/15/guidingprinciples-part1/">Part 1</a>.</p> <p>Just as before, this is a list of principles I use when making difficult technical decisions, or mantras I use to snap myself out of being stuck - it&#39;s not really just about how I try to write good code (SOLID, DRY, etc.), although there is a little bit of that as well.</p> <h1>Perfect is the Enemy of Good</h1> <p>When it comes to designing code, I think it&#39;s better to get started as soon as possible and make changes and modifications via refactoring as needed. It&#39;s better to get something up and working quickly, rather than spending time debating in front of whiteboards about the correct way to do things. In my experience, engineers in particular have such an affinity for elegance that we can get wrapped around the axle trying to figure out the perfect, most elegant solution.</p> <p>I&#39;m not saying to write shitty code, obviously. 
It&#39;s still important to follow good design principles like <a href="https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)">SOLID</a>, the <a href="https://en.wikipedia.org/wiki/Law_of_Demeter">Law of Demeter</a>, <a href="https://en.wikipedia.org/wiki/KISS_principle">KISS</a>, <a href="https://en.wikipedia.org/wiki/Defensive_programming">defensive programming</a>, <a href="https://christiantietze.de/posts/2015/09/clean-code/">CLEAN</a>, <a href="https://en.wikipedia.org/wiki/Separation_of_concerns">separation of concerns</a>, and so on. It&#39;s just that you don&#39;t have to get every little thing perfect; it&#39;s better to build something imperfect that works, and then refactor toward perfection later.</p> <p>Remember <a href="https://en.wikipedia.org/wiki/John_Gall_(author)">Gall&#39;s Law</a>:</p> <blockquote> <p>A complex system that works is invariably found to have evolved from a simple system that worked.</p> </blockquote> <p>It&#39;s important to realize when you or your team have gotten into a state of <strong>analysis paralysis</strong>, which is one of the reasons I like Pair Programming so much - it&#39;s handy to have a second person around to recognize when you&#39;re wrapped up analyzing instead of building. Nobody really asked you to build the world&#39;s greatest, most reusable, most well-designed system on the planet. <strong>The company doesn&#39;t need the perfect solution, it just needs one that&#39;s good enough</strong>.</p> <p>There are lots of ways engineers can get gridlocked doing analysis, and it&#39;s important to recognize all of them.</p> <h2>Premature Optimization</h2> <p>Don Knuth calls Premature Optimization the <a href="http://c2.com/cgi/wiki?PrematureOptimization">root of all evil</a>. 
It can happen both in code/design, as well as architecture.</p> <p>If you find yourselves talking about caching layers, circuit breakers, or geo redundancy before building even the first version of the software, you might be getting ahead of yourself. <strong>Those things are all just as easy to add later as they are to add now</strong>, so there&#39;s no reason to get wrapped up on these concerns early.</p> <p>Obviously I&#39;m not advocating writing inefficient algorithms when an efficient one is just as easy to implement, but if the code is substantially cleaner with something less efficient, leave well enough alone and just get it working. Even dumbass bubble sort is usually good enough, and it has the advantage that you remember how it works right now without double checking anything on Wikipedia. </p> <h2>Bike Shedding</h2> <p>Otherwise known as the <a href="https://en.wikipedia.org/wiki/Law_of_triviality">Law of Triviality</a>, this is when a disproportionate amount of weight is given to trivial concerns when designing something. The term comes from the fact that teams will tend to focus on the minor issues that are easy to understand, such as what color to paint the staff bike shed.</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/sheldon-cooper.png" width='300' height='187'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>You're in my spot</p></figcaption></figure></div> <p>The more time you devote to making a decision, the more you need to periodically ask yourself &quot;does this really matter?&quot; A lot of times, it doesn&#39;t matter to anyone else on the team, it doesn&#39;t matter to your users, and it certainly doesn&#39;t matter to the company. <strong>If it only matters to you, you&#39;re probably being, you know, kind of a dork</strong>.</p> <p>An entire team can bike shed as well. 
Recognize when your team is bike shedding and stop the conversation; drive it toward the things that matter. If people keep gravitating toward the trivial, it means that there&#39;s a lack of comprehension of the difficult decisions that actually matter. You either need to stop and get everyone on the same page about the challenging stuff, or you have the wrong group of people making the decision.</p> <h2>Overengineering</h2> <p>Overengineering is premature reusability. Engineers have a tendency to want to design components to be as generic and reusable as possible; there&#39;s an old joke from Nathaniel Borenstein I&#39;m fond of:</p> <blockquote> <p>No ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. </p> <p>Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.</p> </blockquote> <p>A really great example of over-engineering is found in Bob Martin&#39;s <a href="https://smile.amazon.com/Software-Development-Principles-Patterns-Practices/dp/0135974445?sa-no-redirect=1">Agile Software Development</a>. In it, Bob Martin and Bob Koss sit down to do the <a href="http://butunclebob.com/ArticleS.UncleBob.TheBowlingGameKata">Bowling Game Kata</a>, a programming exercise where you simply write code to calculate the scores for a bowling game.</p> <p>The two engineers started talking about what classes they were going to have. There would need to be a <code>Game</code>, which of course would have 10 <code>Frame</code> instances, each of which would have between 1 and 3 <code>Throw</code> instances. This seemed natural, like how you might answer a &quot;design the object model for a bowling game&quot; question in an interview.</p> <p>But as they tried to write tests to drive out the behavior of <code>Frame</code> and <code>Throw</code>, they found that there were no behaviors to those classes. A <code>Throw</code> is really just an <code>int</code>. 
In the end, they wound up with a simple <code>Game</code> class and nothing else, with a handful of methods on it to say how many pins were hit, and a method to get the score.</p> <div class='image aligncenter' style='display:table'><figure><a href='http://xkcd.com/974/'><img src='http://imgs.xkcd.com/comics/the_general_problem.png'/></a></figure></div> <p>Don&#39;t start any large endeavor with a mind on generality and reuse. Follow the <a href="https://blog.codinghorror.com/rule-of-three/">Rule of Three</a>: design everything for single use, and you will naturally discover reusable components falling out during refactoring, after you&#39;ve done the same or similar things in multiple places.</p> <h2>Exception - Architecture</h2> <p>It&#39;s important to note that there is one exception to this idea: your code <em>design</em> can be just good enough, but your system <em>architecture</em> needs to basically be perfect from the start. This can be extremely difficult to get right, but it&#39;s important, so a little bit of analysis paralysis is somewhat forgivable.</p> <p>When it comes to code design, evolutionary design is the way to go - just build it and evolve it. But for architecture, get the team into a room with a whiteboard and hash out the details before you start building. <strong>Evolutionary design, up-front architecture</strong>.</p> <p>How do you know the difference between design and architecture? One analogy I&#39;m fond of is that architecture is strategy while design is tactics: doing the right thing vs. doing things right. That&#39;s a helpful distinction, but I find myself most fond of <a href="http://www.ibm.com/developerworks/library/j-eaed10/">Martin Fowler&#39;s definition</a>:</p> <blockquote> <p>Architecture is the stuff that&#39;s hard to change later. 
And there should be as little of that stuff as possible.</p> </blockquote> <p>Anything that would be extremely difficult to change later on is something deserving of a substantial amount of upfront analysis. The language you choose for your code is architecture, because changing it would require a full rewrite. If you&#39;re using a highly opinionated framework like Rails or Grails, or something that spreads throughout your entire codebase like Spring, that&#39;s architecture. </p> <p>If you go with microservices, lots of decisions that are typically architecture suddenly become design, because you could swap one microservice for another easily, or quickly swap out the language or framework of one service. However, now the contracts between services - which would be easy to refactor if they were all in a single codebase together as simple classes - cease being design and become architecture. And of course the decision to use microservices at all <em>is</em> architecture. </p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/architecture.jpg" width='300' height='188'/></figure></div> <p>The data store you use is likely architecture. It can sometimes be easy to swap out MySQL for Oracle if you&#39;re using a strong database abstraction layer or relying on JPA, ActiveRecord, or something similar, but as your data needs grow you&#39;ll quickly find yourself using customized queries or perhaps even stored procedures, and migrating becomes difficult. Even if you choose something like Postgres and try to keep the option open to switch to Oracle or MariaDB, you&#39;re still picking a relational database, and switching to a NoSQL store would be extremely difficult. No matter how you slice it, it&#39;s architecture.</p> <p>Public-facing APIs are a strange middle ground. Once you&#39;ve decided on the APIs, they&#39;re impossible to change without affecting your users, so they&#39;re architecture. 
However, you can introduce a new API version later fairly easily, so it&#39;s not that hard to change your mind, making it sort of design? Of course, the WAY you version the APIs in general is architecture, because if you provide no facility for versioning early on, it becomes difficult to add a new version later.</p> <p>Overall, the dichotomy is subjective, so you need to use your best judgement; what&#39;s important is that you don&#39;t spin your wheels making something perfect that could be perfected later if it can be good enough now.</p> <h1>If You Break My Code, It&#39;s My Fault</h1> <p>I&#39;ve blogged about this one before, under the more provocative title &quot;<a href="http://www.nomachetejuggling.com/2011/10/21/i-broke-your-code-and-its-your-fault/">I Broke Your Code, and It&#39;s Your Fault</a>&quot;. In fact, there was even a <a href="https://www.reddit.com/r/programming/comments/qbg9y/i_broke_your_code_and_its_your_fault/">lengthy reddit discussion</a> about it in which folks tried to decide if I was clinically insane, or just a regular moron.</p> <p>Hyperbolic title aside, I still stand by the original point. Even if someone else does something as annoying as change the interface I was depending on in my code, it shouldn&#39;t be possible for them to so thoroughly break the code I wrote without SOMETHING telling them that they did so. <strong>All it takes is one failed test to say &quot;hold up, you broke shit.&quot;</strong></p> <p>When I push code up to the shared repository, it&#39;s my job to ensure it works, not QA&#39;s. But it&#39;s also my job to ensure that a junior engineer or a new hire can&#39;t just break it without something telling him or her it happened. When I write code, I try to imagine: what would happen if some other engineer came in and modified the class I just wrote, maybe didn&#39;t understand why I was doing <code>-1</code> somewhere, and so they just removed it? Would that be an annoying thing to do? 
Sure, and I would hope that the other engineer might ask me why I was doing it if I failed to make it obvious from the code itself. But maybe this is years from now and I&#39;m not even at the company anymore, so they remove the <code>-1</code>, or they think my code sucks so they rewrote the entire function from scratch. The instant they do that, a test I wrote somewhere should fail (hopefully with an explanation of why it needed to be the way it was).</p> <p>By writing my code like this, and creating what reddit argued is <em>too many tests</em>, I am encouraging the other members of my team to embrace <strong>fearless refactoring</strong>. Don&#39;t like how I wrote something? Refactor it, and don&#39;t worry about breaking anything - I wrote enough tests to ensure that you can&#39;t. Is it possible I&#39;ll make a mistake and fail to cover something I should have? Of course it is, but when this happens, the refactor-er in question did me a favor by highlighting a mutation that I missed.</p> <p>Top comment on that thread questions the wisdom of being happy that an app breakage highlights a missing test we can add. The commenter says to try telling the client about your unit test suite while they&#39;re losing customers left and right due to the bug. I guess that&#39;s a fair poi-- wait, what? You&#39;re developing applications where breakages and bugs can utterly destroy your company, and you&#39;re <em>not</em> writing a metric ton of tests? That&#39;s some serious Evel Knievel shit right there. Um, Evel Knievel was a stuntman in the 70&#39;s. Er, the 70&#39;s were a decade about 30 years before Spongebob first aired. 
Nevermind.</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/safetynet.jpg" width='300' height='300'/></figure></div> <p>Look, the safety net of an overabundance of unit tests combined with some high-level smoke tests to ensure that basic functionality is always working should give the entire team the freedom to refactor and rewrite anything they don&#39;t like. If everyone on a team is able to adopt this attitude, the end result is code that is incredibly clean. <strong>If the team isn&#39;t fearlessly refactoring, and they&#39;re afraid to make tiny changes and improvements because something might break somewhere, your team is hamstrung</strong>. Modules start to have a &quot;here be dragons&quot; vibe, with everyone afraid to improve them, and so they rot until your entire codebase is rotten and you think you need to rewrite it (we talked about that <a href="http://www.nomachetejuggling.com/2016/06/15/guidingprinciples-part1/#toc-the-team-unqualified-to-refactor-is-unqualified-to-rewrite">already</a> though).</p> <p>I&#39;m not saying it should be impossible to break my code. Changing the interfaces of things I depend on, or literally going in and modifying what I wrote, could easily make it behave incorrectly. I can&#39;t stop that. I&#39;m saying it <strong>should be impossible to break it without a test automatically telling you that it happened</strong>.</p> <p>When you actually imagine that another engineer might come in and accidentally (or maliciously) modify your code, your tests get much stronger. You&#39;ll find that your assertions are better when you try to guard against this sort of thing, which is really what unit testing is all about. Lots of people track coverage for tests, but coverage basically just counts lines hit during the testing phase. 
You could write a suite of unit tests that actually hits every single line of code, giving you 100% code coverage, but makes no assertions whatsoever. Your coverage is high, but your tests are borderline useless in this case. <strong>Raw coverage isn&#39;t what I&#39;m talking about here</strong>.</p> <p>It&#39;s not about how many tests you have or how many lines they cover, it&#39;s about how strong the tests you have are. And approaching your tests with the attitude that it should be impossible to break your code without a test failing is how you make them strong.</p> <h2>Zealotry</h2> <p>This is probably the <em>strong opinion</em> I hold that comes closest to zealotry for me. As a counterexample, I really <a href="http://www.nomachetejuggling.com/2009/02/21/i-love-pair-programming/">love pair programming</a> but I left the gig where I did it regularly and took a job where the team really didn&#39;t like pairing, and I adjusted fine to not pairing. I usually write my tests first and enjoy the TDD red-green-refactor cycle, but there are times when I suspend this practice and write tests later. There are plenty of things I really love doing that I&#39;m more than happy to stop doing as the situation demands, but I don&#39;t think I can go back to not testing at all, and I might be unwilling to listen to arguments to convince me to.</p> <p>At this point in my career, the level of physical discomfort I feel writing code with no tests at all is unbearable. Not too long ago I was extremely busy with one task but was forced to switch gears to implement a small change I didn&#39;t really agree with to an unrelated part of the code. As some kind of juvenile form of protest, I half-assed the code and wrote no tests, just to get it done and off my plate so I could go back to what I was doing. 
I pushed it up to the central git repo and felt so uncomfortable with what I had done that I lasted about 60 seconds before going back in and writing some tests to cover the change and explain why in the test case. My rebellion was brief, I am not a badass.</p> <p>I&#39;ve heard of places where bosses will declare that unit tests are a waste of time that slow down development, and I genuinely don&#39;t think I could work in a place like that anymore. Ten years ago I wouldn&#39;t have cared, but today being asked not to write tests seems like an impossible request, like being asked to drink lighter fluid or something. I&#39;ve fallen into such a comfortable cycle of code-a-little, test-a-little that eschewing the process feels completely unnatural and foreign; whiteboard coding interviews seem so bizarre to me now; I&#39;d never write so many lines of code without tests at work. My god, there&#39;s an <code>if</code> statement in it, that&#39;s two tests!</p> <p>My code design has vastly improved by thinking about testability. Once upon a time I&#39;d have used <code>Math.random()</code> or <code>System.currentTimeMillis()</code> or <code>new FileReader(&quot;whatever.txt&quot;)</code> without a second thought, but viewing code through the lens of testability made me realize that all of those things are subtle integration dependencies on the underlying system. Figuring out how to write unit tests for code that depends on random number generators, a clock, or the filesystem has forced me to consider things as candidates for dependency injection that I&#39;d never have considered without those tests. Even if I were to delete those tests afterwards, the code is still cleaner and better for having been designed with them in mind.</p> <h1>If You Hate It, Do It More</h1> <p>This one is easy to say, but very hard in practice to commit to.
Basically, whenever I find myself dragging my feet on something I don&#39;t want to do, I need to sit down and ask myself why I hate doing it. Chances are, when I get to the root cause of my disdain or anxiety, I find that it&#39;s because something is extremely inefficient or error-prone.</p> <p>Hate performing deployments? Why? There&#39;s a good chance it&#39;s because it involves a bunch of manual steps, handbuilding artifacts and manually uploading them somewhere, then shelling into multiple boxes and executing commands. The desire to get away from anything unpleasant is very strong, but it&#39;s these situations that would benefit the most from doubling down and doing it more often.</p> <p>If you&#39;re deploying every quarter because it&#39;s such a pain, you need to start deploying every month. If you still hate it, every week. If you still hate it, every day. At some point you&#39;ll hit a point where you say enough is enough, and if you&#39;re going to deploy this crap every day then it needs to be easier. And that&#39;s when you start developing deployment pipelines and writing automated scripts. <strong>The more you do something you hate, the better you&#39;ll get at doing it</strong>, if only to keep your sanity.</p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/hate.jpg" width='300' height='196'/></figure></div> <p>Hate provisioning machines? Start adding and removing boxes from clusters on a regular basis. At first it will be difficult and annoying - <em>that&#39;s good</em>, that&#39;s what will make you better. In no time you&#39;ll be using OpenStack or AWS, augmenting setup with Puppet or Chef, or maybe even containerizing your entire process with Docker. 
Your infrastructure will be better for it; everything you hate doing is likely a weak spot in your development process.</p> <p><strong>Hating something is your brain&#39;s way of telling you &quot;this sucks,&quot; but instead of responding by hating it, respond by taming it.</strong> The more you do it, the easier it is to figure out which parts suck the most, and how you can improve them.</p> <p>One of my favorite examples of this is Netflix&#39;s <a href="http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html">Chaos Monkey</a> approach. Dealing with failure was such a negative experience for Netflix that they started doing it all the time, so often that they built software that would randomly knock out nodes, clusters, or even entire regions. It forced Netflix to revisit how their software works, and to handle failure better. What came out the other end was a vastly superior product. And also &quot;Daredevil&quot;.</p> <p>This principle is tough because it&#39;s a lot like cleaning an incredibly messy room or a trainwreck of a garage. Things start pretty bad, but the real issue is that <strong>things have to get worse before they can get better</strong>. Only by embracing the things you hate doing the most do you force your own hand, resulting in something that can do the horrible job you hate automatically, on-demand, and quickly.</p> <h2>Meetings</h2> <p>Yes, this principle applies to basically every aspect of your job. This one is particularly tough for me but: if you hate meetings, have more of them. Start having daily meetings if you need to. In so doing, you and the rest of the team will discover exactly what it is you hate about meetings so much.
The only way to really figure out EXACTLY what you hate about meetings is to expose yourself to them so often that it becomes immediately apparent what doesn&#39;t work about them for you.</p> <p>Once you&#39;ve identified what meeting dysfunctions make you despise them so much, it&#39;s easier to fix those things and make meetings more enjoyable. Honestly, I hate meetings too but I need to ask myself: geeze, why? Should it really be so unpleasant to meet and chat with other engineers I respect and enjoy working with? Are we really such misanthropic jerks that we can&#39;t enjoy exchanging ideas? And don&#39;t say that the reason you hate meetings is because they prevent you from doing <a href="http://www.nomachetejuggling.com/2012/10/05/getting-real-work-done/">Real Work</a>, I&#39;ve already talked about how dumb that is.</p> <p>After you and your team realize what doesn&#39;t work about meetings, you can take steps to address them until meetings aren&#39;t something you despise. And once you don&#39;t hate it, the inverse of the rule applies: <strong>if you like it, you can survive doing it less</strong>. Dial your meeting schedule back down once the thing you hate is <em>not meeting</em>.</p> <h1>Be the Worst Person in the Band</h1> <p>I got this from Chad Fowler&#39;s &quot;<a href="https://amazon.com/Passionate-Programmer-Remarkable-Development-Pragmatic-ebook/dp/B00AYQNR5U/">The Passionate Programmer</a>&quot; who in turn took it from jazz guitarist Pat Metheny, who said:</p> <blockquote> <p>Always be the worst guy in every band you’re in.</p> </blockquote> <p>This idea has resonated with me ever since. Is it uncomfortable to be the worst person on the team? Yeah, it sure is. And it&#39;s this discomfort that will drive you to be better. When you&#39;re the best person in the band, you walk around with tons of confidence but you aren&#39;t learning anything and you aren&#39;t improving, because nothing is driving you to. 
When you&#39;re the worst, you have to step it up.</p> <p>One of the great things about this career is that it&#39;s absolutely impossible to ever know all of it. The field is growing, with new tools and ideas being added faster than you can possibly learn them. I can see this being stressful for some people, but it&#39;s my favorite thing about it. There&#39;s always, <strong>always</strong> more stuff to learn and improve. It&#39;s like being a bookworm and walking into a library of infinite size.</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/yoko.png" width='300' height='205'/></figure></div> <p>Nothing makes me want to learn more and be better than being surrounded by people who are better than me. I&#39;ve worked plenty of jobs where I was the worst guy in the band, and plenty where I was the best, and I always come out of the ones where I was the worst guy in the band feeling like I just spent the entire time leveling up like crazy. Being the best fills you with confidence, which is nice on an emotional level, but it&#39;s nowhere near as satisfying as coming out the other end of a job a vastly improved person. </p> <p>I&#39;ve modified this slightly to <strong>be the second worst guy in the band</strong>. Being truly the worst can make you feel useless, like you&#39;re not making any valuable contribution. Plus, it actually helps to be able to mentor someone; one of the most effective ways to learn something is by teaching it.
In any case, definitely don&#39;t be the best person in the band.</p> <p>Another way I&#39;ve heard this phrased comes from Scott Bain as quoted in <a href="https://smile.amazon.com/Beyond-Legacy-Code-Practices-Software/dp/1680500791">Beyond Legacy Code</a>:</p> <blockquote> <p>Always strive to be mentoring someone and to be mentored by someone.</p> </blockquote> <p>You can subscribe to all the blogs, read all the books, and attend all the conferences, but nothing will help you learn and keep up with the ever-changing world of software development like working every day with someone better than you. The more people that you&#39;re working with that are better than you, the stronger this effect. </p> <h1>Your First Loyalty is to Your Users</h1> <p>This one might be a little controversial, and proudly proclaiming it on my blog might make me unemployable. But at the end of the day, I as a software engineer answer to an authority greater than my product owner, my boss, my VP, my CTO, or anyone else who signs my paychecks: I owe my users quality software. If I wouldn&#39;t be willing to attach my personal cell phone number to the feature I&#39;m developing, I shouldn&#39;t write it.</p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/tron2.jpg" width='300' height='169'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>I fight for the users</p></figcaption></figure></div> <p>I have, on more than one occasion, gotten into a heated debate with a product owner or even a supervisor about a feature I was asked to implement. Often, this stems from situations where the people who are USING the product aren&#39;t the ones PAYING for it, and the client&#39;s higher-ups are writing the checks for features that their underlings using the product might dislike. 
I&#39;ve found myself usually able to win these arguments by helping the product owners understand how unhappy their users will be, and pointing out that happy users will eventually leave their current company and become a sales lead at their next gig. But the most heated and intense arguments I&#39;ve been involved in at work always stemmed from me advocating on behalf of the voiceless users who would end up on the receiving end of antagonistic features.</p> <p>Unlike most of the other principles on this list, this one won&#39;t result in better quality codebases or more SOLID or testable designs - it actually affects the product I build at a business level. I will never do anything half-assed, never lie or mislead my users, never take advantage of them, and never intentionally create a negative experience for them because it will line my or someone else&#39;s wallet. This is especially true when my end-users are not engineers themselves - <strong>they have no power or control in this software-centric world, so taking advantage of the power imbalance is particularly unethical</strong>.</p> <p>I&#39;m not arguing that everything you build needs to make the world a better place at some cosmic level, or that you need to be carbon neutral in every facet of your life or anything like that. I understand that sometimes you need to pay the bills. But what I&#39;m saying is to never, ever forget that at the end of the day some poor schmuck is going to be using the thing you&#39;re building, and this person has people who care about him or her as much as you care about your loved ones. Imagine your mother or husband or best friend using your software - do you feel good about yourself? If not, don&#39;t build it.</p> <p>Don&#39;t write software that <a href="https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal">tricks emissions tests</a> just because your asshole boss told you to.
Don&#39;t write <a href="https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal">copy protection schemes that phone home with a user&#39;s private data</a> just because your CEO thinks he&#39;s entitled. Don&#39;t develop code that <a href="http://www.cnet.com/news/e-tailers-snagged-in-marketing-scam-blame-customers/">opts users into monthly charges if they are dumb enough to trust you when using your product</a>; there&#39;s no such thing as a &quot;stupid tax&quot; - <strong>your stupidest users need your advocacy the most.</strong></p> <p>Just remember, someday there might be a scandal and a court case that involves engineers being held accountable for the features they built, and &quot;I was just following orders&quot; may not be enough to save you. Be proud of what you create. It&#39;s not enough to assume <a href="https://groups.google.com/forum/#!msg/comp.lang.c++/rYCO5yn4lXw/oITtSkZOtoUJ">the guy who ends up maintaining your code will be a violent psychopath who knows where you live</a> - assume that your poor users are as well.</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/uncle-ben.jpg" width='300' height='226'/></figure></div> <p>I think there&#39;s a tendency for developers to &quot;just do what they&#39;re told.&quot; The users and their experience are the concern of the product owners, marketing types, salespeople, and other stakeholders - the developers just build the software according to the requirements, right? In all honesty, I wish that this was a safe mindset to adopt - I&#39;d rather concern myself only with the code and my fellow developers who have to work with it, and leave the features and user experiences up to other people. But time and time again, I&#39;ve found that for whatever reason the folks in those positions lose sight of the user experience and request antagonistic features.
At the end of the day, the engineer is where the rubber meets the road - we&#39;re the gatekeepers on what actually gets created, so <strong>we&#39;re the last line of defense before something goes out the door that will make the lives of users worse</strong>. Product Owners and marketers can draw boxes and do Photoshop mockups all they want, but the engineers are the only ones with the <em>power</em> to actually build the stuff users will be interacting with, and as the sole wielders of this power, we have the <em>responsibility</em> to consider those users even when others don&#39;t.</p> <h1>Conclusion</h1> <p>I feel like there are more things I wind up saying a lot, but one of the most challenging parts of writing up this list was even stepping back enough to realize which things could be written down. When you live by certain ideals long enough, they become so ingrained that it&#39;s hard to even remember what the principles are. Most of the ones on this list I realized only because I&#39;ve been called out by other people for saying them so often.</p> <p>Anything missing? Any principles that you live by as an engineer? Leave a comment; I&#39;m curious what other people see as their <strong>Software Engineering Golden Rules</strong>.</p> Mon, 20 Jun 2016 00:00:00 +0000 http://www.nomachetejuggling.com/2016/06/20/guidingprinciples-part2/ http://www.nomachetejuggling.com/2016/06/20/guidingprinciples-part2/ Software Engineering Guiding Principles - Part 1 <p>I find that I repeat myself often at work. There are a handful of things I say so often when discussing decisions that I&#39;ve been called out on occasion for acting like a broken record.</p> <p>But the reason I keep repeating these phrases is that I think they inform a great deal of my decision-making.
They are, in effect, my guiding principles when developing software professionally.</p> <p>I thought it might be fun to write a few of these things down because I think that they&#39;re worth sharing - I feel like these principles have steered me in the right direction time and time again. Obviously, there are exceptions to these and there are times when they should be ignored (after all, not being a zealot is one of the principles) but I think they will generally take an engineer down the right path.</p> <h1>Have Strong Opinions, Weakly Held</h1> <p>I think the phrase I&#39;ve heard more than any other in my life is &quot;tell us how you really feel!&quot; which is, I guess, people&#39;s way of telling me I&#39;ve made them uncomfortable by expressing an opinion too aggressively. It&#39;s true, I can be very strongly opinionated, and I&#39;ve gotten into more than my fair share of, oh, let&#39;s call them &quot;passionate discussions&quot; in the workplace. I&#39;m never insulting or personal, but I have strong opinions on how to do things.</p> <p>That said, I think it&#39;s important to always be open to having my mind changed. If anything, I think I&#39;m TOO easy to convince to change my mind on something; often it takes only one strong counterpoint to completely demolish an opinion I&#39;ve held firm to for years. My opinions are informed by years and years of experience, but that experience doesn&#39;t always apply in every situation, so it&#39;s important to be willing to adjust in light of new information or facts.</p> <p>Apparently this phrase &quot;strong opinions, weakly held&quot; comes from Stanford Professor Bob Sutton. I think it&#39;s a good way to approach every opinion really.
I&#39;ve switched between polar opposite positions on a number of issues, including political and philosophical issues that I won&#39;t get into on this blog, but I think I do a good job of allowing my convictions of experience to be suspended to make way for alternative arguments. <strong>I never assume I&#39;m objectively right just because I care</strong>.</p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/zealot.jpg" width='300' height='259'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>Unless you hunger for battle, don't be a zealot</p></figcaption></figure></div> <p>It&#39;s important that the thing that makes an opinion weakly held is a strong, rational, logical argument for the alternative position. I won&#39;t back down on something I think is important because of how passionately another person disagrees, or how upset it makes them that they&#39;ve met opposition. This is what makes the opinion strong: I genuinely care about believing the largest possible number of true and correct things, so the only way to dislodge a strong opinion is with true and correct things that work to counter it.</p> <h2>Don&#39;t Be A Jerk</h2> <p>I cringed when I watched Season 3, Episode 6 of my favorite show Silicon Valley, as the main character Richard felt so strongly about Tabs over Spaces that he alienated everyone in his life over it. These debates are so incredibly pointless to me, I do not understand how people waste so much time caring about them. <strong>Strong opinions are not the same as zealotry</strong>, zealotry is company and team poison. Strong opinions only matter if the things they&#39;re about matter. 
Having extremely strong opinions about tabs vs spaces, or emacs vs vim, makes you borderline un-hireable to me; bringing zealots onto your team violates the <a href="https://smile.amazon.com/Asshole-Rule-Civilized-Workplace-Surviving/dp/0446698202">No Asshole Rule</a> (though for the record, spaces and vim, #sorrynotsorry).</p> <p>Additionally, it&#39;s fine to have strong opinions but if you find yourself belittling or mocking other people in order to stand by them, those opinions probably aren&#39;t that strong. Your positions on technical matters should stand on their own, without needing to knock people down. Don&#39;t be one of those people that walks around acting like a jerk and then justifying it by saying you have strong opinions. The best engineers I&#39;ve worked with have consistently been skilled at <strong>not only having well-reasoned strong opinions, but communicating those opinions respectfully to others.</strong></p> <p><span data-pullquote="It's better to have a hole in your team than an asshole. " class="left"></span></p> <p>Being a technical wizard doesn&#39;t give someone the right to be a pompous ass to everyone else. I&#39;m a strong advocate of taking people who are, at a personal level, insufferable, and firing them for being a poor cultural fit, regardless of how much they know about this or that technology. It&#39;s better to have a hole in your team than an asshole.</p> <p>I started this list with this one in particular because it&#39;s important. The rest of this list is, essentially, a list of strongly held opinions I maintain. But it&#39;s important that even these opinions, having reached Guiding Principle level, are subject to change in the light of strong counterarguments, or subject to suspension in light of unique circumstances.</p> <h1>The Team Unqualified to Refactor is Unqualified to Rewrite</h1> <p>I strongly, strongly believe that a full-on code rewrite is nearly always the wrong thing to do.
Either you pull everyone off the current iteration of the product to do the rewrite, which means your main product languishes, or you pull some people off to do the rewrite, meaning the rewrite team has to always be catching up with the ever-growing main product.</p> <p>From a simple project management standpoint, this is a disaster. Want to know how long the rewrite will take? Well, in the former case, you&#39;re working with a team that&#39;s dealing with new technology and new development, so there&#39;s no way to apply any previously recorded team velocity as a prediction of future performance. Moreover, you don&#39;t actually have any sense of the scope of the project, because the requirements are basically &quot;everything the app does now&quot;, which will include weird corner cases that have long since been forgotten. So you have an unknown scope and an unknown team velocity, and you&#39;re trying to make a prediction of when this work will be completed? So development is going to stop on the main product line for an indeterminate amount of time. And this is the BEST case scenario, the one where everyone can focus on doing the rewrite.</p> <p>In the latter case, it&#39;s even more unpredictable - you still have the unknown scope issue, but it&#39;s worse because you also have to include, in the scope, getting to parity with whatever else is built while the rewrite is being worked on. If the rewrite would take 3 months, you have 3 months worth of new features on the main product to catch up to. If it would take 6 months, you have 6 months of features to catch up on. And since you don&#39;t know how long it will take just to reach current parity, you can&#39;t predict how far in the hole you&#39;re going to be when it&#39;s &quot;done&quot;, which means it adds ANOTHER layer of unknown time into the mix. Maybe adding those 6 months of features takes you 5 months, so when you&#39;re done you&#39;ve got another 5 months to catch up on. 
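</p>

<p>A toy model (the numbers and ratio here are invented, purely illustrative) shows why this chain feels endless: if each catch-up round takes some fixed fraction <code>r</code> of the previous one, the total converges, but only asymptotically:</p>

```java
// Toy model of rewrite catch-up: round n takes firstRound * r^n months,
// so the rounds form a geometric series that never quite reaches zero.
public class CatchUp {
    // Closed form of the geometric series: firstRound / (1 - r), for 0 <= r < 1.
    static double totalCatchUpMonths(double firstRound, double r) {
        return firstRound / (1.0 - r);
    }

    public static void main(String[] args) {
        // First catch-up round is 5 months; each round takes 60% as long as the last.
        double total = totalCatchUpMonths(5.0, 0.6);
        System.out.printf("total catch-up: %.1f months%n", total);
    }
}
```

<p>Even in this optimistic model, &quot;done&quot; is a limit you approach rather than a milestone you hit - and it assumes the ratio never gets worse.</p> <p>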
That 5 months of work takes you 3 months to complete, so you have another 3. You&#39;re basically asymptotically approaching done. And remember, the velocity of the &quot;main product&quot; team will be affected by the loss of resources who peel off to do the rewrite, so you have little sense of the velocity of not one, but both teams. If you know your car&#39;s speed, you can predict when it will pass a landmark - but you can&#39;t possibly know when it will pass another moving car if you don&#39;t also know that car&#39;s speed perfectly. If you know neither car&#39;s speed, you&#39;re utterly done for.</p> <div class='image alignleft' style='display:table'><figure><a href='https://www.flickr.com/photos/8047705@N02/5463789169/in/photolist-9jPmax-9DDYgg-nuZtxE-qhFho-ejtid-6c7xeY-ze5cZ-nmKFqr-b1AcSK-9FVdCx-b1AhTZ-b1A75H-dZkGN1-nB5JZn-qhFau-2DKnb-f4N574-cBqwpb-dD7wZD-5cTVaL-zCEF8-9F679e-ogasoM-aDd8E1-9bBdFG-4LeSwt-aDd8s9-J1y1LF-aD9gKD-Curwec-8MGTB8-9cP1vZ-dfCV95-HHBNMQ-oqYWpW-CUmf1-6Skbcz-5SptQa-5qD4Md-4mtSKC-eftGLt-8P76u6-oHB4w9-F8KKsW-7pZA7u-9XRFMM-tPAFC-7xYyVJ-6YmJXf-9BVWnR'><img src="http://www.nomachetejuggling.com/assets/sorry.jpg" width='300' height='199'/></a><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>Go back to start</p></figcaption></figure></div> <p>Moreover, from an engineering standpoint this is a terrible idea. Everyone likes doing greenfield work because it&#39;s new and exciting, but you have to ask, why do the engineers want to avoid maintaining and refactoring the existing product? Is the codebase such a spaghetti mess that it&#39;s too difficult to add anything, so the team wants to try again from scratch? <strong>Who the hell do you think made that dumpster fire in the first place?</strong> Why on earth would that same team suddenly do it right the second time around? 
Especially when under the pressure of &quot;we have to get caught up&quot; and the time-pressure of the company&#39;s primary software products being frozen or at least slowed while the team develops it? It&#39;s even MORE likely that corners will be cut and quality will suffer, not less likely.</p> <p>Refactoring the codebase is almost always the right way to go. Take the awful parts that you want to rewrite and slowly but surely refactor them into the clean codebase you want. It might take overall longer to be &quot;done&quot; with the effort, but the entire time it&#39;s happening the main product is still in active development without the &quot;two cars racing&quot; situation. Refactoring code is, though slower, also easier to do than rewriting it from scratch, because you&#39;re able to do it in small steps with (hopefully) the support of a huge test suite to ensure you don&#39;t break anything. <strong>Since refactoring is easier than rewriting, any team that says &quot;it&#39;s too hard&quot; to the idea of refactoring the existing codebase instead of rewriting it is inherently not good enough to do the rewrite.</strong> The end result will actually be worse.</p> <h2>Exceptions</h2> <p>There are a couple noteworthy exceptions to this. One, when the reason for the rewrite is a complete change in technology, specifically the language of implementation. If you&#39;re working with Java and want to rewrite in Scala or Clojure, the team should be able to refactor piece by piece since it all compiles to the same bytecode. However, if the team needs to move from a dead technology such as ColdFusion to something else like .NET, a full rewrite is the only way to go. 
This may also apply in the case of using a prototyping technology to develop the first iteration of a product, only to discover that there&#39;s no way to make the system scale, such as in the case of Twitter&#39;s abandonment of <a href="http://www.gmarwaha.com/blog/2011/04/11/twitter-moves-from-rails-to-java/">Rails in favor of Scala</a>. Not every company has the resources to <a href="http://readwrite.com/2010/02/02/update_facebook_rewrites_php_runtime_with_project/">develop a new PHP runtime</a> just to avoid rewriting their codebase in something other than PHP; sometimes you have to bite the bullet and pick a different technology.</p> <p>Another exception is when you find yourself in an &quot;over the wall&quot; situation. Perhaps a team of contractors or consultants or offshore engineers were hired to develop the first iteration of a project, and then the codebase was tossed over the wall to another team to maintain. In this case, the new team may in fact be qualified to either refactor OR rewrite the codebase, and may simply decide the codebase as-is is too much of a mess to bother with and do a rewrite. In this instance, I still would encourage exploring every possible opportunity to refactor first, but believe me when I say I&#39;ve been on the receiving end of these codebase bombs enough to fully appreciate that sometimes you just need to rewrite the whole thing.</p> <p>One more exception: if your &quot;product&quot; is mostly just a collection of microservices and you&#39;re talking about rewriting some of them, that&#39;s another story. In the land of microservices, rewriting a service essentially <em>is</em> refactoring, and presumably you have a collection of integration-style tests against each microservice, so a rewrite can be done relatively quickly and relatively safely.
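</p>

<p>One hedged sketch (all names here are invented) of why those tests make a service-at-a-time rewrite safe: run the same black-box contract suite against both the old implementation and its replacement, and ship the rewrite only when it passes:</p>

```java
// Invented example: one contract suite, two implementations.
public class ContractTestDemo {
    // The service contract, as its consumers see it.
    interface GreetingService {
        String greet(String name);
    }

    // Stand-in for the legacy service.
    static class LegacyGreeter implements GreetingService {
        public String greet(String name) { return "Hello, " + name + "!"; }
    }

    // Stand-in for the rewrite: different internals, same observable behavior.
    static class RewrittenGreeter implements GreetingService {
        public String greet(String name) {
            StringBuilder sb = new StringBuilder();
            sb.append("Hello, ").append(name).append("!");
            return sb.toString();
        }
    }

    // The contract assertions don't care which implementation they're given.
    static void assertContract(GreetingService svc) {
        String got = svc.greet("Ada");
        if (!"Hello, Ada!".equals(got)) {
            throw new AssertionError("contract broken by "
                    + svc.getClass().getSimpleName() + ": " + got);
        }
    }

    public static void main(String[] args) {
        assertContract(new LegacyGreeter());
        assertContract(new RewrittenGreeter());
        System.out.println("rewrite honors the contract");
    }
}
```

<p>In real systems the &quot;contract&quot; is an HTTP or messaging interface rather than a Java one, but the shape is the same: the suite pins the old behavior, so the new service can't silently diverge.</p> <p>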
Even if you want to rewrite all of the services, you&#39;re able to do it one at a time - this is one of the big advantages of microservice architectures.</p> <h1>Choose Boring Technology</h1> <p>I really can&#39;t say this any better than Dan McKinley&#39;s original post <a href="http://mcfunley.com/choose-boring-technology">Choose Boring Technology</a>. In it, McKinley argues that every team or company should start out with three innovation tokens. You can spend these tokens whenever and however you please, but they don&#39;t replenish quickly. Every time you pick an exciting or buzzwordy or cutting edge technology instead of an old standard, you spend a token.</p> <p>Relational Databases are boring. Java is boring. jQuery is boring. Apache is boring. Linux is boring. Tomcat is boring. Choose something &quot;cool&quot; instead of something boring, and you&#39;ve spent an innovation token. Boring technology is boring because it&#39;s <em>known</em>, not because it&#39;s <em>bad</em>. Its failure modes are understood, and it probably has a host of libraries and support tools that make it easier to live with in the long term.</p> <p>There&#39;s nothing wrong with Java - tons of scalable applications have been built on it - and &quot;it&#39;s boring&quot; isn&#39;t a good enough reason to choose something else. If your team truly feels like Scala or Clojure or Erlang or whatever is the right tool for the job, by all means use it, but that&#39;s one innovation token spent. Pick MongoDB over MySQL or Oracle and you&#39;ve got one left. Any time you COULD use technology you&#39;re already using (&quot;our other codebase is .NET&quot;) but decide to pick something new instead, you spend a token.</p> <p>Boring Technology is easy to pick up, easy to research, easy to debug, and frankly easy to staff for.
I&#39;m sure the engineering team is happy to pad their resumes with cool buzzwords while simultaneously making themselves irreplaceable, but is that really the best thing for the product and the company? When boring technology fails you, there are stacks of books and internet forums available to assist you - there&#39;s nothing worse than the feeling of excitement you get when you search for your error message and find that someone else has had the EXACT same problem as you before, only to be followed by the crushing blow of zero replies.</p> <script async class="speakerdeck-embed" data-slide="47" data-id="454e3843ac184d3f8bcb0e4a50d3811a" data-ratio="1.31113956466069" src="//speakerdeck.com/assets/embed.js"></script> <hr> <p>I&#39;ve worked plenty of jobs where the team was building plain old Java Web Applications using Spring, backed by MySQL or Oracle databases. You know what? Those products worked just fine. Did the teams have the <em>most</em> fun in the world writing that code? No, probably not, but we got the job done and the products performed quite well (and were easy to fix when they didn&#39;t). A buddy of mine is fond of watching engineers pick and choose cool technologies out of the pool of the latest-and-greatest, only to remind us that he worked on a 911 call routing application written in Java with a MySQL database, and it ran just fine saving tons of lives.</p> <p><span data-pullquote="It's not about how much fun I have. " class="right"></span></p> <p>At my current gig, we decided to build a 150,000-line codebase using Scala. Scala seemed like the right tool for the job, given the particular constraints we had about scalability and throughput in the system. I like Scala a lot, and there&#39;s no doubt that we&#39;ve made tremendous productivity gains by utilizing features exclusive to Scala, but if I&#39;m truly honest with myself did we actually make an overall <em>net</em> productivity gain? 
When you factor in time lost trying to understand confusing code, time lost by the compiler doing a <a href="https://wiki.scala-lang.org/display/SIW/Overview+of+Compiler+Phases">twenty-pass compilation</a> (holy shit), and time lost by having to manually perform refactorings that our IDEs couldn&#39;t automate due to weak tooling support, I&#39;m not actually sure we came out ahead. Especially given Java 8&#39;s functional programming features, I&#39;m not sure I&#39;d bother picking Scala over Java 8 today, as much fun as I have working with it. It&#39;s not about how much fun I have.</p> <p>Ultimately, it&#39;s really not about me or how much I enjoy working with particular tools and technologies. My job isn&#39;t to have a blast; hell, it&#39;s not even really to &quot;write code&quot; - my job is to solve business problems, and it just so happens that the tool I&#39;m most competent at solving them with is code. It&#39;s important to stay up to speed on the latest and greatest technologies so that you as an engineer have the knowledge to know when it&#39;s time to spend an innovation token, but honestly I think most of that effort should be relegated to conference attendance, reading, and personal github accounts. Don&#39;t make company decisions based on how many buzzwords you can add to your resume.</p> <h2>Inventing Languages</h2> <p>I&#39;d like to add that &quot;writing your own programming language&quot; should be worth four innovation tokens all on its own. If you develop an in-house programming language, you&#39;d better have a staggeringly good reason. Good programming languages are hard to write, and unless you have a number of Computer Science PhDs with specializations in Programming Language Design and Implementation on the team, chances are all you&#39;re actually doing is writing an overly complex DSL.
The kind of thing whose compiler/transpiler/transliterator fails with &quot;syntax error somewhere&quot; in the event of a mistyped character, rather than a helpful diagnostic and a line number.</p> <p>Don&#39;t create your own programming language. Your language will be weak, your tools will be poor, and language support within other tools will be nonexistent. You probably aren&#39;t going to properly staff the design and support of the language you&#39;ve created. Unless you have an entire team of people devoted exclusively to maintaining that language and writing Eclipse plugins for it or whatnot, your technical debt is so crater-like that you can&#39;t even tell you&#39;re standing in a hole because it extends past the horizon. Whatever huge productivity gains you think your new language is offering your team, they&#39;ll be canceled out and then some.</p> <p><strong>99 times out of 100, a new language isn&#39;t what you want to build, but a library or a framework is</strong>. By all means, develop those in house if need be (but staff their development). Unless you&#39;re developing a language as part of your core business, like Apple developing Swift, don&#39;t do it.</p> <h1>Will You Understand This at 3AM?</h1> <p>Frequently John Carmack is cited as an example of an eccentric genius, the kind of guy who is way ahead of his time. I have to admit, I&#39;m also in awe of a great deal of what he&#39;s done with code. 
Take this inverse square root function he wrote for Quake III Arena:</p> <div class="highlight"><pre><code class="language-c" data-lang="c"><span class="kt">float</span> <span class="nf">Q_rsqrt</span><span class="p">(</span> <span class="kt">float</span> <span class="n">number</span> <span class="p">)</span> <span class="p">{</span> <span class="kt">long</span> <span class="n">i</span><span class="p">;</span> <span class="kt">float</span> <span class="n">x2</span><span class="p">,</span> <span class="n">y</span><span class="p">;</span> <span class="k">const</span> <span class="kt">float</span> <span class="n">threehalfs</span> <span class="o">=</span> <span class="mi">1</span><span class="p">.</span><span class="mi">5</span><span class="n">F</span><span class="p">;</span> <span class="n">x2</span> <span class="o">=</span> <span class="n">number</span> <span class="o">*</span> <span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="n">F</span><span class="p">;</span> <span class="n">y</span> <span class="o">=</span> <span class="n">number</span><span class="p">;</span> <span class="n">i</span> <span class="o">=</span> <span class="o">*</span> <span class="p">(</span> <span class="kt">long</span> <span class="o">*</span> <span class="p">)</span> <span class="o">&amp;</span><span class="n">y</span><span class="p">;</span> <span class="c1">// evil floating point bit level hacking </span> <span class="n">i</span> <span class="o">=</span> <span class="mh">0x5f3759df</span> <span class="o">-</span> <span class="p">(</span> <span class="n">i</span> <span class="o">&gt;&gt;</span> <span class="mi">1</span> <span class="p">);</span> <span class="c1">// what the fuck?
</span> <span class="n">y</span> <span class="o">=</span> <span class="o">*</span> <span class="p">(</span> <span class="kt">float</span> <span class="o">*</span> <span class="p">)</span> <span class="o">&amp;</span><span class="n">i</span><span class="p">;</span> <span class="n">y</span> <span class="o">=</span> <span class="n">y</span> <span class="o">*</span> <span class="p">(</span> <span class="n">threehalfs</span> <span class="o">-</span> <span class="p">(</span> <span class="n">x2</span> <span class="o">*</span> <span class="n">y</span> <span class="o">*</span> <span class="n">y</span> <span class="p">)</span> <span class="p">);</span> <span class="c1">// 1st iteration // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed </span> <span class="k">return</span> <span class="n">y</span><span class="p">;</span> <span class="p">}</span> </code></pre></div> <p>But notice line 10, <code>i = 0x5f3759df - ( i &gt;&gt; 1 );</code>? It&#39;s easy to find, because it&#39;s annotated with the helpful <code>what the fuck?</code> comment. There&#39;s no doubt that this code is extremely clever, and it&#39;s beyond question that it&#39;s extremely fast. It also requires an entire <a href="https://en.wikipedia.org/wiki/Fast_inverse_square_root">2000-word Wikipedia article</a> to understand.</p> <p>In fact, Carmack himself wasn&#39;t even the creator of this bit of wizardry; it came from Terje Mathisen, an assembly programmer who had contributed it to id Software previously. And Mathisen likely got it from another developer, who had gotten it from someone else. This is why the comment <code>what the fuck?</code> is right there - nobody understood it. And yet there it was, pasted into the Quake III engine code because it seemed to work and it was fast.
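</p> <p>For the curious, here&#39;s roughly what that bit trick is doing, spelled out in readable JVM code. To be clear, this is my own sketch, not id&#39;s code; <code>Float.floatToIntBits</code> plays the role of the C pointer cast:</p>

```java
// A readable sketch of the fast inverse square root trick (my own translation,
// not id's code). Float.floatToIntBits / intBitsToFloat reinterpret the raw
// IEEE-754 bits of a float, which is what the C pointer casts accomplish.
public class FastInverseSqrt {
    static float fastInverseSqrt(float number) {
        float half = number * 0.5f;
        int bits = Float.floatToIntBits(number);  // view the float's bits as an int
        int guess = 0x5f3759df - (bits >> 1);     // the magic-constant initial guess
        float y = Float.intBitsToFloat(guess);    // reinterpret the bits as a float again
        return y * (1.5f - half * y * y);         // one Newton-Raphson refinement step
    }

    public static void main(String[] args) {
        System.out.println(fastInverseSqrt(4.0f)); // roughly 0.5, i.e. 1 / sqrt(4)
    }
}
```

<p>The bit shift on the exponent field gives a crude first approximation of the inverse square root, and the single Newton-Raphson step tightens it to within a fraction of a percent - which is why the second iteration in the original is commented out as removable.</p> <p>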
Obviously this worked out for id, and <a href="https://www.youtube.com/watch?v=PcbpIntnG8c">Quake III is awesome</a>, but it probably wasn&#39;t the wisest idea to stake their company&#39;s product on code that nobody understood.</p> <p>Was it clever? Absolutely. <strong>But <a href="https://simpleprogrammer.com/2015/03/16/11-rules-all-programmers-should-live-by/">clever is the enemy of clear</a>.</strong></p> <p>I try never to write comments in my code. Comments should not be used to explain how something works; that should be apparent from the code itself. And if that means adding a few temporary variables so that their names can be helpful (or inspected while debugging), or having some comically long method names, so be it. Often people say that comments can be used to explain &quot;why&quot; something works instead, but frankly I find that a few unit tests for the code in question will do a better job of explaining the why than a comment ever could - at the very least, take the comment you&#39;d write explaining why and make it the name of the test. <strong>Code is for <em>what</em>, tests are for <em>why</em>. Comments are for jokes.</strong></p> <p>Obviously it&#39;s difficult not to be proud of yourself when you&#39;ve gotten some long method down to a one-liner (even if it is one incredibly long line) or invented some massively clever solution to a problem. And indeed, sometimes these clever tricks really are necessary to get the required performance out of a system (as in the Quake III inverse square root example).
That&#39;s why I&#39;ve found this heuristic so handy (hat tip to <a href="http://neidetcher.com/">Demian Neidetcher</a>):</p> <p><strong>If your cell phone rings at 3AM because this code causes a production outage a year from now, will you be able to understand and reason about the code well enough to fix it?</strong></p> <p>Imagine that your job is basically on the line here: you&#39;re now in a conference call with your boss, your boss&#39;s boss, your boss&#39;s boss&#39;s boss, and the CTO. Hell, maybe the CEO is on, talking about the millions of dollars in lost revenue every minute the product is offline. Your heart is racing from being startled awake, and your eyes are barely able to focus enough to read your laptop screen. Do you <em>really</em> want this to be what comes into focus in the middle of the night?</p> <div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="o">(</span><span class="n">n</span><span class="k">:</span> <span class="kt">Int</span><span class="o">)</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="mi">2</span> <span class="n">to</span> <span class="n">n</span><span class="o">)</span> <span class="o">|&gt;</span> <span class="o">(</span> <span class="n">r</span> <span class="k">=&gt;</span> <span class="n">r</span><span class="o">.</span><span class="n">foldLeft</span><span class="o">(</span><span class="n">r</span><span class="o">.</span><span class="n">toSet</span><span class="o">)((</span><span class="n">ps</span><span class="o">,</span> <span class="n">x</span><span class="o">)</span> <span class="k">=&gt;</span> <span class="k">if</span> <span class="o">(</span><span class="n">ps</span><span class="o">(</span><span class="n">x</span><span class="o">))</span> <span class="n">ps</span> <span class="o">--</span> <span class="o">(</span><span class="n">x</span> <span class="o">*</span> <span class="n">x</span> <span class="n">to</span> <span class="n">n</span> <span
class="n">by</span> <span class="n">x</span><span class="o">)</span> <span class="k">else</span> <span class="n">ps</span><span class="o">)</span> <span class="o">)</span> </code></pre></div> <p>Yes it&#39;s clever, yes it&#39;s fast, congratulations on how smart you are. But your company code repository isn&#39;t the place to show off your l33t coding ski11z, do that shit in your personal github account. You&#39;re not being paid to fluff your e-peen, you&#39;re being paid to solve the company&#39;s business problems, and that means writing something that can be understood by the other people they hired. Code&#39;s primary purpose is to be read by other human beings (<a href="https://mitpress.mit.edu/sicp/front/node3.html">and only incidentally for machines to execute</a>), otherwise we&#39;d all be writing directly in machine language. So if this future version of yourself won&#39;t understand the code just from being tired, what chance does the dumbest person on your team have of understanding it? Stop showing off, your job (and maybe even your employer&#39;s future) may someday depend on it.</p> <h1>Deliver Working Software Early and Often</h1> <p>I realize this is just a rewording of a standard part of the <a href="http://www.agilemanifesto.org/">Agile Manifesto</a>, and I could just as easily say &quot;Be Agile!&quot; here. But I think the truth is Agile has come to mean a lot of different things to a lot of different people, and has become a term so overloaded and hijacked that it&#39;s effectively become <a href="https://pragdave.me/blog/2014/03/04/time-to-kill-agile/">useless as a phrase</a>.</p> <p>I like most of the ideas of the Agile Manifesto, but I think the most important thing to take away from it is the unparalleled value of getting working software into the hands of users as quickly and frequently as possible. 
I absolutely detest when features are held back so that they can be released in a &quot;big bang&quot; to really wow and excite users (hey Product Owners, your users really don&#39;t care as much as you think, you&#39;re just building a thing they&#39;re forced to use to accomplish something). As long as a feature actually works end to end, get it into the hands of users and solicit feedback right away; every day you keep working code behind a gate is a day you give your competitors to steal users away from you. It&#39;s also a day that you are effectively lying to your users - the most important people to your software - about what your product is capable of doing. </p> <p>I despise long-running feature branches in version control as well, almost any time you want to make a branch I think it&#39;s better to make a feature flag that people (specifically, product owners) can turn on and off at will. Long-running branches are incredibly susceptible to <a href="https://en.wikipedia.org/wiki/Ninety-ninety_rule">the 90/90 rule</a>. And if two subteams wind up creating simultaneous long-running branches off the same mainline trunk, pack it in, you&#39;re done for. </p> <p>Every &quot;big bang&quot; release I&#39;ve been a (reluctant) part of has ended in some form of failure. People think that the software is mostly done and then the effort spins its wheels at the end, trying to &quot;harden&quot; the release and remove bugs. Or the software is finally delivered only to discover that <a href="https://en.wikipedia.org/wiki/Pareto_principle">80% of the users are only using 20% of the features</a>, meaning that a more targeted, earlier release of those top 20% features would have been a far better use of engineering time and resources. 
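</p> <p>Since I keep recommending flags over branches, here&#39;s a minimal sketch of the sort of thing I mean. This is purely illustrative - the names are made up, and a real system would back the flag state with a config service or database so product owners can flip flags without a deploy:</p>

```java
// Minimal feature-flag sketch (illustrative names, my own example).
// A real implementation would read flag state from a config service or
// database that product owners can toggle at runtime, not an in-memory map.
import java.util.concurrent.ConcurrentHashMap;

public class FeatureFlags {
    private static final ConcurrentHashMap<String, Boolean> FLAGS = new ConcurrentHashMap<>();

    static void enable(String name)  { FLAGS.put(name, true); }
    static void disable(String name) { FLAGS.put(name, false); }
    static boolean isEnabled(String name) { return FLAGS.getOrDefault(name, false); }

    // Instead of living on a long-running "new-checkout" branch, the new code
    // path sits on trunk behind a flag, dark until someone turns it on.
    static String checkoutFlow() {
        return isEnabled("new-checkout") ? "new checkout flow" : "old checkout flow";
    }
}
```

<p>The point is that both code paths merge to trunk continuously and ship dark, so integration happens every day instead of in one terrifying merge at the end.</p> <p>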
The other 80% is now just cruft in the codebase that nobody is using, and it makes it more difficult to add features later on.</p> <h2>Plans are The Opposite of Working Software</h2> <p>I think a corollary to this rule is: don&#39;t sell your users on non-working software. I really hate the tendency for &quot;marketing&quot; to <em>need</em> delivery dates on software features so that they can start selling the features now, a situation I&#39;ve seen at company after company. Don&#39;t try to sell users on features you plan on delivering, even if you&#39;re nearly certain about when those features will be done (but, hint, you&#39;re probably less certain than you think). That&#39;s selling vaporware; anything can change between now and then, causing those features to be shelved or to not work properly. Instead, deliver working software early and often, and let the marketing folks sell users on what features are actually <em>done</em>, because more stuff will actually <em>be</em> done due to the team not wasting tons of time coming up with estimates (<a href="https://www.happybearsoftware.com/all-estimates-are-still-lies">read: lies</a>).</p> <p><center> <blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Just start referring to “estimates” as lies.<br><br>“how long will that take?”<br>“well, if I had to lie, a week?”</p>&mdash; Trek Glowacki (@trek) <a href="https://twitter.com/trek/status/636286667087851520">August 25, 2015</a></blockquote> <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script> </center></p> <p>Obviously there are occasions where people need some sense of how long something will take, most notably when the company is deciding between two different features to implement and they&#39;re performing an analysis based on their cost (though in my experience, rarely does this happen and usually both features are requested anyway).
But for the most part, using some roadmap or a plan to inform the company on how to sell their products is a mistake - give engineers the time to implement features well, and then when the features are done sell people on them. And remember, <a href="http://www.bloomberg.com/news/articles/2016-05-18/this-5-billion-software-company-has-no-sales-staff">good software sells itself</a>.</p> <h1>Part 2...</h1> <p>I split this list into two posts for really no good reason aside from length. If you want more, check out <a href="http://www.nomachetejuggling.com/2016/06/20/guidingprinciples-part2/">Part 2</a>.</p> Wed, 15 Jun 2016 00:00:00 +0000 http://www.nomachetejuggling.com/2016/06/15/guidingprinciples-part1/ http://www.nomachetejuggling.com/2016/06/15/guidingprinciples-part1/ My StrengthsFinder Results <p>At work, the BigWigs paid for a bunch of employees, including myself, to take the <a href="http://strengths.gallup.com/110440/About-StrengthsFinder-20.aspx">Gallup StrengthsFinder</a> test. This test gives the taker a series of choices between two things that aren&#39;t exactly opposites, and you have to select which one you identify with more closely. In the end, the test tells you which of 34 possible strengths are your top 5.</p> <p>I enjoy taking personality tests for fun, but the way that the aforementioned BigWigs were attaching tremendous levels of importance to the results of the test made me a bit wary. Personality tests can often have a horoscope vibe, where they all say something so nice about the taker that everyone who reads it says &quot;yep, that&#39;s me!&quot;.</p> <p>So before the test, I took a look at the 34 possible strengths that the test would identify. I figured they&#39;d all be things I liked, so that when the top 5 were output, the taker would like the results. To my surprise, there were a number of strengths, about 10 of the 34, that I would have been downright irritated to see in my top 5.
So much so that, if the test told me any of those 10 were strengths of mine, I&#39;d be able to instantly dismiss it as bunk.</p> <p>So I took the test with a skeptical eye toward it, but I was actually incredibly surprised by the results. I think the test absolutely nailed me, and I&#39;m so impressed by the accuracy of my test results that I think it&#39;s worth sharing here. These strengths are ranked from strongest to less-strong (I don&#39;t say weakest because it&#39;s only the top 5 of all 34 strengths, so all 5 are very strong).</p> <h1>Strengths</h1> <h2>#1: Analytical</h2> <blockquote> <p>People exceptionally talented in the Analytical theme search for reasons and causes. They have the ability to think about all the factors that might affect a situation.</p> </blockquote> <p>I&#39;m not sure I agree that this is my #1 strength, but it&#39;s definitely very accurate for me. I think this goes hand in hand with the fact that I love being a programmer, and I enjoy debugging code. I&#39;ve never been satisfied with any explanation that something &quot;just is&quot; - I have to understand why something happened or I can&#39;t relax about it. It&#39;s not enough for a stressful production outage to be over; I need to get to the root cause of it. This is true even in the rare instance that there&#39;s nothing I can do about it, and thus I get no value out of knowing the cause - the knowledge is the reward for me.</p> <p>I have a tendency to demand people &quot;prove it&quot; when making claims, even believable ones. Similarly, I expect other people to hold me to the same standard - I actually enjoy when I have an explanation for something and friends or co-workers are able to shoot it down.
I want to believe as many true things as possible, and disbelieve as many false ones.</p> <p>Nothing is ever just noise for me - there&#39;s always some kind of pattern that I want to find in the noise.</p> <h2>#2: Deliberative</h2> <blockquote> <p>People exceptionally talented in the Deliberative theme are best described by the serious care they take in making decisions or choices. They anticipate obstacles.</p> </blockquote> <p>Yep, this is absolutely me. This &quot;strength&quot; is so strong within me that it can often be a weakness, one that I actively try to overcome regularly - I can often get &quot;analysis paralysis&quot;. </p> <p>It takes a great deal of information gathering before I&#39;m willing to make any big decision - I ask an annoyingly large number of questions. This is true even for day-to-day things; the number of questions I asked my realtor when buying my first house actually caused him to become so exasperated that he and I had to part ways.</p> <p>When facing any decision, the first thing I want to know is what the risks are, and I try to plan for every possible outcome. If I don&#39;t feel like I understand the risks of a decision, I often cannot make one.</p> <p>One of the ways I try to address the analysis paralysis weakness is by plowing forward with functional spikes and throwaway code experiments; that way I&#39;m not just stuck reading Wikipedia pages or StackOverflow posts. Experimentation is often the best way to acquire real knowledge.</p> <h2>#3: Learner</h2> <blockquote> <p>People exceptionally talented in the Learner theme have a great desire to learn and want to continuously improve. The process of learning, rather than the outcome, excites them.</p> </blockquote> <p>Kind of surprised this one wasn&#39;t higher, though 3 out of 34 is still pretty high.
I&#39;ve been accused of being a &quot;perpetual student&quot; on more than one occasion, which is fair since I&#39;ve only been able to last one year after graduating before wanting to go back to school for another degree.</p> <p>I regularly take MOOC classes, attend conferences, and watch talks online. The level of excitement I felt when the company I work for announced that engineers could get free accounts on O&#39;Reilly Safari and Pluralsight is, frankly, embarrassing. I read nonfiction constantly, usually jumping between 3 or 4 books at a time, and I consider reading fiction a waste of time because I don&#39;t learn anything while reading, and that time could be better spent learning.</p> <p>One of the things that Gallup says I should try to focus on as an action item as a Learner is:</p> <blockquote> <p>Seek roles that require some form of technical competence. You will enjoy the process of acquiring and maintaining this competence.</p> </blockquote> <p>A-yup, the fact that it&#39;s basically impossible to ever feel &quot;caught up&quot; in this industry is one of my favorite things about it. There&#39;s always stuff to learn, and I love mastering new skills, languages, and technologies.</p> <h2>#4: Intellection</h2> <blockquote> <p>People exceptionally talented in the Intellection theme are characterized by their intellectual activity. They are introspective and appreciate intellectual discussions.</p> </blockquote> <p>Very accurate. One of my favorite pastimes is discussing totally unimportant nerd shit with friends for hours and hours. </p> <p>I like exercising my brain muscles, solving problems, and challenging myself.
When I was narrowing down college choices, eventually the deciding factor between my last two options was that I simply wanted to go to the school that I thought would be harder (the one that didn&#39;t offer me a full ride, so I&#39;m still paying for this strength).</p> <p>I&#39;ve often changed my opinions on issues because I challenged myself on some of my beliefs, and tried to reason my way from first principles to a new conclusion, and found myself on the opposite side of an issue from where my gut reaction initially landed.</p> <p>I don&#39;t think the test is telling me I&#39;m &quot;smart&quot;, which would definitely be horoscope territory. But I think you can be dumb and still really enjoy thinking, so I think this description is still fair. </p> <h2>#5: Restorative</h2> <blockquote> <p>People exceptionally talented in the Restorative theme are adept at dealing with problems. They are good at figuring out what is wrong and resolving it.</p> </blockquote> <p>I think this is, in a lot of ways, a natural result of some of the other strengths. As I said before, I love debugging and solving problems. I have a co-worker who is fond of saying that he wouldn&#39;t want to be a murderer if I was the detective assigned to the case.</p> <p>Some of my &quot;action items&quot; for this strength are particularly entertaining to me.</p> <blockquote> <p>Seek roles in which you are paid to solve problems. You might particularly enjoy roles in medicine, consulting, <strong>computer programming</strong>, or customer service, in which your success depends on your ability to restore and resolve.</p> </blockquote> <p>Uh, yeah.</p> <h1>Conclusions</h1> <p>I really liked this test, and I think it definitely nailed me. Basically the test tells me that I&#39;m doing exactly what I ought to be doing in terms of career choices, which is nice.</p> <p>After I took a class with the guy at work who is so gung-ho about this test, he released my full ranking of all 34 strengths.
This means I was able to see what my bottom 5 strengths (weaknesses) are. When I saw my bottom 5 weaknesses, I was very happy to discover that they came out of that pool of 10 or so strengths where I&#39;d have dismissed the entire test if it had told me I was strong in one of them.</p> <p>The book that surrounds this test (and the BigWigs who introduced it at work) put a lot of stock into these results. We&#39;re actually supposed to include our Top 5 in our e-mail signatures, and everyone is supposed to tailor how they interact with people to the recipient&#39;s strengths. I&#39;m not entirely sure how I would do such a thing, like what would it mean if I was talking to someone who&#39;s really good at Empathy? Should I talk not about the facts of a task, but of how it makes me feel so they empathize? Weird, pass.</p> <p>One of the things the book and surrounding material pushes, though, is playing to your strengths rather than trying to cover your weaknesses. This advice makes sense to me - essentially you want to emphasize the areas where you&#39;re strongest, so I should embrace the fact that I&#39;m an analytical, deliberative, learning, thinking problem-solver and try to make sure that, in my day-to-day work, I&#39;m giving those strengths a chance to shine. </p> <p>In other words, you could spend a ton of time and energy trying to improve your weaknesses, but those areas will still be weaker than they are in someone who has them as strengths. It&#39;s thus kind of a waste of limited resources to even focus on them, and it&#39;s better to consider them a lost cause and try to instead play to your strengths. I actually think this is fantastic advice and I&#39;ve kind of restructured my thinking around this a bit.</p> <p>Some co-workers who took this test disagreed with the results so strongly that they took it multiple times until they got different results. That makes it even harder to want to actually cater my interactions with people to <em>their</em> strengths.
But for me, I think this test absolutely got my number and I think it&#39;s worth doing. The book that goes with this test, <a href="http://www.amazon.com/Strengths-Based-Leadership-Leaders-People/dp/1595620257">Strengths Based Leadership</a>, costs less than $20 and comes with an access code to take the test. I&#39;ve only skimmed the book, so I can&#39;t speak to its quality, but I definitely recommend taking the test.</p> Sat, 30 Apr 2016 00:00:00 +0000 http://www.nomachetejuggling.com/2016/04/30/strengthsfinder/ http://www.nomachetejuggling.com/2016/04/30/strengthsfinder/ Star Wars Machete Order: Update and FAQ <p>Wow, this <a href="http://www.nomachetejuggling.com/2011/11/11/the-star-wars-saga-suggested-viewing-order/">Machete Order</a> thing got big! After the post first &quot;went viral&quot; and got mentioned on <a href="http://www.wired.com/2012/02/machete-order-star-wars">Wired.com</a>, I started getting around 2,000 visitors to it per day, which I thought was a lot. But then in the months before <em>Star Wars Episode VII: The Force Awakens</em> was released, it blew up like Alderaan, peaking at 50,000 visitors DAILY. This year, over 1.5 million unique users visited the page. <a href="http://www.google.com/trends/explore?hl=en-US&amp;q=machete+order,+cure+for+cancer,+lindsay+lohan+naked&amp;cmpt=q&amp;tz=Etc/GMT%2B5&amp;tz=Etc/GMT%2B5&amp;content=1">It&#39;s been nuts</a>.</p> <p>So let me start out by thanking everyone for liking and spreading the original post - I&#39;m truly floored by how well-received the post was. Considering I wrote a nearly 5,000-word essay on Star Wars, I&#39;m pretty amazed that only a handful of times did someone tell me I was a loser neckbeard who needs to move out of my parents&#39; basement and get a girlfriend (I&#39;m married with a kid by the way). People only called for my public execution a couple times.
On the internet, that&#39;s the equivalent of winning an Oscar, so thanks everyone!</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/machete_order_popularity.png" width='640' height='141'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>Holy shit!</p></figcaption></figure></div> <p>In all seriousness, I&#39;ve had thousands of people tell me I &quot;fixed&quot; Star Wars and made the saga more enjoyable for them. I think this is an unnatural amount of praise - after all, I&#39;m just a guy who watched some movies in the wrong order and skipped one, then wrote down why. I didn&#39;t create fanedits or anything truly difficult like that. But at the same time, the reason I published the post in the first place was that I felt Machete Order &quot;fixed&quot; Star Wars for me personally, allowing me to use the relevant parts of the Prequels to make Return of the Jedi a better movie, so it&#39;s really awesome that so many other people felt similarly. <strong>All joking aside, thank you.</strong></p> <p>Since it&#39;s been about 4 years since the original <a href="http://www.nomachetejuggling.com/2011/11/11/the-star-wars-saga-suggested-viewing-order.html">Machete Order</a> post, and now that Episode VII is out, <strong>I thought I&#39;d post a small update answering a lot of the questions I&#39;ve been asked</strong> and responding to the most common criticisms of Machete Order. <strong>There will be no spoilers of Episode VII here</strong>, though I will be talking about it a bit and I can&#39;t predict what people will post in the comments, so if you haven&#39;t seen it yet, make like a Tauntaun and split.</p> <!--more--> <h1>But Episode I has Maul!</h1> <p><em><strong>&quot;Are you really advocating I never watch Episode I or show it to anyone?&quot;</strong></em></p> <p>Man, no. 
By far the most common complaint is that I am advocating never watching Episode I, and that&#39;s a shame because it has the best podrace/duel/song/whatever. So let me be perfectly clear, I am not advising anyone to pull their Episode I disc out of their box set and throw it in the garbage. By all means, watch Episode I. Hell, I think Episode I is probably a better movie than Episode II is.</p> <p>The point of Machete Order is not, and has never been, ignoring Episode I because it&#39;s bad. It&#39;s been about skipping it because it&#39;s not relevant to Luke&#39;s journey. Episodes II and III are, because we see how his father falls to the Dark Side, and we see elements of his path that are mirroring his father&#39;s. </p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/fates.jpg" width='300' height='169'/></figure></div> <p><strong>By all means, if you like Episode I, watch it.</strong> What I&#39;m advocating though, is watching it sort of like an Anthology film - remember that we&#39;re going to be getting Han Solo origin movies and Boba Fett spinoffs and Rogue One films, and so on, until Disney stops making money off Star Wars. These movies are all going to take place at different times, between different Episodes, or before all of them. If you enjoy or want to share Episode I, I say view it as an Anthology movie, sort of like a prequel to the entire series.</p> <p>In other words, when you&#39;re watching &quot;The Main Saga&quot;, like maybe if you&#39;re doing a Marathon or you&#39;re introducing someone to Star Wars for the first time, watch in Machete Order: IV, V, II, III, VI. When you&#39;re done and that &quot;book&quot; is closed, you can pull in whatever &quot;Anthology&quot; stuff you enjoy, such as the Clone Wars TV shows or movies, the Han Solo spinoff, and Episode I. 
</p> <p>But for some kind of contiguous viewing experience, I think Episode I should be skipped, because it provides mostly backstory to the Republic itself and political goings-on. This makes it an interesting prequel to the entire saga, but a useless distraction from Luke&#39;s journey. </p> <h1>But Episode I has backstory!</h1> <p><em><strong>&quot;Aren&#39;t parts of Episode I crucial pieces to the story?&quot;</strong></em></p> <p>No, they aren&#39;t. They might be crucial pieces to the overall Star Wars story, but not to Luke&#39;s story, which is the whole point of Machete Order: re-centering the main saga narratively on Luke.</p> <p>Yes, Sheldon, <a href="http://www.youtube.com/watch?v=keSFjjhUyVA">Chancellor Valorum is relevant</a> to understanding Palpatine&#39;s rise to power. Yes, Qui-Gon&#39;s belief that Anakin is the chosen one, combined with his untimely demise, is very directly relevant to understanding Anakin&#39;s fall. That makes them interesting backstory - but they are <strong>not relevant to Luke&#39;s journey</strong>.</p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/amidalabeforesenate.jpg" width='300' height='153'/></figure></div> <p>People who point this out act like it&#39;s sacrilege to (temporarily, see above!) skip Episode I because it fleshes out the Star Wars universe in various ways. So they might advocate Episodes I, II, III, IV, V, VI, VII, in order. But imagine that Disney releases an Episode 0, all about how Qui-Gon ignored some other ancient Jedi prophecy, and as a result his entire family died or something. This would provide a great understanding of why Qui-Gon is so insistent on training Anakin, and why he passes that burden to Obi-Wan. If someone were to suggest skipping Episode 0, by the logic of Machete Order detractors this would be impossible, because it&#39;s critical in understanding Qui-Gon&#39;s motivations.
But skipping it would simply yield the regular Episode Order we have now, which is what they&#39;re arguing for. This could go back forever: the exact order being advocated as &quot;correct&quot; would somehow be missing a critical component, because it skips the hypothetical &quot;Episode -1&quot; and &quot;Episode -2&quot;.</p> <p>In other words, we don&#39;t really need to know why Qui-Gon is so intent on Anakin being trained or why he believes so strongly in a prophecy that the rest of the council doesn&#39;t seem to care much about. &quot;He just does&quot; is a perfectly fine answer for now, and it would be a perfectly fine answer if Episode 0 existed too. Similarly, we don&#39;t really need to know all of the machinations that led to Anakin embracing the dark side; &quot;he just does&quot; is perfectly suitable, and in fact I argue that &quot;he lacks proper training&quot; is a far less sympathetic answer than &quot;it&#39;s very seductive&quot;, which is what we&#39;re left with when skipping Episode I.</p> <p>All of these movies make references to past events that we don&#39;t ever see on screen. That&#39;s what these big &quot;worldbuilding&quot; movies are all about, and why there&#39;s a whole business for books and comics and video games to support them. We don&#39;t <em>need</em> to see Anakin&#39;s mother becoming a slave (not even in a movie), just like we don&#39;t <em>need</em> to know exactly why Nute Gunray hates Padme so much in Episode II. It&#39;s all backstory and fleshes things out a bit, but it&#39;s not critical; your mind fills in the gaps, makes educated guesses, and so on.</p> <p>Bear in mind, people happily enjoyed Star Wars without ANY of the prequels for sixteen years, and nothing that happened in the original trilogy left some kind of gaping unanswered question in the minds of the audience.
So really, since the whole point of Machete Order is refocusing the story on Luke, claiming that any part of the prequels is truly <strong>necessary</strong> is a bit of a hard sell. I argue that Episodes II and III make Luke&#39;s story more enjoyable to watch in VI, but <em>crucial</em>? As in, unable to be understood without them? Nah.</p> <h1>But the prequels aren&#39;t that bad!</h1> <p><em><strong>&quot;I grew up with the prequels and they&#39;re not as bad as you think! You&#39;re blinded by nostalgia for the originals!&quot;</strong></em></p> <p>I had no idea what a huge population there was of Prequel fans, people who were born in the &#39;90s and grew up watching the prequel trilogy and love them. Many people even claim Episode I is their favorite, or their favorite character is Jar-Jar. These people are not trolls; they genuinely love these movies. In fact, they claim that the only reason I and others dislike the prequels is that our own nostalgia for the original trilogy blinds us to their flaws.</p> <p>First, a bit of an admission: I am not a huge Star Wars <a href="https://www.washingtonpost.com/lifestyle/in-what-order-should-you-watch-the-star-wars-movies/2015/12/09/25e96e88-9cf8-11e5-a3c5-c77f2cc5a43c_story.html">&quot;superfan&quot;</a>; I&#39;m just a movie geek. If I were some kind of rabid Star Wars fanboy, I would imagine I&#39;d consider it borderline blasphemous to advocate skipping an entire film in the Gospel of Star Wars. But as a movie nerd, I&#39;m more than happy to make whatever adjustments I think make for a better film-watching experience, because Star Wars is just a bunch of movies to me. I skip Godfather III and The Incredible Hulk too. They&#39;re just movies.</p> <p>So, here&#39;s my big secret: <em>I did not grow up watching Star Wars</em>. In fact, whenever I saw clips or images from the movies, I thought they looked boring (it looked like they mostly took place in the desert), and I skipped them.
I liked parody movies, so I watched Spaceballs instead (a bunch). It was not until I was a senior in high school that my older sister discovered I still hadn&#39;t seen any Star Wars movies, and insisted I watch them. This was in 1999. To reiterate: <strong>I saw Episodes IV, V, VI, and I all for the first time, the same year, when I was seventeen.</strong></p> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/spaceballs.jpg" width='300' height='169'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>My Star Wars</p></figcaption></figure></div> <p>As a result, I can confidently say that I am not blinded by nostalgia for the original trilogy - they played no role in my childhood. I saw Episode I almost immediately after seeing the original trilogy, and I feel totally justified in saying that every single one of the prequel trilogy films is vastly inferior to the original trilogy entries. I think my opinion here is pretty much objective - in fact, I think the younger crowd talking about the greatness of the prequels are the ones blinded by their nostalgia.</p> <p>Further, the very first versions of the original trilogy I saw were the Special Editions, because that&#39;s what was available on VHS at the local video store at the time. Han never shot first for me. A cartoon Jabba always talked to Han after Greedo, Jabba&#39;s palace has always had an extended dance number, and the entire galaxy (not just Ewoks) always celebrated the fall of the Empire, at least for me. I didn&#39;t see the &quot;Despecialized&quot; versions until years and years later, and so I can once again confidently say, with total objectivity, that they are better than the Special Editions.
The improved special effects for Cloud City and some matte improvements are welcome, but otherwise the Special Editions make the movies worse.</p> <p>Look, you can like or even love the prequels, and I totally understand why you might if you grew up watching them. But really, they are dreadfully bad movies, as far as movies go. Frankly, I also think Return of the Jedi isn&#39;t a very good movie either; it&#39;s a mediocre movie that&#39;s elevated by having stellar <em>moments</em>. But all three of the originals are parsecs better than all of the prequels (yes, even III, &quot;the good one&quot;).</p> <p>The prequels aren&#39;t genuinely good movies just because you liked them when you were a kid. Kids are completely capable of loving terrible movies. Kids are stupid. When I was a kid, I thought the two best movies in the world were Back to the Future and Superman III. Turns out, one of them is genuinely good, and one of them is actually dog shit.</p> <p>I am officially dismissing outright any criticism that my dislike for the prequels is because of my nostalgic childhood affection for the originals. I have no such childhood affection, and the prequels are dreck. Sorry.</p> <h1>What About Force Lightning?</h1> <p><em><strong>&quot;Doesn&#39;t Machete Order ruin the surprise that Emperor Palpatine can shoot lightning?&quot;</strong></em></p> <p>Yep, sure does. This was something I hadn&#39;t realized until a commenter pointed it out to me. But indeed, if you&#39;re watching the original trilogy, the first time Palpatine starts electrocuting Luke, it&#39;s quite a shock (har har). </p> <p>With Machete Order, this surprise happens when Count Dooku just casually does it in Episode II. It&#39;s a real shame, because here it carries none of the emotional or narrative impact.
I have no real defense for this, and I actually now consider it Machete Order&#39;s greatest flaw.</p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/forcelightning.jpg" width='300' height='155'/></figure></div> <p>I kind of always thought the lightning wasn&#39;t a &quot;Sith power&quot; so much as something that Palpatine could do because he&#39;s so incredibly fucking evil. But no, the prequels make it clear this is just one of the video game powers you get by embracing the dark side, and they just do it willy-nilly all over the place. Apparently you can just absorb it with a lightsaber if you have one handy, or without one if you&#39;re Yoda (hint to Luke: don&#39;t throw your lightsaber away; it has a +2 against Force Lightning!)</p> <p>It&#39;s even kind of annoying that this is typically referred to as &quot;force lightning&quot; now, like it&#39;s some kind of standard-issue thing you learn in Graduate Level Sith Academy before you get your diploma. I think it was better when it was just &quot;that evil scary crazy lightning shit The Emperor does out of nowhere.&quot; But alas, the prequels ruined this (have I mentioned that they suck?) and Machete Order is unable to fix it. </p> <p>The only way to preserve this twist is to simply move Episode VI two movies earlier, which is effectively just Release Order (IV, V, VI, I, II, III). I like the lightning surprise a lot, but I think overall it&#39;s worth giving it up in order to make the final confrontation between the Emperor, Vader, and Luke more enjoyable by watching II and III first.</p> <p>The best defense I can offer is that there&#39;s basically no way to preserve this twist without moving the &quot;Luke and Leia are twins&quot; surprise back to Episode VI.
And as I&#39;ve pointed out elsewhere, it actually works far better at the end of III, when the audience has no idea they are related, but does know who they are (by watching IV and V before it). So in a sense, you kind of have to choose if you want an effective twin twist or an effective lightning twist, and I personally choose the twins.</p> <h1>Where Do Episode VII and Rogue One fit?</h1> <p><em><strong>&quot;Since Rogue One is basically a prequel to IV, should Machete Order start with it? Where do the new Episodes go? What about the Star Wars Story entries?&quot;</strong></em></p> <p>Every time a new Star Wars movie comes out, I get a bunch of tweets and e-mails asking where it fits in Machete Order. It&#39;s flattering people care so much, but my answer is probably going to always be the same. So I&#39;m going to try and answer it once and for all.</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/machete_order_final.png" width='640' height='309'/></figure></div> <p>The Force Awakens, The Last Jedi, and all of the new numbered Episodes are a chronological continuation of the story. If nothing else, they can be seen as both a fresh start for new characters, and as an epilogue to Luke&#39;s journey. They are all in both episode order and chronological order, so there&#39;s no reason to play musical chairs with them. I don&#39;t see any narrative benefit to watching them out of order at all, so <strong>watch all new numbered Episodes in order after Machete Order, no matter how many they make</strong> (hint: they&#39;ll keep making them until they stop making money).</p> <p>The &quot;A Star Wars Story&quot; films are a bit different, since they seem to take place at all sorts of different points in time (though, as of this writing, all of them take place between III and IV). 
Rogue One is particularly interesting since it literally takes place seconds before Episode IV, so a lot of people are suggesting Machete Order actually <em>start</em> with it.</p> <p>In my opinion, it doesn&#39;t matter that Rogue One takes place right before A New Hope. <strong>The purpose of Machete Order was and always will be to refocus the story of the Original and Prequel Trilogies to be about Luke&#39;s journey</strong>. Episodes II and III aren&#39;t included for all their mythos and world-building, they&#39;re included because Anakin&#39;s fall is directly relevant to Luke&#39;s path.</p> <p>Lots of people are claiming Rogue One is &quot;necessary&quot; now because it helps explain a lot of A New Hope. I disagree. The original Star Wars (Episode IV) is a timeless piece of groundbreaking cinema, and it&#39;s been beloved by generations for nearly 40 years without Rogue One. <strong>I don&#39;t know how much less &quot;necessary&quot; a film could get than having 40 years of fans being unbothered by its nonexistence</strong>. It is true that Rogue One is essentially a two-hour retcon of a 2-meter-wide &quot;plothole&quot;, but the film is structured as a retcon, not as a new introduction to the series. Some have suggested Rogue One should be the first film in the viewing order and I don&#39;t see it at all. That&#39;s like suggesting you read &quot;Rosencrantz and Guildenstern Are Dead&quot; before &quot;Hamlet&quot;. Rogue One doesn&#39;t work as an introduction, it does none of the worldbuilding that A New Hope does (or hell, even that The Phantom Menace does). Frankly, the movie&#39;s most glaring flaw is that the first 45 minutes or so are incredibly rushed and disjointed - the film&#39;s own characters aren&#39;t given proper introductions, let alone the entire galaxy. Characters in Rogue One talk about The Force without a single line explaining what it is. 
Darth Vader&#39;s introduction is abysmal if it&#39;s the first time an audience is seeing him, and his first scene ends with a dorky pun. No, Rogue One as the first movie doesn&#39;t work for me; I cannot recommend strongly enough against showing Rogue One first to someone who has never seen Star Wars. These Anthology films are meant to be viewed in the margins of the main Episode series; that&#39;s where they belong. </p> <p>The main objection to what I&#39;m saying seems to be that Rogue One should be viewed before Episode IV because it chronologically takes place before it. If there&#39;s one thing that should be pretty obvious about Machete Order from the outset, I would think it&#39;s the fact that I don&#39;t care when things take place chronologically. I&#39;d argue that this is really Machete Order&#39;s defining characteristic, so I&#39;m not sure where the &quot;but chronological!&quot; crowd is coming from here. What I care about is what works narratively, not chronologically. Lots of movies are told out of sequence because they work better narratively that way. That&#39;s what Machete Order is all about: telling the story in a way that&#39;s not chronological but more narratively satisfying.</p> <p>All of these &quot;A Star Wars Story&quot; entries are going to basically work in any order, after viewing the main Episodic content. The Han Solo movie, Boba Fett movie, Obi-Wan movie, Yoda movie, or whatever else will work better when viewed after the main Episodes than it would before the Original Trilogy. This is why <strong>I recommend viewing all the other Star Wars stuff, optionally, after the numbered Episodes</strong>. If the Episodes are up to Episode XII by the time someone wants to watch Star Wars, do Machete Order for the Original/Prequel Trilogies, then Episodes VII through XII, then any/all other Star Wars content, in any order.
It&#39;s in this category of &quot;other Star Wars stuff&quot; that I&#39;d put any TV series, the Clone Wars cartoon, the Holiday Special, Rogue One, any Star Wars Anthology films and, yes, Episode I.</p> <p>So when one of these Star Wars movies comes out, this is my final answer. Machete Order, then Episodes VII through whatever, then anything else in any order.</p> <h1>Is Machete Order Still Relevant?</h1> <p><em><strong>&quot;Disney is releasing a new Star Wars movie every year - does Machete Order even still matter?&quot;</strong></em></p> <p>Honestly, probably not. I still think that, if you&#39;re going to watch the Original Trilogy and the Prequel Trilogy, the best way to watch them is to skip Episode I and watch in Machete Order. However, <strong>in the Disney era of Star Wars, I&#39;m not entirely sure that viewing the Original and Prequel trilogies even matters anymore</strong>.</p> <div class='image alignleft' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/crawl.png" width='300' height='130'/></figure></div> <p>I know that this is sacrilege and it makes me sad too, because I think the Original Trilogy is great, but you have to sort of look down the lens of time for a bit and realize that, at some point, there will be 50 or so Star Wars movies. There may well be theatrically released Star Wars movies that you don&#39;t get to watch because you&#39;re dead. <strong>When the 50th Star Wars film is released in theaters, will someone have to watch all 49 previous films to watch it?</strong> Remember, these movies are for kids, so you&#39;re talking about sitting an 8-year-old down to watch over 100 hours of film and who-knows-how-many hours of television, just to go see a silly movie about laser swords and space ships.</p> <p>As of this writing, the only Episode we have after the Original and Prequel Trilogies is The Force Awakens. And yeah, that movie has Han Solo, Luke, Leia, C3P0, R2D2, references to Vader, and so on.
With only 6 other Episodes (5 with Machete Order), it&#39;s not unreasonable to sit down and marathon the other films before watching The Force Awakens. But once the Sequel Trilogy is completed and we&#39;re at Episode IX, will the other trilogies be necessary viewing? I honestly don&#39;t think so - I think <strong>The Force Awakens can be watched as the very first Star Wars movie a person sees, and it works just fine</strong>. Everything from previous films is either established well enough in The Force Awakens, or treated like a mysterious legend. The truth is, pretty much any of these movies can be watched alone, that&#39;s what the opening crawl is for. And yes, Episode VIII will likely have Luke training Rey or something like that, so I would argue that Episodes VII-IX are an extension of Luke&#39;s story and thus should be viewed after a Machete Order viewing of the other trilogies. But I have no doubt that Luke and Leia will both be dead by the end of Episode IX, so by the time Episode X is released, will someone need to watch the other trilogies? Won&#39;t those stories be about Finn, Rey, or possibly <em>their</em> descendants, or yet another new set of characters?</p> <p>So if you&#39;re going for a full Marathon of Star Wars, <strong>Machete Order is the way to go when covering the Original and Prequel trilogies</strong>. Or if someone loved The Force Awakens and wanted the backstory, Machete Order all the way. But I think the Original and Prequel Trilogies are going to become increasingly irrelevant as time goes on. One of the main criticisms of The Force Awakens is that it pulls so much material from the original trilogy that it seems like fanservice. I think that&#39;s missing the forest for the trees - The Force Awakens is re-using elements from the OT because it&#39;s a quasi-reboot. 
It&#39;s intentionally giving us another Death Star, a Vader-esque character, a Luke-esque protagonist, a trench assault on a giant base, and a retread story about a secret file carried by a droid for a group of rebels trying to destroy an empire. It&#39;s doing all that <strong>so that people who watch The Force Awakens without watching any previous Star Wars movie can enjoy those elements</strong>. The truth is, going forward the Star Wars films you personally love will just seem boring and stupid to kids growing up on the Disney era. The Episode XIX, XX, XXI &quot;trilogy&quot; will be so far removed from the Original Trilogy, I promise that your grandkids aren&#39;t going to give a damn about it. Hell, I&#39;d be shocked if they even kept numbering these suckers after 12, everything will just be &quot;A Star Wars Story&quot; entries.</p> <h1>Other Stuff</h1> <p>Those are all the questions I get regularly. I think I&#39;ll update this one post with new questions I get in the future, so that my poor little Software Engineering blog doesn&#39;t turn into Star Wars Central or something. If you have other criticisms of Machete Order or other questions, feel free to leave a comment. I&#39;ve gotten over 1,000 comments on the original post, and I read them all.</p> <p>And again, thank you to everyone who made Machete Order blow up all over the place. I&#39;ve been on the radio multiple times and <a href="http://www.npr.org/2014/03/20/291977042/theres-more-than-one-way-to-watch-star-wars">NPR</a>, and had articles that mention me by name published in <a href="http://www.nydailynews.com/entertainment/movies/star-wars-fans-debate-movie-marathon-viewing-order-article-1.2454281">New York Daily News</a>, <a href="https://www.washingtonpost.com/lifestyle/in-what-order-should-you-watch-the-star-wars-movies/2015/12/09/25e96e88-9cf8-11e5-a3c5-c77f2cc5a43c_story.html">Washington Post</a>, and <a href="http://www.cnn.com/2015/12/08/entertainment/star-wars-machete-order/">CNN</a>. 
The order has been mentioned on <a href="https://www.youtube.com/watch?v=effD1u4oCRE">King of the Nerds</a>, <a href="https://www.youtube.com/watch?v=keSFjjhUyVA">The Big Bang Theory</a> and <a href="https://www.youtube.com/watch?v=XP0F1eKJZ3s">Late Night with Seth Meyers</a> by one of my favorite comedians, Patton Oswalt. As far as 15 minutes of fame go, it&#39;s been a real blast, and I have everyone who saw the post and shared it to thank.</p> <p>May the Force be with you, always.</p> Mon, 28 Dec 2015 00:00:00 +0000 http://www.nomachetejuggling.com/2015/12/28/machete-order-update-and-faq/ PhD Status Report <p>It&#39;s been a long time since I posted about how school is going, and I figured the folks who read my blog (all two of you, hi Mom!) might be curious.</p> <p>Since the end of my Spring 2013 semester, I&#39;ve been in &quot;Research Phase&quot;. This means I&#39;m finished with classwork and have been working on my research project. It&#39;s been a little over two years now, so here&#39;s what has happened.</p> <h1>Picking a Project</h1> <p>About two years ago, I started working with my advisor trying to figure out what area my research would be in. I&#39;ve always had a fascination with Genetic Algorithms and Metaheuristics, so I knew I wanted to do something involving that subfield. Two of my projects at school utilized Genetic Algorithms, and I had a lot of experience in the area.</p> <p>I went back to school to primarily focus on CS Theory &amp; Algorithms, but I knew my biggest strength was Software Engineering. In other words, I like theory, but I&#39;m not sure I&#39;m a real theoretician, and I wanted to play to my strengths a bit. This meant I wanted to be writing real, working code, rather than proofs. If I wound up proving anything interesting, that&#39;d be great, but I didn&#39;t want that to be part of my critical path to completing my PhD.
Lots of CS PhDs go off with a whiteboard and paper and output some amazing results, but I knew that wasn&#39;t going to be me.</p> <p>My first idea was to build some kind of framework for comparing different metaheuristics. I&#39;ve worked extensively with Genetic Algorithms, but that&#39;s just one of an entire class of Metaheuristics. I had no experience with things like Swarm algorithms, Ant Colony System optimization, Tabu search, Simulated Annealing, and so on. I thought it would be interesting to pick some known optimization problems like Traveling Salesman using <a href="http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/">TSPLIB</a>, code various &#39;solvers&#39; using different techniques, and compare their performance.</p> <p>I started building out the project, codenaming it <code>judy</code> (because it was like a judge, ha ha. My advisor hated this name and told me to change it.) Unfortunately, while doing a literature review in this area I discovered that it&#39;s more or less accepted that all metaheuristics have a similar level of effectiveness.</p> <p>The more I researched, the more I realized that this would probably not be a valuable contribution to the field. It was becoming very clear that this project wasn&#39;t well-formulated, and wasn&#39;t going to be successful. I hadn&#39;t spent very long on it, building out some parts of Judy but not much more. I needed to pick a different project, but I was still very interested in Metaheuristics, so I knew I wanted it to be closely related.</p> <h1>Course Correction: Picking a Similar But Different Project</h1> <p>I had already built out some parts of Judy by this point and had some pretty promising software. In fact, the <a href="https://github.com/rodhilton/metaheuristics-engine">metaheuristics-engine</a> I had been building was a pretty nifty generic simulator for metaheuristic algorithms.
I&#39;d used my skills as a software engineer to build a few interfaces that were easy to implement and then drop directly into a simulator that would scale out to however many processors were on a machine, handle as much of the metaheuristic algorithm in parallel as possible, and do various neat things like safe-journaling results (so that an interrupted simulation could resume). Thus, writing new metaheuristic implementations to search problem spaces would be really easy and fun.</p> <p>I wanted to use this software because I enjoyed building it, and I thought it would be cool to enhance the capabilities of my metaheuristic engine as a side-artifact of my research. I had also started building out a taxonomy of metaheuristics and how they were related in terms of object-oriented relationships (for example, a <code>GeneticAlgorithm</code> was a type of <code>EvolutionaryAlgorithm</code> in a formally-defined way, because one extended the other polymorphically).</p> <p>I went to another professor, the one who taught the classes where my projects utilized Genetic Algorithms. I knew she had a whole bag of interesting problems to solve, and I wondered if I could use my simulator to find solutions.</p> <p>Basically all of her problems were existence questions. Things like, does there exist a graph $$G$$ such that $$Y$$, with $$Y$$ being a variety of different restrictions. She&#39;d posed a similar problem before, which was the motivator behind <a href="https://speakerdeck.com/rodhilton/rectangle-visibility-and-elusive-k23">my project in her Graph Theory class</a>.</p> <p>Most of these graphs would have their existence proven not by formal proof, but by construction. Meaning, if I could construct a particular kind of graph, that graph itself would be the interesting result (and not necessarily how I derived it).
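<p>To make the taxonomy idea from above concrete: the &quot;is a type of&quot; relationships can be expressed as literal subtyping, so a generic simulator can drive any algorithm through one interface. The sketch below is a hypothetical illustration in Python - the class names mirror the ones mentioned earlier, but everything else is invented for this example and is not the actual metaheuristics-engine code:</p>

```python
from abc import ABC, abstractmethod
import random

class Metaheuristic(ABC):
    """Root of the taxonomy: anything that iteratively improves candidates."""

    @abstractmethod
    def step(self, candidates):
        """Produce the next set of candidate solutions."""

class EvolutionaryAlgorithm(Metaheuristic):
    """Population-based search: score candidates, keep the best, vary them."""

    def __init__(self, fitness):
        self.fitness = fitness

    def step(self, candidates):
        ranked = sorted(candidates, key=self.fitness, reverse=True)
        survivors = ranked[: max(1, len(ranked) // 2)]
        return survivors + [self.vary(c) for c in survivors]

    def vary(self, candidate):
        return candidate  # subclasses supply their own variation operator

class GeneticAlgorithm(EvolutionaryAlgorithm):
    """A GeneticAlgorithm is an EvolutionaryAlgorithm whose variation
    operator is (in this toy version) a single bit-flip mutation."""

    def vary(self, candidate):
        i = random.randrange(len(candidate))
        return candidate[:i] + (1 - candidate[i],) + candidate[i + 1:]

# The formally-defined relationship is literal polymorphic extension:
ga = GeneticAlgorithm(fitness=sum)
assert isinstance(ga, EvolutionaryAlgorithm) and isinstance(ga, Metaheuristic)
```

<p>Because the simulator only ever calls <code>step</code>, any new metaheuristic that fits the interface can be dropped in and parallelized the same way.</p>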
My idea was, for each of her problems, I would figure out a way to randomly generate potential solutions, a way to score solutions in terms of how close they are to the ideal, and a way to mutate candidate solutions or perform crossover between them. I&#39;d write code for the problems, use/enhance my simulator, and then run the simulation on our ultrapowerful university computer across hundreds of threads.</p> <p>When I took on this project, <strong>I knew that it was high-risk/high-reward</strong>. For every problem I was working on, it was entirely possible that the solution I was looking for does not exist (remember, the only proofs possible are via construction). I also knew it was possible that even if it exists, my strategy might not find it, or that my approach might get stuck in local optima. In other words - I was essentially applying a technique intended for discovering SUBOPTIMAL results to optimization problems, hoping that the threshold for optimality would be low enough that the technique could work.</p> <p>It&#39;s been two years since I started this project. In that time, I have been working on three different particular problems, growing my metaheuristics solution engine, and heavily taxing the resources on the university parallel computation platform.</p> <p>The first problem I worked on has been running for the longest time. Since this was a continuation of my project in my advisor&#39;s Graph Theory class, I simply adapted it to use my new and improved simulator, and put it up on our university multiprocessor machine. I&#39;m not going to bore you with the details, but basically I&#39;ve been trying to find a particular graph configuration with certain restrictions, to see if a complete graph of a certain size exists. Since I&#39;m looking for a complete graph, the &#39;fitness&#39; criterion is simply how many edges my best solution has, out of a possible $$\frac{n(n-1)}{2}$$ edges.
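<p>The shape of that search - generate, score, mutate, crossover, repeat, with fitness measured as a fraction of the $$\frac{n(n-1)}{2}$$ possible edges - can be sketched as a toy genetic algorithm. This is an illustrative stand-in, not the research code; in particular, the real fitness function checked the problem&#39;s structural restrictions, whereas this toy version just counts edges toward the complete graph:</p>

```python
import random

N = 8                          # vertices in the toy graph
MAX_EDGES = N * (N - 1) // 2   # a complete graph has n(n-1)/2 edges

def generate():
    """Random candidate: each possible edge is independently kept or dropped."""
    return [random.random() < 0.5 for _ in range(MAX_EDGES)]

def fitness(candidate):
    """Toy score: fraction of the complete graph's edges present."""
    return sum(candidate) / MAX_EDGES

def mutate(candidate):
    """Flip one randomly chosen edge on or off."""
    child = list(candidate)
    i = random.randrange(MAX_EDGES)
    child[i] = not child[i]
    return child

def crossover(a, b):
    """Single-point crossover over the edge list."""
    cut = random.randrange(1, MAX_EDGES)
    return a[:cut] + b[cut:]

def search(generations=500, pop_size=20):
    """Elitist GA loop: score, keep the top half, breed mutated children."""
    population = [generate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

best = search()
```

<p>On this toy objective the search converges quickly, because every additional edge improves the score. The real problems offered no such smooth gradient, which is exactly the &quot;high-risk&quot; part: a candidate can sit near the ideal score while still being stuck in a local optimum.</p>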
About a year ago (after running for a year), my software managed to get ONE edge away from the desired answer. As of this writing, it is still one edge away from the solution I&#39;m trying to discover, which seems to be the best it can do (205,949,225 generations). Unfortunately, this is not an interesting or publishable result unless I can uncover that one missing edge. It&#39;s always possible that the software will find the correct solution tomorrow, but I think at this point I need to admit this is a dead end.</p> <p>I&#39;ve worked on two other problems, but of those, I&#39;ve spent about 95% of my time on just one of them. This particular problem (again, I&#39;ll spare the details) is one that&#39;s been plaguing my advisor herself for over twenty years. I vividly recall one day in Algorithms class when she told the class that, if someone can discover a particular graph with a specific set of properties, &quot;I&#39;ll give you a PhD right there.&quot; It was this problem I chose to work on for most of these last two years.</p> <p>Again, this too was a high-risk/high-reward project. My advisor and I joked that the entire publication that results from this effort would be a couple pages about the background and the reason for wanting to know if this graph exists, and the entire results section would be a single page - a listing and rendering of the graph. If I had found a result, there&#39;s no question that a paper about it would be accepted in virtually any CS journal I sent it to.</p> <p>But once again, I&#39;ve been working on this project for a little over two years now. I recognize that it seems early to give up, given that it&#39;s been plaguing my professor for decades, but she&#39;s been trying to construct the graph by hand, and my goal was to use the metaheuristics to perform a guided search of the solution space computationally.
In the end though, I&#39;ve become convinced after two years of working on this that either the solution doesn&#39;t exist, or that it does exist but my technique will not discover it.</p> <h1>Walking Away</h1> <p>This past semester my wife and I found out that we were going to be having a baby girl in April. Very suddenly, I started to evaluate the relative importance of my degree. It started to feel very stupid to be working on a series of high-risk/high-reward research projects where nothing had panned out in two years. These problems were unbounded in every sense - my simulations might discover a solution after a day, a week, a year, ten years, a thousand years, or never. I started to become increasingly uncomfortable with the notion of continuing down a path of research so open-ended, with the possibility that I was programmatically searching for solutions that may not even exist.</p> <p>When I first decided to go back to school, I did it for fun. I enjoyed my Master&#39;s research and wanted to continue learning and educating myself, and I liked the idea of pursuing a Doctorate. At the time, I told myself and my wife that, since I was doing it for fun and not for any professional reason, I&#39;d stop the instant it stopped being fun. Well, after two years of research that wasn&#39;t panning out, school had certainly stopped being fun for me. In fact, it&#39;s been a source of constant stress and irritation. I&#39;ve been working very hard and building some really cool software, but I have no interesting results to show for it.</p> <p>The analogy I used to describe how I was feeling to my wife was: it was as if I saw people mountain climbing and I thought it looked really fun, and I wanted to do it as well. So I spent a great deal of time and money taking mountain climbing classes. Then I began researching mountain climbing gear and spending a great deal of money on equipment. I planned and scheduled flights to various mountain ranges.
Then I finally started climbing mountains and realized &quot;you know what? This sucks. I&#39;m really cold, and this isn&#39;t fun at all.&quot; But because I&#39;d spent so much time and money on it, I felt like I had to keep going even though I hated it, falling victim to the sunk cost fallacy.</p> <p>After a great deal of thought I decided I would walk away from the PhD altogether. This was not an easy decision, given how much work I&#39;d put into things like the GREs, classes (straight A&#39;s), and passing my preliminary exams. But after two years and nothing publishable to show for all my research work, with a baby on the way, I felt like I needed to re-prioritize and know when to fold &#39;em, so to speak.</p> <h1>...Or Not?</h1> <p>I scheduled a meeting with my advisor to discuss my departure from the program. I was all set to walk away, and in fact had written the entire above portion of this blog post. I had basically accepted that I was done, I killed the processes on the university supercomputer, and moved a ton of books and articles out of my to-read list on Goodreads. Washing my hands of the project and admitting I was no longer pursuing it was extremely liberating - I felt like a huge burden had been lifted simply by shutting down the effort. </p> <p>But then I realized something. My main motivator for coming back to school was how much I enjoyed my Master&#39;s Thesis research, and that I wanted to do more. I had thought that I must love research, and wanted to continue it. The past two years made me realize I didn&#39;t actually enjoy research as much as I thought - a fact that I had trouble reconciling with how much I enjoyed my Master&#39;s research.</p> <p>A few weeks ago, the author of <a href="http://smile.amazon.com/Beyond-Legacy-Code-Practices-Software/dp/1680500791?sa-no-redirect=1">Beyond Legacy Code</a> contacted me to say he referenced my Master&#39;s thesis in his book.
He praised my effort, talking about how much he enjoyed it and how he felt it was one of the better entries he&#39;s come across. It made me look back at my thesis and remember how much I enjoyed working on it.</p> <p>Then it dawned on me. Was it possible that it wasn&#39;t the research I enjoyed at the time, but the actual subject? If it was the subject, then would a research project more in line with my Master&#39;s appeal to me more? I had come back to school for Theory &amp; Algorithms, and I was so laser-focused on those subjects that it hadn&#39;t occurred to me that I might enjoy classes in these topics but <em>not</em> research. Would I enjoy a research project if it were focused on Software Engineering, like my Master&#39;s?</p> <p>Pondering this for a few more days, I suddenly had a new idea for a research project. This was something much more Software Engineering-focused than what I had been working on - in fact it wasn&#39;t related to CS Theory at all. It was more of a natural continuation of my Master&#39;s work, and I was stunned by how <strong>excited</strong> I was by the idea. I once again felt the passion of wanting to complete my PhD, reinvigorated by this new project. My wife commented that she hadn&#39;t seen me this excited about research in the entire two years I&#39;d been working on my metaheuristics project.</p> <p>Of course, there have been some logistical issues. Since I went back to school primarily to focus on Theory, nearly every class I&#39;ve taken has been theory-heavy. Because of this, every single professor I&#39;ve met or worked with has been a theory professor; not a single one of them is experienced in, or even interested in, Software Engineering research. </p> <p>The meeting with my advisor, which I had originally scheduled to tell her I was dropping out of the program, went very differently. I had come up with the new project and done some publication searching all between scheduling the advising meeting and actually having it.
So when I walked into her office, I presented her with my new project idea.</p> <p>I expected her to hate the idea since it&#39;s so unrelated to theory. Instead, her reaction was basically, if I&#39;m passionate about the project she&#39;ll help in any way she can. She doesn&#39;t have a lot of experience with Software Engineering but I don&#39;t think I need a lot of help with the topic - this is what I know best. </p> <p>So that&#39;s my status update. Baby on the way, feeling a strong desire/pressure to finish this PhD quickly, and basically starting over entirely in the research. Reading it back, it seems downright idiotic, but I feel extremely excited about my new project, and for some reason I don&#39;t feel worried.</p> <p>I still feel the same way about the program - this is something I&#39;m doing for fun, and the instant I&#39;m miserable in the program I will walk away. But if there&#39;s any path that ends with me actually completing this degree, I feel like I&#39;m finally on it.</p> <p>The next update about school will either be announcing that I&#39;ve completed my dissertation and am graduating, or that I&#39;ve decided to walk away from the program for good.</p> <h2>TL;DR</h2> <p>Too rambly but you still sort of care? Here it is in a nutshell:</p> <blockquote> <p>I decided the research project I&#39;ve been working on for two years was a dead-end. 
I almost decided to walk away from the program entirely, but instead have started over with a radically different project to give it one last shot before calling it quits.</p> </blockquote> Fri, 13 Nov 2015 00:00:00 +0000 http://www.nomachetejuggling.com/2015/11/13/phd-research-status-report/ http://www.nomachetejuggling.com/2015/11/13/phd-research-status-report/ Testing Against Template Renders in Grails <p>I work with Grails a lot and while I really enjoy it for the most part, there are definitely some weird quirks of the framework.</p> <p>One such quirk is something I encounter whenever I want to write unit tests against grails controller methods that render out templates directly. This isn&#39;t something I do very often - generally I prefer rendering out JSON and parsing it with client-JS - but in some cases when there&#39;s a lot of markup for a page element that you want to be updateable via ajax, it makes sense to render out a template like <code>render(template: &#39;somePartial&#39;)</code> directly from a controller method.</p> <p>Unfortunately, these kinds of methods are very difficult to write tests against. 
While a normal render exposes a <code>model</code> and <code>view</code> variable that you can test against, for some reason using a render with a template doesn&#39;t seem to do this.</p> <p>I&#39;ve seen lots of solutions where you stuff a fake string matching the name of the template using some metaclass wizardry, but then you&#39;re stuck dealing with some semi-view stuff in what you might want to simply be a unit test about the model values placed by the controller method.</p> <p>However, based on <a href="http://stackoverflow.com/questions/15141319/grails-controller-test-making-assertions-about-model-when-rendering-a-template">this StackOverflow post</a>, I&#39;ve written a quick-and-dirty little monkeypatch that exposes the <code>model</code> and <code>view</code> variables in your test, and populated with the values relevant to the template.</p> <p>I&#39;ve got this method in a <code>TestUtil</code> class:</p> <div class="highlight"><pre><code class="language-groovy" data-lang="groovy"><span class="kd">static</span> <span class="kt">def</span> <span class="nf">ensureModelForTemplates</span><span class="o">(</span><span class="n">con</span><span class="o">)</span> <span class="o">{</span> <span class="kt">def</span> <span class="n">originalMethod</span> <span class="o">=</span> <span class="n">con</span><span class="o">.</span><span class="na">metaClass</span><span class="o">.</span><span class="na">getMetaMethod</span><span class="o">(</span><span class="s1">'render'</span><span class="o">,</span> <span class="o">[</span><span class="n">Map</span><span class="o">]</span> <span class="k">as</span> <span class="n">Class</span><span class="o">[])</span> <span class="n">con</span><span class="o">.</span><span class="na">metaClass</span><span class="o">.</span><span class="na">render</span> <span class="o">=</span> <span class="o">{</span> <span class="n">Map</span> <span class="n">args</span> <span class="o">-&gt;</span> <span class="k">if</span> <span 
class="o">(</span><span class="n">args</span><span class="o">[</span><span class="s2">"template"</span><span class="o">])</span> <span class="o">{</span> <span class="n">con</span><span class="o">.</span><span class="na">modelAndView</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ModelAndView</span><span class="o">(</span> <span class="n">args</span><span class="o">[</span><span class="s2">"template"</span><span class="o">]</span> <span class="k">as</span> <span class="n">String</span><span class="o">,</span> <span class="n">args</span><span class="o">[</span><span class="s2">"model"</span><span class="o">]</span> <span class="k">as</span> <span class="n">Map</span> <span class="o">)</span> <span class="o">}</span> <span class="n">originalMethod</span><span class="o">.</span><span class="na">invoke</span><span class="o">(</span><span class="n">delegate</span><span class="o">,</span> <span class="n">args</span><span class="o">)</span> <span class="o">}</span> <span class="o">}</span> </code></pre></div> <p>Then I can call this method with the controller as a parameter anywhere before I invoke the controller action. 
It can go in a <code>setup</code> or <code>@Before</code> method, it seems to work from both Spock tests and the builtin Grails testing framework.</p> <p>So if we have this example:</p> <div class="highlight"><pre><code class="language-groovy" data-lang="groovy"><span class="kd">class</span> <span class="nc">ExampleController</span> <span class="o">{</span> <span class="kt">def</span> <span class="nf">index</span><span class="o">()</span> <span class="o">{</span> <span class="kt">def</span> <span class="n">bigName</span> <span class="o">=</span> <span class="n">params</span><span class="o">.</span><span class="na">name</span><span class="o">.</span><span class="na">toUpperCase</span><span class="o">()</span> <span class="n">render</span><span class="o">(</span><span class="nl">template:</span> <span class="s2">"partial"</span><span class="o">,</span> <span class="nl">model:</span> <span class="o">[</span> <span class="nl">one:</span> <span class="s2">"hello"</span><span class="o">,</span> <span class="nl">two:</span> <span class="n">bigName</span> <span class="o">])</span> <span class="o">}</span> <span class="o">}</span> </code></pre></div> <p>This test will do what we want:</p> <div class="highlight"><pre><code class="language-groovy" data-lang="groovy"><span class="nd">@TestFor</span><span class="o">(</span><span class="n">ExampleController</span><span class="o">)</span> <span class="kd">class</span> <span class="nc">ExampleControllerSpec</span> <span class="kd">extends</span> <span class="n">Specification</span> <span class="o">{</span> <span class="kt">def</span> <span class="nf">setup</span><span class="o">()</span> <span class="o">{</span> <span class="n">TestUtil</span><span class="o">.</span><span class="na">ensureModelForTemplates</span><span class="o">(</span><span class="n">controller</span><span class="o">)</span> <span class="o">}</span> <span class="kt">def</span> <span class="nf">shouldHaveModelAndViewExposed</span><span class="o">()</span> <span 
class="o">{</span> <span class="nl">given:</span> <span class="n">params</span><span class="o">.</span><span class="na">name</span> <span class="o">=</span> <span class="s2">"Rod"</span> <span class="nl">when:</span> <span class="n">controller</span><span class="o">.</span><span class="na">index</span><span class="o">()</span> <span class="nl">then:</span> <span class="n">view</span> <span class="o">==</span> <span class="s2">"partial"</span> <span class="n">model</span><span class="o">.</span><span class="na">one</span> <span class="o">==</span> <span class="s2">"hello"</span> <span class="n">model</span><span class="o">.</span><span class="na">two</span> <span class="o">==</span> <span class="s2">"ROD"</span> <span class="o">}</span> <span class="o">}</span> </code></pre></div> Thu, 27 Aug 2015 00:00:00 +0000 http://www.nomachetejuggling.com/2015/08/27/testing-against-template-renders-in-grails/ http://www.nomachetejuggling.com/2015/08/27/testing-against-template-renders-in-grails/ QCon New York 2015: A Review <p>My default yearly conference, for many years, has been <a href="http://uberconf.com/conference/denver/2015/07/home">UberConf</a>. I really enjoy UberConf because it&#39;s packed full of lots of great sessions, and it&#39;s conveniently local. However, because I go to various local user groups and attend so often, I find that, if I go two years in a row there are too many sessions I&#39;ve seen before, and I wind up disappointed. So for the past few years, I&#39;ve been alternating between UberConf and something new. Two years ago, it was <a href="http://www.nomachetejuggling.com/2013/07/30/oscon-2013-a-review/">OSCON</a>, and this year it was <a href="https://qconnewyork.com/">QCon New York</a>.</p> <p>I chose QCon for a few reasons. One, the sessions seemed very focused on architecture and higher-level concepts, with very few language/technology talks. 
This was right up my alley because, while there are some languages and tools I&#39;d like to go deeper on, I think a more significant area for improvement for me is architecture and scalability. We get tons of traffic at my job - more than any other place I&#39;ve ever worked - so I&#39;ve had to learn a lot about scalability, and the nature of the work has forced me to really see broad system design differently.</p> <p>I went to QCon specifically wanting to improve some areas where I was weak, namely containerization, microservices, and reactive programming. I hear a lot of buzz about these things, and they pop up on recent <a href="http://www.thoughtworks.com/radar/a-z">ThoughtWorks Technology Radar</a>s, and QCon seemed to have a lot of sessions to offer in these areas. It took a LOT of convincing to get my employer to agree to send me to a conference location as expensive as New York, but eventually they approved it. Here I will detail some of my thoughts about the experience, in case it may be of use to others considering QCon.</p> <!--more--> <h1>Sessions</h1> <p>First and foremost, the sessions. Networking isn&#39;t my thing, I&#39;m all about the quality, quantity, and variety of sessions offered. <strong>I picked QCon based on the sessions, and I was not disappointed.</strong></p> <p>The sessions were well-arranged into tracks, as is common with these kinds of conferences. What was somewhat different about QCon, at least from my perspective, was how cohesive sessions were within a track, and how diverse the tracks themselves were. A lot of times tracks are really general, like &quot;Java&quot; or &quot;Agile&quot;, or they can be too similar to each other. In QCon&#39;s case, all of the tracks themselves were very different, but very specific, like &quot;Fraud Detection and Hack Prevention&quot; and &quot;High Performance Streaming Data&quot;. 
Within a track, all of the talks were very closely related, and it actually made sense to pick a track and stick with it, rather than buffet style picking-and-choosing based on session alone.</p> <p>The sessions were the perfect length. I&#39;ve complained before that UberConf&#39;s 90 minute sessions can sometimes seem overlong or padded, and that OSCON&#39;s 30-minute sessions seemed rushed or abbreviated right when they were getting good, but QCon strikes a great balance at 50 minutes each. This is short enough to prevent topic fatigue, but long enough to go in depth. Speakers usually did a great job of giving a presentation in-line with the topic title and description as well, which is (somewhat surprisingly) rare for tech conferences.</p> <p>One complaint is that <strong>slides were usually made available AFTER sessions were over</strong>. I hate when this happens, I want to see slides ahead of time, both because I can use them to make sure the content is going to be interesting, and because I use Evernote to actually take notes IN the PDF itself, highlighting or marking up the document with my own notes. The only argument I can imagine for why slides would be held off until after a presentation is that speakers might be modifying the slides until just before they give a talk, but frankly I think that stinks of unpreparedness. Slides should be available in advance, no exceptions.</p> <p>One excellent feature of QCon was that almost all of the talks were published in video form after the sessions were over (usually late at night or the next day). The recording quality was excellent, full video of the speaker and their slides synced up, and actual cameramen that kept the speaker in frame for the whole talk. Audio quality was excellent as well. 
UberConf does something similar by making audio-only available, but sometimes speakers forget to press record, and I often found myself skipping some sessions with the intent to listen to them on audio later, only to find that they weren&#39;t recorded. QCon solves this problem entirely with a professional A/V staff and quick editing/uploading. What&#39;s more, the slides are available when the video is - <strong>I actually found myself more easily able to take notes on the recorded talks than I was able to when watching talks live.</strong></p> <p>I learned a whole bunch from the sessions I went to, though there was a day (Thursday) where I narrowed down to two talks for every time slot, only to find out based on Twitter that the session I <em>didn&#39;t</em> choose was the better one. This was annoying, but I fixed it with the video recordings.</p> <p>I also really liked there being a special &quot;Modern Computer Science in the Real World&quot; track - it&#39;s rare to see really heavy CS stuff at programmer conferences, I liked it.</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/qcon_sessions.jpg" width='640' height='427'/></figure></div> <h2>Mini-Talks</h2> <p>On Wednesday, each of the tracks had a session which was &quot;mini-talks&quot;, kind of a series of lightning talks related to the track. I really, really love lightning talks, I have found that there are a lot of 40-50 minute talks that really should just be 5 or 10 minute lightning talks, and they&#39;re a great way to get wide exposure on a lot of different topics.</p> <p>I wish QCon had done a mini-talks session for every track, rather than just the tracks on the first day of the conference, because <strong>the mini-talks were great</strong>. 
Again, the video recordings were instrumental here; it was extremely tough to decide which mini-talks to attend, and with the help of video I was able to attend all of them.</p> <h2>Open Spaces</h2> <p>QCon also has an &quot;Open Space&quot; slot for each track, where attendees would get together and brainstorm their own topics, then other attendees would speak about them if they knew anything about the topic.</p> <p>I hate these things, I typically find that they either fizzle out due to not enough participation, or get completely controlled by a single enthusiastic person. I guess a lot of people like Open Spaces, but they just aren&#39;t for me. Every time I saw an Open Spaces talk on the schedule, I wished it was another session, or a session of mini-talks.</p> <h1>Workshops</h1> <p>Like a lot of other conferences, QCon had two days of workshops. Workshops can be really hit or miss in my experience (usually miss), and QCon was no exception. One way QCon did a good job was that Monday was for all-day workshops, and Tuesday was for two half-day workshops (though they had some all-day ones as well). This is a good way to go; personally <strong>I&#39;ve found half-day workshops tend to work better than the all-day variety</strong>.</p> <p>The workshop I picked for Monday wound up being entirely not what I was expecting. It was extremely academic and not very hands-on, which is about the greatest sin a &quot;workshop&quot; can commit. If every participant isn&#39;t on their laptop, it&#39;s not a workshop, it&#39;s just an all-day lecture. QCon was particularly strict about not allowing you to change workshops after making selections, so I was stuck in the workshop the whole day, hating it.</p> <p>The two half-day workshops were much better, but again they suffered from the problem so many workshops have, which is <strong>catering to the slowest, least prepared people in the workshop</strong>.
I&#39;ve said it before, but if you ignore the e-mail that goes out with the instructions to get set up, you should be left behind. Read the e-mails and do the setup; if you can&#39;t do that then you deserve to have your money wasted; the alternative is that everything slows down so you can catch up, meaning everyone else&#39;s money is wasted instead.</p> <p>I especially want to call out <a href="https://twitter.com/everett_toews">Everett Toews</a> for doing an excellent job with his OpenStack half-day workshop. He had a bunch of helpers who got the slower people set up without having to slow down himself, and overall the workshop got a lot accomplished and taught me a lot about OpenStack. I think the session occasionally devolved into &quot;here, copy and paste these commands&quot; lacking explanation, but for the most part the entire workshop was great, and <strong>easily a highlight of the conference for me</strong>.</p> <h1>Keynotes</h1> <p>I tend to expect keynotes to be extremely non-technical technical talks. Like a talk that you might find scheduled normally, except watered down to a point where every audience member would be okay choosing it. Which usually means I&#39;m not a fan of them.</p> <p>OSCON had an interesting approach to the keynote, which was a series of short keynotes, almost lightning-talk style, which I really liked. QCon didn&#39;t do this, but they actually managed to avoid the &quot;technical talks with no technical information&quot; trap by having legitimate technical talks, pretty in-depth from a technical standpoint.</p> <p><strong>Mary Poppendieck&#39;s Microchips to Microservices keynote was fantastic, as was Todd Montgomery and Trisha Gee&#39;s less technical but still highly enjoyable &quot;How Did We End Up Here?&quot; talk</strong>.
I disagreed with a lot of points that Todd and Trisha made, but their talk gave me a lot to think about, which is always fun.</p> <p>I wasn&#39;t a big fan of &quot;Driven to Distraction&quot;, a talk about the different kinds of &quot;X-Driven-Development&quot; there are. This was largely intended to be a comedy talk, rather than a technical talk, and it closed the day rather than opened it. I wasn&#39;t a fan of the humor, honestly. A little into the talk, I realized he was literally going through the entire alphabet: ADD, BDD, CDD, DDD, EDD, and so on. Some letters got multiple definitions; I think I gave up and left around GDD (&quot;Google Driven Development&quot;, because we Google things, har). Kind of ended the conference on a sour note, but something skippable at the end is nice in case your brain is fried, as mine was.</p> <h1>Expo Hall</h1> <p>Like OSCON, QCon had an area for vendors promoting their products and giving stuff away. This room was much smaller than OSCON&#39;s, but it wasn&#39;t off to the side. In fact, you had to walk through it to get to 3 of the salons with regular talks, which was somewhat annoying. However, the vendors themselves were a lot less pushy, and only tried to talk to you if you came directly to their booth and initiated conversation. They also let you go more easily once you indicated you wanted to move on.</p> <p>I can&#39;t complain much about the vendor room; they gave cool stuff away, and getting your nametag scanned for the low low price of being pestered via e-mail later on earned you a spot in a drawing.
There were enough giveaways that your chances were fairly decent; in fact, I won a Lego Mindstorms set.</p> <div class='image aligncenter' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/qcon_haul.jpg" width='640' height='360'/><figcaption style='display:table-caption;caption-side:bottom;'><p class='caption'>Swag!</p></figcaption></figure></div> <p>I still think the coolest version of this I&#39;ve ever seen is at OSCON, where the Expo Hall was gigantic and had all sorts of interesting stuff in it, with a lot of demos and products (even a car). QCon&#39;s room being so much smaller meant that you could go see every booth in just a few minutes, but then you were stuck going through the same room multiple times throughout the conference. It also could get extremely crowded in the room, with both booth traffic and just-passing-through traffic, which discouraged me from wanting to spend much time booth hopping.</p> <h1>Attendees</h1> <p><strong>I really liked the class of people at QCon, it seemed like mostly seasoned veterans in the world of software engineering</strong>. There weren&#39;t a lot of young hipsters or brogrammers or anything like that, it seemed like a lot of graybeards. I&#39;m a fan of that; I think our industry generally undervalues people that have been in the field for a while, so it was nice to see such a seasoned group.</p> <p>Of course, I mostly avoided talking to people because I hate networking, so I can&#39;t say much more than that. All I can say is I didn&#39;t overhear conversations that made me cringe, so it seemed like a brighter group of people overall than I&#39;ve seen at some other conferences.</p> <p>One thing I will say: I really liked how QCon set up their lunch tables. Lunch is typically when most of the socializing happens, and OSCON for example has tables for people with similar interests to meet and mingle.
QCon had a similar setup, but I appreciated that they also had a handful of rectangular tables pushed against the walls on the perimeter of the lunchroom. A clear set of &quot;I don&#39;t want to mingle&quot; tables. Good stuff if you hate chit-chat like I do.</p> <blockquote class="twitter-tweet" lang="en"><p lang="en" dir="ltr">Just ran into stranger at <a href="https://twitter.com/hashtag/qconnewyork?src=hash">#qconnewyork</a> who recognized my name from my badge because he follows me on Twitter and doesn&#39;t know why. <a href="https://twitter.com/hashtag/surreal?src=hash">#surreal</a></p>&mdash; Rod Hilton (@rodhilton) <a href="https://twitter.com/rodhilton/status/608641289379262464">June 10, 2015</a></blockquote> <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script> <h1>Venue</h1> <p>Alright, here&#39;s the bad news. The conference was held at the New York Marriott at the Brooklyn Bridge, and <strong>the venue was easily my biggest complaint about the entire conference</strong>.</p> <p>The hotel itself had absolutely no restaurant (or bar for that matter), which made meals a bit difficult. What the Marriott had was a lounge that you could pay to access for $75 per day, which is outrageous. The lounge was the only place to get a meal or a drink in the entire building, and there was no room service offered.</p> <p>Normally only dinners would be a problem in this type of situation, since conferences usually include breakfast and lunch. However, what passed for &#39;breakfast&#39; was extremely disappointing - 100% pastries and breads, no eggs or meat or protein of any kind (well, actually, they did have bland, ice-cold poached eggs). Nothing served warm or even warmable (bagels, no toaster). A bunch of pastries and muffins are not &quot;brain food&quot; - starting a full day of technical talks with a bunch of muffins is a surefire way to be nodding off by the third talk.
I wound up walking to a nearby Panera every single morning to get an egg sandwich.</p> <p>Lunch was better, but also not great. I think by choice of QCon itself, the hotel catering had to make everything gluten-free. I find the whole gluten-free-for-non-celiacs fad <a href="http://www.georgeinstitute.org.au/media-releases/dont-believe-the-hype-on-gluten-free-food">generally irritating</a>, but what really made this annoying was what foods they chose to serve. If I say to you &quot;okay, we need a vegetarian meal&quot;, you try to think of foods that don&#39;t involve meat; you don&#39;t immediately think of a meat dish and substitute tofu in. Similarly, why does your &#39;gluten free&#39; menu consist almost entirely of gluten-free pasta? I wound up grabbing lunch elsewhere for 3 of the 5 days I was at the conference. This was frustrating, because I convinced my employer to send me to NY by arguing that meals were mostly included, but then I wound up having to pay for most meals.</p> <p>Aside from workshops, it was very rare to find tables in the sessions - usually it was just rows of seats. And the walls were those sliding accordion walls that hotels use to divide huge rooms into sections. The end result was that it was very hard to find a place to plug in a laptop and situate it in a way to take notes. This isn&#39;t really the venue&#39;s fault, I&#39;m sure QCon told them no tables, but it&#39;s always something that irks me as a notetaker. Like I mentioned earlier, I actually found it easier to take notes and watch sessions from the video recordings, from the comfort of my hotel room after hours.</p> <h1>Misc</h1> <p>In summary, QCon absolutely excelled in terms of session quality, variety, and depth, but workshops were an area for improvement (as with every other conference that offers them), and the venue itself was dreadful.</p> <p>I learned a lot at the conference, and was able to gain a lot of insight in the areas where I was hoping to.
One particularly interesting note: I went to a lot of &quot;microservices&quot; talks, but almost all of them were &quot;here are some tips for your microservices&quot; and not &quot;here are pros and cons of microservices&quot; or &quot;why you should use microservices&quot;. In other words, most of the microservices talks assumed that you were already building and deploying microservices. This was somewhat shocking to me and made me feel a bit behind the curve - I&#39;m really not sold on microservices, as I think the operational concerns are likely to outweigh the benefits, so it was interesting to see the topic presented as though that ship had already sailed. There was a talk by StackOverflow in favor of monoliths, but everything else was all microservices all the time. Martin Fowler recently published a great <a href="http://martinfowler.com/articles/microservice-trade-offs.html">pros-and-cons of Microservices article</a> that I found myself nodding along to.</p> <p>QCon had a nice &#39;custom schedule&#39; builder like OSCON&#39;s, but it was a bit behind: it didn&#39;t integrate with an app and give notifications like OSCON&#39;s did. However, the full video recordings of sessions available same-day were phenomenal, and the ability to share them with 10 other people who didn&#39;t attend the conference is awesome.</p> <p><strong>Overall, I really enjoyed my QCon experience and it&#39;s definitely on my radar for a conference to attend again in the future</strong>.
However, most of the things I disliked about the conference were related to its physical location and venue, which makes me wonder if it&#39;s worth the (very high) price tag since the best part (the recorded sessions) is eventually made available on the internet.</p> Wed, 01 Jul 2015 00:00:00 +0000 http://www.nomachetejuggling.com/2015/07/01/qconny-2015-a-review/ http://www.nomachetejuggling.com/2015/07/01/qconny-2015-a-review/ Uploading a Jekyll Site to Rackspace Cloudfiles <p>This blog was never intended to be popular by any stretch of the imagination. Largely I started it simply to have a place to gather solutions to technical problems I&#39;ve encountered, so that I could easily look those solutions up if I needed them again. The blog has always run on my own shared hosting server, on a self-installed version of Wordpress.</p> <p>To my great surprise, a <a href="http://www.reddit.com/r/programming/comments/2986e4/the_worst_programming_interview_question/">few</a> <a href="http://www.reddit.com/r/programming/comments/hn1fx/a_different_kind_of_technical_interview/">of</a> <a href="http://www.reddit.com/r/programming/comments/yvr9/my_interview_with_google/">my</a> <a href="http://www.reddit.com/r/TrueReddit/comments/q98ld/the_star_wars_saga_suggested_viewing_order_iv_v/">posts</a> have found their way to the front page of <a href="http://www.reddit.com/domain/nomachetejuggling.com">reddit</a>. My <a href="http://www.nomachetejuggling.com/2011/11/11/the-star-wars-saga-suggested-viewing-order/">post about Star Wars</a> has been mentioned on <a href="https://www.youtube.com/watch?v=effD1u4oCRE">King of the Nerds</a> and <a href="https://www.youtube.com/watch?v=keSFjjhUyVA">The Big Bang Theory</a>, and even landed me an <a href="http://www.npr.org/2014/03/20/291977042/theres-more-than-one-way-to-watch-star-wars">interview on NPR</a>.
</p> <table class='image aligncenter'><tr><td><script type="text/javascript" src="//www.google.com/trends/embed.js?hl=en-US&q=Machete+Order&cmpt=q&content=1&cid=TIMESERIES_GRAPH_0&export=5&w=600&h=330"></script></td></tr></table> <p>Needless to say, the traffic to my blog has been both extremely unexpected and unpredictable. The Star Wars post had been online for months with virtually no traffic before <a href="http://archive.wired.com/geekdad/2012/02/machete-order-star-wars/">Wired</a> suddenly linked to it, instantly decimating my web server. I&#39;ve fought and fought with various configurations for Wordpress, used <a href="http://wordpress.org/plugins/w3-total-cache/">as much caching</a> as possible, and even had my <a href="https://www.servint.net/">web host</a> temporarily upgrade my service, all trying to keep a web site that makes no money online even when traffic increases by a factor of 100 overnight. <strong>When my site goes down, it&#39;s embarrassing, because even though it&#39;s just a personal blog on a shared host, it gives the impression that I, as a software developer, don&#39;t know how to make a web site scale</strong>.</p> <h1>Switching to Jekyll</h1> <div class='image alignright' style='display:table'><figure><img src="http://www.nomachetejuggling.com/assets/jekyll-logo.png" width='300' height='141'/></figure></div> <p>So after the most recent pummeling I took due to a <a href="https://news.ycombinator.com/item?id=7953725">Hacker News link</a>, I decided it was time to <strong>bite the bullet and convert the entire site to <a href="http://jekyllrb.com/">Jekyll</a></strong>. I&#39;ve messed around with the technology before to build another, smaller, blog, so I was somewhat familiar with the constructs and idioms. 
A lot of work and ten custom plugins later, the entire site was converted, with very little loss of functionality.</p> <!--more--> <p>I didn&#39;t want to serve the files from my shared host because I know firsthand that the traffic spikes I experience are often enough to overwhelm Apache itself, and I couldn&#39;t host it with <a href="https://pages.github.com/">GitHub Pages</a> due to the aforementioned ten custom plugins. I&#39;ve used both Amazon S3 (to host the smaller Jekyll-based blog) and Rackspace Cloudfiles (as a CDN for the Wordpress version). Of those two, I find Amazon S3 to be extremely overcomplicated and difficult to work with, but there&#39;s a great <a href="https://github.com/laurilehmijoki/s3_website">s3_website</a> gem that makes uploading a Jekyll blog a snap. Rackspace Cloudfiles is much more straightforward to work with, but the <a href="https://github.com/nicholaskuechler/jekyll-rackspace-cloudfiles-clean-urls/blob/master/cloudfiles_jekyll_upload.py">Python script</a> that <a href="http://www.rackspace.com/blog/running-jekyll-on-rackspace-cloud-files/">even Rackspace itself</a> links to has given me various dependency issues.</p> <p><span data-pullquote="Rackspace Cloudfiles is a bit cheaper per GB than Amazon S3, and ... that became the deciding factor. " class="left">Rackspace Cloudfiles is a bit cheaper per GB than Amazon S3, and ... that became the deciding factor. </span> In the end, Rackspace Cloudfiles is a bit cheaper per GB than Amazon S3, and since this blog receives a nontrivial amount of traffic, that became the deciding factor.
Since I always had issues with the Python script that uploads a Jekyll blog to Cloudfiles, I decided to do some research into alternative means of automated uploading (<a href="http://cyberduck.io/">Cyberduck</a> works, but I wanted something that I could make Jenkins run).</p> <p>Unfortunately, <strong>almost everything I found wound up linking to the exact same Python script that gave me trouble</strong>. So I decided to write my own, which I&#39;m open-sourcing for the benefit of anyone else who has had similar problems.</p> <h1>jekyll-cloudfiles-upload</h1> <p><a href="https://github.com/rodhilton/jekyll-cloudfiles-upload">jekyll-cloudfiles-upload is hosted on GitHub</a> and is a single Ruby script that can be dropped into your Jekyll blog project directory. It will look in <code>_site</code> for all of your static site files, compare them to what is in your Rackspace Cloudfiles container, upload any that need updating, and delete anything in the container you no longer have. It only has a few small dependencies (Ruby and a Ruby gem named <code>fog</code>), and I&#39;ve been using it to update this blog with great success.</p> <h2>Installation and Usage</h2> <ol> <li><p>Log into Rackspace Cloud Files and create your container. <em>You must create your container first; the script will not do that</em>.</p> <blockquote> <p><strong>Pro-Tip</strong>: Before you upload anything, set your container&#39;s TTL to something other than the default, which is 72 hours. Once a file is loaded into the CDN, it seemed to me that, even if you changed your container&#39;s TTL after the fact, the TTL change itself wouldn&#39;t propagate until after 72 hours.
Changing it first (I use 15 minutes) before uploading files seemed to fix this issue.</p> </blockquote></li> <li><p>Install the <code>fog</code> rubygem via <code>gem install fog</code>.</p></li> <li><p>Put a <code>.fog</code> file in your home directory that looks like this (it&#39;s a YAML file; be careful to use spaces rather than tabs):</p> <div class="highlight"><pre><code class="language-yaml" data-lang="yaml">default:
  rackspace_username: your_user_name
  rackspace_api_key: your_api_key
  rackspace_region: your_preferred_region
</code></pre></div> <p>The Rackspace regions are strings like &#39;iad&#39; or &#39;dfw&#39;, depending on your preferred container region. You can get your API key from the Rackspace control panel&#39;s Account page.</p> <p>If you have multiple sites with multiple containers all in different regions, you&#39;ll have to hand-alter the script so that it doesn&#39;t look up this information in Fog, but hardcodes it instead. If you do this, I suggest using the Ruby symbol syntax in the <code>cloudfiles_upload.rb</code> script, such as <code>:iad</code>.</p></li> <li><p>Copy the <code>cloudfiles_upload.rb</code> script from the GitHub repository into the directory for your Jekyll project. It&#39;s a good idea to also make it executable via <code>chmod a+x cloudfiles_upload.rb</code>.</p></li> <li><p>Build your site via <code>jekyll build</code>.</p></li> <li><p>Execute <code>./cloudfiles_upload.rb container_name</code> or <code>ruby cloudfiles_upload.rb container_name</code>.</p> <p>The script will spider through the <code>_site</code> subdirectory and look for any files that need to be added, deleted, or updated.
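</p> <p>As a rough, hypothetical sketch of that sync pass (illustrative stdlib-only Ruby, not the script&#39;s actual code): deciding what to upload amounts to walking <code>_site</code> and comparing each file&#39;s MD5 against the container&#39;s etags, since Cloud Files stores the MD5 of each object as its etag:</p>

```ruby
require 'digest'
require 'tmpdir'

# Hypothetical helper (not cloudfiles_upload.rb itself): return the local
# files that need uploading, i.e. those whose MD5 differs from the etag
# the container reports (a nil etag means the file is new remotely).
def files_to_upload(site_dir, remote_etags)
  Dir.glob(File.join(site_dir, '**', '*')).select do |path|
    next false unless File.file?(path)
    key = path.sub(%r{\A#{Regexp.escape(site_dir)}/}, '')  # object name in the container
    Digest::MD5.file(path).hexdigest != remote_etags[key]
  end
end

# Tiny demo with a fake _site directory and a fake remote listing.
Dir.mktmpdir do |site|
  File.write(File.join(site, 'index.html'), '<html>hi</html>')
  File.write(File.join(site, 'about.html'), '<html>about</html>')

  # Pretend the container already holds an up-to-date index.html only,
  # so only about.html should be selected for upload.
  remote = { 'index.html' => Digest::MD5.hexdigest('<html>hi</html>') }
  puts files_to_upload(site, remote).map { |p| File.basename(p) }.inspect
end
```

<p>Deleting remote objects that have no local counterpart is just the reverse comparison over the container listing.</p> <p>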
Only files whose MD5 hashes differ from those in the container will be uploaded, so the script will not upload files unnecessarily.</p> <p><strong>Note</strong>: You may optionally leave off the <code>container_name</code> parameter, and the script will use the name of the directory you are in. So if you name your directory and container <code>mysite.com</code>, you can just run <code>./cloudfiles_upload.rb</code> with no arguments.</p> <blockquote> <p><strong>Pro-Tip</strong>: Add <code>cloudfiles_upload.rb</code> to your <code>_config.yml</code> file&#39;s exclusion list so it doesn&#39;t get uploaded.</p> </blockquote></li> </ol> <h1>Dogfooding</h1> <p>I offer no guarantee of support on this script, but I can assure you that I&#39;m dogfooding the hell out of it. I set up a private Jenkins instance that watches for changes to my private <a href="https://bitbucket.org/">BitBucket</a> repository that contains this blog. The repository has <code>jekyll-cloudfiles-upload</code> as a submodule, with the <code>cloudfiles_upload.rb</code> script symlinked to the submodule&#39;s version. Any change to the blog pulls down the most recent copy of the script, builds the blog, and then runs the script to upload it.</p> <p>I liked this solution so much that I wound up converting the smaller blog that I had been running on Amazon S3 over to Rackspace Cloudfiles as well. I also have a Jenkins job that looks for changes to the <code>jekyll-cloudfiles-upload</code> project and automatically kicks off the jobs for both web sites whenever it changes, so this script is definitely instrumental to a process that controls a web site whose downtime personally embarrasses me a great deal. Again, no guarantees, but I&#39;m putting a lot of trust in this script, for whatever that&#39;s worth.</p> <h1>Jekyll Thoughts</h1> <p>So far, I&#39;m digging Jekyll a lot.
I&#39;d used it before for the smaller blog as I mentioned, but that was my first Jekyll site, so I leaned on <a href="http://jekyllbootstrap.com/">JekyllBootstrap</a> heavily. It was good for getting set up, but I found making modifications to themes and general customization quite perplexing and difficult. This time, I built everything from scratch, including all of the custom plugins I&#39;m using, and I have a much better understanding of how Jekyll works.</p> <p>The only thing I had to give up was the rightmost sidebar. Previously, that area actually showed my latest tweet, and various updates from my Goodreads, Trackt, Last.fm, Groovee, and Diigo feeds. Those used the <a href="https://wordpress.org/plugins/better-rss-widget/">Better RSS Widget</a> Wordpress plugin, and I liked the feature, but it would occasionally have trouble pulling feeds, causing it to leave an error on the cached version of a page for hours until the cache cleared. I&#39;m alright with my sidebar-o-social-icons that I replaced it with, though.</p> <p><span data-pullquote="I love writing posts in Markdown. " class="right">I love writing posts in Markdown. </span> I&#39;ve always wanted to be able to do that with Wordpress, but found that plugins that supported it were generally terrible. I wish it were easier to make custom alterations to the Markdown processing, but <a href="http://jekyllrb.com/docs/plugins/#tags">Jekyll Tags</a> are a decent workaround.
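</p> <p>To give a flavor of what such a tag plugin does, here&#39;s a toy, stdlib-only Ruby sketch (the <code>pullquote</code> marker and output markup are made-up examples; a real Jekyll plugin would subclass <code>Liquid::Tag</code> rather than use a regex):</p>

```ruby
# Toy illustration of the kind of source-to-HTML transformation a custom
# Jekyll tag performs: rewrite a {% pullquote ... %} marker in a post's
# Markdown into the HTML span a stylesheet might expect. Names are hypothetical.
def expand_pullquotes(source)
  source.gsub(/\{%\s*pullquote\s+(.+?)\s*%\}/) do
    text = Regexp.last_match(1)
    %(<span class="pullquote" data-pullquote="#{text}">#{text}</span>)
  end
end

puts expand_pullquotes('Intro. {% pullquote Markdown is pleasant. %} More prose.')
```

<p>The payoff is what&#39;s described here: the post source stays readable Markdown, and the translation to HTML lives in one plugin instead of being embedded in every post.</p> <p>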
Like I mentioned earlier, I&#39;ve got a lot of custom plugins gluing this site together, but I&#39;m happy with the readability of my Markdown source files, and I like that there&#39;s an abstraction layer translating those to HTML rather than embedding HTML directly into posts or writing Wordpress shortcode processors.</p> <p>I may eventually put some of these plugins on GitHub as well, and I wound up writing a pretty handy extension to <a href="http://highlightjs.org/">highlight.js</a> that makes it easier to copy and paste syntax-highlighted code, which I think others might find useful. But easily the most useful thing I wrote to support this effort - aside from a highly customized script that ran my blog posts&#39; HTML files through forty regular expressions to convert them to Markdown - was the <code>cloudfiles_upload.rb</code> script. Hopefully others may find it useful as well.</p> Fri, 04 Jul 2014 00:00:00 +0000 http://www.nomachetejuggling.com/2014/07/04/uploading-a-jekyll-site-to-rackspace-cloudfiles/ http://www.nomachetejuggling.com/2014/07/04/uploading-a-jekyll-site-to-rackspace-cloudfiles/