Brian McQuay


  • Upgrade to Rails 4 already

    I’ve been developing a new application from the ground up on Rails 4, and I wanted to publicize my thoughts so far and encourage the weary to stay on the edge lest they face their own deprecation. Even though we’re extremely close to an official Rails 4 release, countless gems out there haven’t even begun to upgrade, and countless projects undoubtedly have yet to even discuss their upgrade path. While the upgrade from Rails 3 to 4 isn’t likely to be as difficult as Rails 2 to 3 was, there will certainly be a handful of changes required. You’re either ahead of the game or you’re playing catch-up. Don’t get caught without an upgrade plan while the rest of the community migrates to Rails 4. So enough preaching. Let’s get into it.

    So far, I’m fairly happy with how few habits I have to change when working with Rails 4. The most notable change you’ll discover is strong_parameters. There’s already a gem you can use to get your application onto strong_parameters while you’re still on Rails 3, and I highly recommend you start there if you’re doing an upgrade. If you’re not familiar with strong_parameters, it basically gets rid of attr_accessible and moves the logic into the controller. Initially this feels a bit strange, because we’re used to thinking of the model as self-encapsulating: it does what it needs to do to manage the data. The problem with attr_accessible is that it never really allowed for much flexibility and was a fairly primitive and rigid approach to mass assignment protection. strong_parameters moves that logic into the controller where, once you’re used to it, it feels much more powerful, customizable, and arguably better organized. It lets you embed business logic around your mass assignment protection, for instance. One example is managing a record as a user versus as an admin. An admin might be allowed to update a bunch of attributes that the user isn’t allowed to update. Before strong_parameters, we’d end up having to list every attribute that either the user or the admin could update in attr_accessible, then customize the controller’s update action to manually set each parameter the user was allowed to change. That makes a big mess of code compared to how strong_parameters handles it: you specify different sets of permitted attributes for the user and the admin, which eliminates the need for attr_accessible, cleans up the update actions, and keeps the mass assignment protection logic much DRYer and much more useful. Here’s how we might do it without strong_parameters if we want to allow a user to edit name and description but only the admin to edit contact:
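    A sketch of the attr_accessible approach; the Project model, its attributes, and the current_user.admin? helper are hypothetical stand-ins:

    ```ruby
    # app/models/project.rb -- with attr_accessible, every attribute either
    # role might touch has to be whitelisted on the model.
    class Project < ActiveRecord::Base
      attr_accessible :name, :description, :contact
    end

    # app/controllers/projects_controller.rb
    class ProjectsController < ApplicationController
      def update
        @project = Project.find(params[:id])
        attrs = params[:project]
        # Manually strip the admin-only attribute for regular users.
        attrs.delete(:contact) unless current_user.admin?
        if @project.update_attributes(attrs)
          redirect_to @project
        else
          render :edit
        end
      end
    end
    ```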

    Normally I’d make a separate admin controller for this, but for the sake of the example let’s assume this is OK. Obviously this will get out of hand if you really want to be safe with attr_accessible, because with each new attribute you end up having to manually assign it in the controller. Here’s how you’d do the same thing with strong_parameters:
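    A sketch, again assuming a hypothetical Project model with name, description, and contact attributes and a current_user.admin? helper:

    ```ruby
    # app/controllers/projects_controller.rb -- role-based whitelisting
    # lives in one private method instead of being spread around.
    class ProjectsController < ApplicationController
      def update
        @project = Project.find(params[:id])
        if @project.update(project_params)
          redirect_to @project
        else
          render :edit
        end
      end

      private

      # Admins may set :contact; regular users only :name and :description.
      def project_params
        if current_user.admin?
          params.require(:project).permit(:name, :description, :contact)
        else
          params.require(:project).permit(:name, :description)
        end
      end
    end
    ```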

    Now you can see pretty quickly how much more powerful this is. It gives us much finer-grained control over mass assignment protection and lets us keep our code clean while making it more secure. After you get used to it, you’re going to love it.

    The second big gotcha you’ll discover is turbolinks. It’s a gem that’s enabled by default, and as soon as you start crafting some Javascript for your application you’ll discover it, because in a lot of cases your Javascript won’t work correctly. For those not familiar with turbolinks, it’s a way to speed up page loads by fetching pages through the turbolinks Javascript instead of doing an actual browser page load. Turbolinks takes a typical anchor tag and fetches the page, but instead of rendering the entire page it does some trickery with the DOM: it replaces the BODY contents and page title and modifies the displayed URL so everything looks correct. In effect this prevents your browser from having to fetch the CSS and JS all over again on each page load, which can save a ton of time, especially when you’re loading bulky JS frameworks or fluffed-up CSS like Twitter Bootstrap. The unexpected side effect, of course, is that your JS isn’t run when you’re expecting it to be. It’s trivial to fix your Javascript once you know what’s happening, though. Overall I like turbolinks, and it’ll generally reduce the amount of data you have to transmit, but it’s a strange default addition to Rails in my opinion. It’s almost like making will_paginate a default gem: sure it’s useful, but does it need to be included by default? Either way, you’ll learn to love it.

    Here’s how you might do something before turbolinks and discover it no longer works:
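    For example (the table id and the jQuery DataTables call are just illustrative):

    ```javascript
    // Initialize a jQuery DataTable once the DOM is ready.
    $(document).ready(function() {
      $('#products-table').dataTable();
    });
    ```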

    Basically this says to initialize a datatable once the page is loaded. Turbolinks isn’t even going to look at this when you visit the page via a turbolink, though. It’s just going to replace the body, page title, and URL. Here’s how you’d do it if you’re using turbolinks:
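    A sketch of the turbolinks-friendly version; page:load is the event the turbolinks gem fires after it swaps in the new body:

    ```javascript
    // Name the initialization so it can be bound to more than one event.
    var load_datatable = function() {
      $('#products-table').dataTable();
    };

    // Runs on a manual page load or refresh...
    $(document).ready(load_datatable);

    // ...and also after turbolinks swaps in a new page body.
    $(document).on('page:load', load_datatable);
    ```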

    This defines the load_datatable function and calls it when the DOM is ready, like you’d normally do, which covers manually entering the URL or doing a manual page refresh. The last line is what you need to learn, though. It tells turbolinks to call that function when the page is loaded via turbolinks. So turbolinks replaces the body, the title, and the URL, then executes the load_datatable function, and everything works how it used to.

    There are a few future deprecation notices that you might come across as well, but for the most part it’s fairly painless. However, there are a ton of gems out there that aren’t compatible for one reason or another. The handful that didn’t work that I’ve investigated were basically written poorly to begin with or made heavy use of attr_accessible, which will undoubtedly make upgrading those more difficult. A bunch of gems are already working on Rails 4 support, though, and it’s only a matter of time before you’re going to need to upgrade yours as well. I’m running everything off the latest from GitHub and it’s working pretty well. I haven’t had any gem’s repo ruin my bundle yet either. Rails 4 is probably the most stable new Rails version that I’ve seen, and I’ve seen every major release, including version 1. In the early days everything was a mess and barely worked right. Rails 2 was the first really stable release, and upgrading to Rails 3 was usually a nightmare. Despite that history, this latest release is going to be the easiest upgrade there has ever been in the history of major Rails releases. It’s simple, it’s stable, and it still has that new car smell. So what are you waiting for? Upgrade to Rails 4 already.

  • Generic Email Resque Job

    I was working on a project recently that uses Resque to queue jobs. One of the main things they use it for is queuing email jobs so they don’t hold up the user. The way things were initially implemented, and the way I’ve seen it done in the past, is a separate job for each email. This wasn’t going to scale very well and was going to create a big mess of code in the long run, because we’d have to manage a separate Resque job for every type of email. This post is about my solution to that problem: a consolidation of email jobs so you can have one generic email job and reduce the amount of code in your application by a lot.

    The first step is to define the actual Resque job:
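    Here’s a minimal sketch of such a job; the class and queue names are my own choices, and I’m using plain Object.const_get where ActiveSupport’s String#constantize works just as well inside Rails:

    ```ruby
    # A generic Resque job that can deliver any mailer's email.
    class EmailJob
      @queue = :email

      # mailer_class: the mailer's class name as a string, e.g. "UserMailer"
      # mail_method:  the mailer method to call, e.g. "welcome"
      # params:       a hash of data the mailer needs to build the email
      # emails:       one address or an array of destination addresses
      def self.perform(mailer_class, mail_method, params, emails)
        mailer = Object.const_get(mailer_class)
        # Loop over every destination and fire off the mailer for each one.
        Array(emails).each do |email|
          mailer.send(mail_method, email, params).deliver
        end
      end
    end
    ```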

    What this piece of magic does is allow us to pass in any mailer with a mail method, plus parameters and destinations. It loops through the email addresses we’re going to send to and fires off the mailer we gave it. Here’s an example of how to use this:
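    Something like this, assuming the job class is called EmailJob and that UserMailer, its :welcome method, and the params are hypothetical names from your own app:

    ```ruby
    # Queue a welcome email; a worker will look up UserMailer and call
    # its welcome method only when the job is picked up.
    Resque.enqueue(EmailJob, 'UserMailer', :welcome, { name: user.name }, user.email)
    ```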

    Notice we’re not passing in the entire mailer, just the class name and the :welcome token. The Resque job uses those to instantiate the correct mailer and call the correct method, and that only happens once the job is out of the queue and executing. The queue stays slim and it’s very dynamic now. You don’t need a new Resque job for every email at this point. The only caveat now is how you handle params in the actual mailer. Here’s an example:
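    A sketch; the mailer name, subject, and from address are hypothetical:

    ```ruby
    # app/mailers/user_mailer.rb -- the welcome method takes a generic
    # params hash instead of individual arguments.
    class UserMailer < ActionMailer::Base
      default from: 'noreply@example.com'

      def welcome(email, params)
        # Resque serializes job arguments as JSON, so hash keys arrive
        # as strings by the time the mailer sees them.
        @name = params['name']
        mail(to: email, subject: 'Welcome!')
      end
    end
    ```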

    Notice the very generic params we’re passing in. Instead of creating a welcome method that takes in each variable separately, we just use a generic params hash. You could easily change this so it accepts a dynamic number of *options, for instance, if you prefer.

    I was so pleased with how this turned out, and with how much code it’s going to let us condense, that I had to share it. I hope you find it useful.

  • Improving your Development Process with Tests and Code Reviews

    I’ve worked with a lot of different teams in the past, and I’ve seen a lot of variation in how those teams approach things like tests and code reviews. With this article I hope to shed some light on which practices work and which are destined to fail, and provide some advice on how to improve your test coverage and code reviews.

    Test Coverage
    Very often I encounter projects that have either no test coverage or so few tests as to make them effectively useless. For your tests to really serve their intended purpose of improving the modularity and reliability of the code, they need to be comprehensive. If your tests are sparse, most of the bugs in your app will just fall between the gaps in your coverage. Poor test coverage tends to result in less modular and more complex code, and that less-than-ideal code ends up mixed in with better code that does have tests. This mix of good and bad code can increase the complexity of the app more than having all good or even all bad code would. It only takes one bad apple to spoil the bunch, and the same goes for software: bad code will spoil good code and bad practices will spoil good practices. Unless you have strong, broad coverage and a solid process for maintaining it, your tests are mostly useless and you’re likely better off not wasting your time writing them. Don’t get me wrong, I’m a proponent of getting as close to 100% test coverage as you possibly can. But if your coverage is more like 10%, where you sometimes write tests and you sometimes don’t, then you’re just wasting your time. You’re either committed to complete test coverage or you aren’t; anything else and you’re just spinning your wheels because it feels right to write some tests every now and then. Sparse tests are worse than no tests, because you get a false sense of a proper process while building tests that cumulatively amount to wasted time. Unless you’re close to or over 80% code coverage, you probably have a process problem, not a test problem.

    So if you’re committed to full test coverage, how do you go about making sure everything is actually tested? This is where code reviews and test-driven development come into play, but first let’s discuss code reviews in general before we tie the two together.

    Code Reviews
    I often hear teams tell me that they only do code reviews occasionally, or that they would really like to start doing more code reviews. My advice to them is always the same: “You’re doing it all wrong.” If you’re telling yourself something similar, it’s because you’re not building code reviews into your development process. I’ve seen teams try to impose a code review schedule where once a week or so the whole team would sit around and look at some code together as a group. No one is usually very interested, and very little comes out of it other than making a project manager feel like they’re doing the project some good by holding occasional code reviews. The problem with this approach is that no one is really all that engaged during the review, and you still end up with tons of code making it into production that hasn’t been reviewed by anyone but the developer who committed directly to the master branch (ick). The only code reviews that I have ever seen actually work long term are those built into the development process. Here is one typical flow that I have seen work well on numerous teams.

    First, a developer creates a new branch for whatever task they’re working on. When it’s complete, the developer commits the code to their branch and creates a pull request to merge that branch into the development branch, or master if necessary. This is the first big step.

    Next, before ‘developer A’ moves on to another task, they look for pull requests from other developers. If there’s a request waiting, ‘developer A’ comments on the pull request that they’re reviewing it and then does a code review on it. If things need fixing or changing, ‘developer A’ comments on the pull request. The original developer, ‘developer B’, then makes the necessary changes until ‘developer A’ is satisfied and does the merge. This dialog between the developers boosts information sharing, improves code quality, and enforces test coverage. Once the pull request is merged, ‘developer A’ finally moves on to either another pull request or a new story.
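    On the command line, that flow might look something like this (branch and remote names are hypothetical, and the pull request itself is opened through GitHub):

    ```shell
    # Developer B: one branch per task, pushed up for review.
    git checkout -b feature/invoice-export
    # ...write the code and its tests...
    git add .
    git commit -m "Add invoice export"
    git push -u origin feature/invoice-export
    # ...then open a pull request against the development branch.

    # Developer A: after the review dialog is resolved, merge the
    # approved branch into the development branch.
    git checkout development
    git merge --no-ff feature/invoice-export
    git push origin development
    ```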

    What this effectively does is it makes sure that all code that gets merged into the development or master branches has been reviewed by another developer. Only when you build code reviews into your process like this will you be able to be comfortable knowing that no one was able to slip code in that wasn’t reviewed by at least one other developer.

    Code Reviews Enforce Test Coverage
    This is where this kind of code-reviewing development process merges with your tests. When all your code is being reviewed, you can also use the review to make sure that all code is tested. It’s the reviewer’s responsibility to make sure they’re comfortable with the test coverage of the committed code. All code should have tests before a pull request is created, and the reviewer acts as an enforcer. Since this process is the only way for any code to get merged, you’ll have good test coverage and you’ll have all your code reviewed. It’s a win-win situation.

    I don’t really need to delve into the nuances of TDD since there are a million posts out there about it already. However, I must say that if you do build your tests first in true TDD style, you will definitely not forget to write tests before you commit your code and create a pull request. The tests are the first thing you build, so you never end up forgetting them. They’re there before your actual code even exists, and they serve as a guide for keeping your code modular and testable.

    This is such a great development process that not only will it likely solve your test and code review problems, but, more importantly, it also helps elevate junior developers. The process of reviewing code, and of making sure your own code passes the scrutiny of whoever is reviewing it, tends to increase the quality of everyone’s work overall. If you wish you had better test coverage or that your team was doing more code reviews, then you need to rethink your development process.

  • Welcome to the Machine

    There is a pervasive cultural problem in the software industry that has only been getting worse in the past few years. As software developers, we’re always neck deep in code and always looking for more efficient algorithms and processes. The constant refinement of our software, our architecture, and our processes is an art form, maybe even a kind of religion. We are craftsmen who mold the future of the world with our art form. But those concepts of refinement and improvement, which we use in a purely technical manner, have caused our interpersonal communications to degrade to the point of non-existence. Sure, we can communicate our technical ideas fluently and translate vague client requirements into precise user stories, but there’s more to having a healthy mind than that. We’re social animals at our core, and to have a healthy mind we must build strong personal friendships and share our non-work-related ideas. That tends to directly contradict the introverted nature of software development, though. With our collective laser-like focus on technology and our ingrained habit of continuous improvement, we’ve achieved amazing software feats in recent years. We’ve seen an explosion of frameworks and the maturation of the few that survived. We’ve seen NoSQL and cloud computing become ubiquitous. We’ve seen the resurgence of Javascript in an enormous way, and the list goes on and on.

    That amazing intensity, however, has not come without some loss. In the process, we’ve lost touch with ourselves and each other. There is a persistent and nagging degradation of interpersonal communication between team members within the software industry. Sure, we always have some minor chatter about non-work or non-tech things, but it’s always superficial. There’s almost an unspoken rule not to talk about personal things while at work. The relationships we have with our co-workers rarely stretch into our personal lives. They start at work and they end at work.

    The lack of long-term employment among most developers also means few meaningful long-term personal relationships get formed. Most developers have probably never held the same job for more than 4 years. The vast majority float in and out of different companies and projects with a sense of independence. That independence and the constant moving around prevent many strong friendships from ever forming. It isolates us individually more than staring at a computer all day ever has.

    In my own career, I’ve met many people through my work. Given the sheer number of people I’ve met, you would think that I’d have a vast network of friends. I do have a large number of acquaintances, but I can’t really count any of them as strong personal friendships. Strong friendships take time to build, and by moving around constantly and limiting our conversations we have created something of a void throughout the entire industry. I’m not suggesting there are no friends among software developers, because there certainly are. What I am saying, though, is that in general our industry has evolved into a machine where humans are barely welcome anymore. Just because we work with machines all day doesn’t mean we need to start acting like them too. A healthy mind needs healthy personal relationships. A healthy software industry needs to foster not only a culture of continuous improvement but also a culture of growing strong personal friendships and of developer retention beyond the span of a single year.

  • 3 new I18n gems for your Rails 3 app

    I’ve been busy building 3 new translation gems for a new application I’m working on. The first is i18n-country-translations. It breaks countries down into their standard country codes and adds translations for different locales. This gem can be easily incorporated into other gems or apps that need I18n country translations. Just add this to your Gemfile:

    gem 'i18n-country-translations'

    You can grab the source and more information here:

    The next gem I created uses i18n-country-translations to make a country select that automatically translates the country names in the drop-down into the current locale. Just add this to your Gemfile:

    gem 'i18n_country_select'

    You can grab the source and find out more here:
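    For instance, in a form view (a sketch, assuming the gem hooks into the standard country_select form helper and that the model has a country_code attribute):

    ```erb
    <%= form_for @user do |f| %>
      <%= f.country_select :country_code %>
    <% end %>
    ```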

    Finally, no app that displays dates to users can be complete without taking timezones into account. The last gem is i18n-timezones, and it just adds timezone translations, which can easily be included in other gems or directly in your app. It also adds a convenient ActiveSupport::TimeZone override of to_s so that it takes the current locale into consideration. This makes it extremely easy to add timezone translations. Just add this to your Gemfile:

    gem 'i18n-timezones'

    You can grab the source and find out more here:
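    For example (a sketch, assuming the gem ships translations for the locale you switch to):

    ```ruby
    # With i18n-timezones loaded, to_s on a time zone is rendered in the
    # current locale instead of the default English name.
    I18n.locale = :de
    ActiveSupport::TimeZone['Hawaii'].to_s
    ```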

  • Instead of searching for water on other planets, why not search for space debris?

    I came across this image today, and it made me think of how scientists have been searching for water around distant planets. If humans have created this much space debris, it’s reasonable to assume another developing civilization on a distant planet would also have generated a similar amount. Perhaps instead of a quest for water on other planets, a more significant find would actually be the discovery of space debris from a civilization capable of space flight. If there is enough debris and enough operational satellites, they would collectively give off a unique photon signature, similar to how scientists detect water from the refraction of light through the atmospheres of distant worlds. If we could refine this technique enough, we might even be able to detect the presence of synthetic materials orbiting distant planets. Enough debris would surround a planet and refract light in a way that might be unique to synthetic materials. Someday…

  • Can’t see the forest for the trees – Robots, AI, and the Internet

    I’d like to share some ideas I’ve had today about robotics, AI, and the internet.

    First, let’s discuss robotics. There has been much work in the field of robotics to create bipedal, humanoid-like machines. Much, if not most, of that work pursues what I consider the ideal: being as human as possible. When you look at just the mechanics and kinematics of the quest, you can see we are indeed making great progress. However, when you take into consideration longevity, perpetuation (i.e. life-like procreation), energy, and immune systems, we seem a long way from that ideal. Not many machines to date can last over 100 years like a human can. Most rust or break, and once broken they lack the ability to fix themselves like a human can. When we get injured, we heal ourselves and live another day. Humans procreate and perpetuate the species and life itself. Humans can gather energy from a multitude of sources, whereas most robots are currently electron-based. No robot can replicate itself as well as life can, and we are a long way from life-like machine replication. Life and humans are thus the ideal that a large part of robotics research strives for. If our science were to reach the point where we could create human-like machines capable of perpetuating themselves, we would likely end up with exactly what we’ve already got. We wouldn’t likely be using electron-based energy sources, because it’s much easier and more efficient to generate energy from complex carbon structures like human food. We’d have machines capable of building themselves from the tiniest of components, from something like DNA, which would essentially be the program to create the robot: concise, redundant, and reliable. The most advanced robots would construct themselves from something like DNA and build themselves an immune system to heal themselves when injured. In the end, we would just be recreating ourselves. Would we do as good a job? How are we to improve upon what we’ve already got when we can’t even match it?

    The same argument applies to artificial intelligence. The ideal is to create an AI that can pass the Turing test and behave exactly as a human would. While there is no doubt that specialized machine skills far surpass our own, and researchers are making amazing progress, we still shoot for the ideal. What would an ideal intelligence be like? It would be logical yet still capable of being creative. It needs to be self-motivating, and it needs to be able to learn from its mistakes. It needs the ability to store vast amounts of information in a small amount of space, and it needs to be able to filter out what’s important from that information as quickly as a human can. I have complete confidence that the singularity will occur, but will it be creative enough to keep itself alive? Will it have the motivation and the drive? What purpose would it find for itself? Ultimately, the ideal intelligence that we shoot for is human. We can create special-purpose machines that almost seem human, but they can’t currently match the complete set of abilities of a human.

    Next, let’s discuss the internet. We’ve created a network of energy transmissions that spans the globe. A magnetic storage device on one side of the planet stores a file that is retrieved over the internet. The magnetic energy is converted to electrons, electrons are converted to wifi, wifi is converted back to electrons; somewhere they may get converted to light and sent over fiber optics, only to be converted back into electrons, then converted to radio waves and shot into low orbit to a satellite, which bounces the signal back to Earth. The radio waves from the satellite get converted back to electrons, to light, to electrons, to wifi, to electrons, and finally into photons again on your laptop’s LCD screen. All of this happens over the span of the entire planet every single second, 24 hours a day. What we have created, whether we know it or not, is the most complex brain we have ever known. Humans are acting as sensors and, collectively, as the life force behind this brain. We feed it the electricity it needs. We fix it when it breaks. We feed it data constantly, and we move data around through it constantly.

    At some point, and somehow, this complex network will give rise to some type of emergent property. It may be visible in human behavior, such as how social networking is changing the wiring of our brains. It may be realized as societal shifts in thinking or a global consciousness of society. Regardless of how or what happens, this emergent property will rise out of the network as our own consciousness emerges from the complexity of our own minds. The internet isn’t just a network of computers. It’s a network of human minds across an entire planet, and the consciousness that emerges from that will change society forever. What would a human mind be without consciousness? We call people vegetables when they aren’t capable of thinking anymore after accidents. Is the internet a vegetable? Not at all. The internet is more like a child coming into being, slowly gaining consciousness and self-awareness. How would we even know if the internet became conscious? It may be that humans would not even realize the internet is a conscious being. Does a neuron in your mind have some conscious understanding that it’s just another part of a large network of neurons that make up your brain, and that somehow a conscious being has emerged from that complex network? Certainly it sounds ludicrous to think a neuron would have such an awareness, and it’s reasonable to assume we too would be just as clueless as to the emergent properties of the internet.

    Perhaps the emergent properties will only appear over many lifetimes, in a way that’s not perceivable within a single limited human lifespan but instead stretches across generations. Whenever and however it happens, I’m convinced it will, though I have little more than this rambling blog entry to justify it.

  • New Years Resolution: Build my first robot

    I’ve been all over the place lately deciding what I should do and commit my time to. For the past 8 or so years I’ve been building Ruby on Rails applications, and though I’m pretty good at it, I don’t find it intellectually challenging. I keep coming back to the idea of going back to school for my PhD. I’m also going back and forth on scaling Onomojo back up and focusing on development of Webster’s Classroom. I’ve decided to keep doing what I’m doing right now but start doing my own robotics research in my spare time. I bought a Microsoft Kinect for the visual sensor, and I’m gathering the rest of the parts I’ll need to put together my first robot. It’ll be crude, no doubt, but the process will teach me the basics to get started. Above is a screencast of the Kinect connected to my Mac thanks to the OpenKinect project.

  • Investing in myself: Realizing my value as a programmer

    As a programmer, you have an invaluable skill that you need to learn to harness. Investors realize this already, which is why they’ll spend stacks of cash to have you build them something that’ll someday be profitable. Large corporations realize the value of good developers and sometimes bend over backwards trying to retain their top talent. Programming well is a skill that people clearly value, so why do programmers tend to place so little value on their own talents?

    I know amazing developers who have spent the last decade building other people a fortune in IP while spending very little of their time building their own software. While it’s easy to look at the paychecks coming in and be content with your progress as an individual, when you put your progress into the context of intellectual property ownership, most developers are left with empty pockets. Investors and businessmen use developers to build intellectual property for their businesses. They pay good money for a developer’s time along the way, and usually the developer moves on to something else within a few years while the business finds someone new to extend its growing IP treasure chest.

    When I look back on my own past decade, I’ve seen plenty of cash come and go, yet the only thing that remains is the software I’ve built for myself in my own time. When I put my time and effort into the context of IP, the value I place on my own time and skill suddenly begins to rise. I recently read an article geared towards investors that recommended keeping good developers on your side at all costs. It argued that no matter what a developer was building, it was better to keep them busy building you IP, and on your team, than to lose valuable talent. Reading that, and taking the context of IP and the value of my time into account, I asked myself why I wasn’t spending more of my free time extending my own IP treasure chest. I’ve listened to that article’s advice to investors, and I’ve committed to investing more in myself.

  • Playing the Devil’s Advocate: An Argument for SOPA

    There’s been much ado online these past few weeks promoting the demise of the SOPA act. Many smart people are arguing it will destroy the internet as we know it. That may be true, but it may also be exactly what we need.

    We’ve long known about the problems with the current DNS system, from its centralization to its lack of encryption. While it seems like a dozen new solutions are proposed each year, nothing ever seems to take off. If SOPA were to pass, it could be exactly the catalyst that DNS alternatives need to propel them into the forefront. Until the major players make a stand and adopt DNS alternatives, it will forever be a partial embrace of a better world. If all the people and companies who would be negatively affected by the passage of SOPA would collaborate and embrace one of the dozens of DNS alternatives, it would be enough to push the momentum toward ridding ourselves of the current flawed system once and for all. If SOPA fails to pass, we may well live in a world where we’re forever stuck with a flawed, insecure, centralized DNS hierarchy that’s already just as easily controlled without SOPA. The Department of Homeland Security already takes down domains at will. If that isn’t bad enough to force a paradigm shift in DNS, then perhaps something a bit worse, like SOPA, will be enough motivation for us to collectively solve one of the biggest problems facing the internet: the current DNS system.