
Kernow Soul

Ruby, Rails and JavaScript Consultant

How to Rumble Like a Boss


Having been both a competitor and an expert judge in the Rails Rumble I have a unique insight into what makes a good entry and some of the pitfalls entrants can fall into when building their applications.

I’m going to take you through my top tips on how to improve the score judges will give your app in the Rails Rumble competition, or any other hackathon such as the Node Knockout or Django Dash for that matter.

I’m sure many of you will identify with this… you spend 48 hours working hard, having a great time with your team and creating an app with loads of amazing features you’re really proud of. Judgment day comes and you take a look at the results page to see how your team scored and what comments the judges made. You get some great feedback from the judges, but maybe the score is slightly lower than you expected, and there’s no feedback on feature X or Y. Intrigued, you delve into the analytics, database and logs of your app to see exactly what the judges did when using your app, only to discover some of the features you built have not been used.

I’ve been there and it can be frustrating, but don’t worry, follow a few simple tips and you’ll make sure the judges see your app as it should be viewed. Back in 2009 I was part of a Rails Rumble team of 3 that created a table football stats tracking app called Wuzlr. We built a ton of features and were really proud of our creation; we even used the app daily for months after the Rumble. However, the app didn’t score as well as we’d hoped during the expert judging phase. What did we do wrong? We didn’t take into consideration how judges would score our app and thought building a good app would be enough.

This year (2013) there are around 500 teams competing in the Rails Rumble. The organizers do a really good job of making it as easy as possible for the judges while also making it fair for all the entrants, and this year should be the most trouble-free yet for the judges thanks to their hard work. Even so, keep in mind all the judges are volunteers and have a limited amount of time to spend reviewing apps. Make sure your app stands out from the crowd: an easy to use, engaging, fun, interesting, well designed app will suck the judges in and ensure they spend more time reviewing your entry.

To do well in this kind of competition it’s important to understand how your app will be judged to ensure you give it the best chance. I’ve come up with a list of tips to help show your app off to the judges in the best possible way. Bring forth the list…

  1. Consider your login system, or whether you even need one at all. One of the biggest hurdles to testing an entry is creating an account to log in. While gems like Devise make it really easy to implement a login system, out of the box they do not allow users to sign up quickly. Don’t make judges confirm their email address in order to log in; this takes time and reduces the amount of time they will spend using your app (see the sketch after this list). Consider using Facebook and Twitter authentication as it’s much faster. Best of all, don’t make judges log in at all; I came across at least one entry last year with a login system where it served no purpose. (also see point 6)
  2. If you’re requesting access to my Facebook, Twitter, or GitHub account, don’t ask for more access than you need. An app last year requested full access to users’ GitHub accounts, including read, write and delete access to private repositories! There was no need for the app to have this kind of access. Now I don’t know about you, but I’m not going to trust the safety of my private repos to an application coded by some very sleep-deprived guys in 48 hours.
  3. “Don’t make me think”. As we have discovered, judges have hundreds of applications to go through and a limited amount of time to judge each one. Make your app easy to understand and use; this will help your score. If an app is too difficult to understand or use, judges may not be aware of everything it does.
  4. Make a video. If your app isn’t instantly usable, consider creating a short video or screencast to demonstrate how it works; hell, even if your app is a piece of cake to use it’s a really good idea to make a video, you’d be surprised how many things get missed. Design IS important, don’t make it an afterthought. If you don’t have a designer on your team, get one. You are allowed to work on wireframes and mock ups before the competition starts, so use this time to plan your app and know exactly what you’re doing as soon as the 48 hours start. Time is precious, don’t waste any.
  5. Consider creating demo accounts or adding dummy data to better show off how your app works. One of the mistakes I made with the Wuzlr app was that the majority of the stats were only generated after 3 game results had been entered. No judge entered 3 game results when testing, so none of them saw the majority of the app. Adding automatically generated dummy data would have solved this problem.
  6. Make your app easy to test. Some apps I tested last year required me to invite friends from Facebook or Twitter; to test this on my own requires 2 accounts with the provider, or another person. Consider whether your app really needs to require friend invites, and if it does, make it as easy as possible to connect with other users. Anything that causes delays in being able to test your app is likely to lower its score.
  7. Finally, don’t forget the importance of your app’s description and screenshot on the competition entries page; first impressions count. Make sure to write a description that clearly and concisely describes what your app does, a couple of sentences and no more. It’s good to let judges know what they’re testing before they see your app. After writing your description and uploading a screenshot, check what it looks like on the entries page; quite often descriptions are truncated on the index pages, which is where most judges will be reading them.
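As a footnote to tip 1, here’s a minimal sketch of what I mean (this assumes a standard Devise setup and a hypothetical User model, it isn’t code from any real entry). Simply leaving the :confirmable module out of the model means new accounts can sign in straight away, with no confirmation email round trip:

# app/models/user.rb (sketch only)
class User < ActiveRecord::Base
  # no :confirmable here, so judges can sign up and start
  # using the app immediately
  devise :database_authenticatable, :registerable, :rememberable
end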

Hopefully these tips will help you in creating an app with the best chance of doing well in the Rails Rumble. Good luck and I look forward to seeing what the teams come up with this year.

4 Ways to Avoid Merge Commits in Git (or How to Stop Being a Git Twit)


I’m sure you’ve all come across merge commits when using Git, those pesky commits with a message reading something like Merge branch 'master' of github.com:kernow/Project-X. We’ve all been guilty of creating git merge commits, but don’t worry, there’s a way to stop being a “Git Twit” and make everyone in your team happy, which will no doubt lead to them buying you cake! But first, how do these commits get into the repository in the first place? You start out being a good gitizen: git checkout master, git pull, feverishly code away, commit, commit, commit. Woo, I’m done, everyone will love my wicked new code! git push… rejected!! What! Other people have been working too, jerks. git pull, git push, and there we have it, a merge commit. So how do we stop this madness?

Rebase to the rescue

When running git pull we need to rebase, and so to the first way to avoid merge commits…

  1. git pull --rebase What’s happening here? Git will rewind (undo) all of your local commits, pull down the remote commits, then replay your local commits on top of the newly pulled remote commits. If any conflicts arise that git can’t handle you’ll be given the opportunity to manually merge the commits, then simply run git rebase --continue to carry on replaying your local commits (there’s a short sketch of this flow at the end of the post).

  2. Tell git to always rebase when pulling. To do this on a project level, add this to your .git/config file:

    [branch "master"]
      rebase = true
    Or do it all on the command line with git config branch.master.rebase true

  3. Add a global config option to always rebase when pulling, by adding this to your ~/.gitconfig:

    [branch]
      autosetuprebase = always
    Or again do it all on the command line with git config --global branch.autosetuprebase always

  4. And the final way, which is what I personally use, in ~/.gitconfig

    [alias]
      pl = pull --rebase
    I have a bunch of aliases set up so I can type less and save myself those valuable microseconds. This will allow you to type git pl (or in my case g pl as I have git aliased to g) and it will automatically rebase. If I want to do a pull and not rebase for a specific reason I can use the command git pull, which will do a pull without rebasing.

Of course you could use the 3rd solution and run the command git pull --no-rebase but that involves more typing, and I’m a lazy typer!
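In case it helps, here’s roughly what the flow from the first tip looks like when a rebase does hit a conflict (commands only, the file name is made up):

git pull --rebase
# git reports a conflict in some_file.rb: fix it up in your editor, then
git add some_file.rb
git rebase --continue
# or, if it all goes horribly wrong, bail out and start again
git rebase --abort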

BRUG Talk


I’ll be doing a talk at BRUG tomorrow evening (29th September 2010) on JavaScript testing, covering some of the more awkward testing scenarios you may come across in JavaScript land. This month’s BRUG is at Beef Towers; hopefully we’ll see some new faces.

Programmatically Simulating JavaScript Events in a Test Environment


Yesterday I was implementing a feature on tutorhub.com where I wanted to disable the sending of chat messages when the party you’re talking to goes offline. I ran into a problem while trying to write the tests for this feature and thought I’d share it in case someone else finds it useful.

When in a chat, messages are sent either using the send button or by pressing the enter key. Testing the correct behaviour on button press is straightforward. I’m using Jasmine and jsMocha for testing and jQuery for implementation.

beforeEach(function() {
  // code that disables the sending of messages here

  // setup UI.runner for mocking
  var mock = new Mock(UI.runner);

  // add an expectation that raise is never called
  UI.runner.expects('raise').passing('message_send', Match.an_object).never();
});

it("should not allow messages to be sent", function() {
  // add some text to the textarea
  $("#chat-form textarea").val("the text I want to send");

  // simulate the click event
  $("#send-button").click();
});

Our UI code raises events that our main application code listens to in order to send out the messages. Here I set up a mock in the before block saying the raise method should never be called with the ‘message_send’ parameter. Then in the test I insert some text into the text area and simulate a click event on the send button.

It became slightly more tricky when I came to test the enter key functionality; our implementation code looks something like this:

$("#chat-form textarea").unbind().keyup(function(e){
  if (e.which === 13) {
    // send message code here
  }
});

In order to programmatically simulate a press of the enter key I needed to pass an event object containing a which value of 13. After a bit of hunting around I found the jQuery.Event object, which allows the creation of events that can then be fired. The test for disabling the enter key looked like this:

beforeEach(function() {
  // code that disables the sending of messages here

  // setup UI.runner for mocking
  var mock = new Mock(UI.runner);

  // add an expectation that raise is never called
  UI.runner.expects('raise').passing('message_send', Match.an_object).never();
});

it("should not allow messages to be sent", function() {
  // add some text to the textarea
  $("#chat-form textarea").val("the text I want to send");

  // create a new keyup event
  var e = jQuery.Event("keyup");

  // set the key that was pressed to the enter key
  e.which = 13;

  // trigger the event on the textarea element
  $("#chat-form textarea").trigger(e);
});

Using this technique it should be possible to programmatically simulate any key event in a test environment.
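If you end up simulating lots of key presses it’s worth wrapping the boilerplate up in a little helper along these lines (simulateKey is just a name I’ve made up for this sketch):

// simulate a key event of the given type (e.g. "keyup") on the
// elements matched by selector, returning the event that was fired
function simulateKey(selector, type, keyCode) {
  var e = jQuery.Event(type);
  e.which = keyCode;
  $(selector).trigger(e);
  return e;
}

// e.g. simulate pressing enter in the chat textarea
simulateKey("#chat-form textarea", "keyup", 13);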

Introducing Scupper, the JavaScript Library for Easily Dealing With HTML Snippets in Test Suites


I came across a problem today while writing tests in JavaScript. The code I was testing required a snippet of HTML to work with: a user list needed to be reordered depending on each user’s status. No problem I thought, I’ll create a div to store an HTML snippet, then before my test I’ll duplicate it and copy it into a test div.

<div id="snippets" style="display:none">
  <ul id="user-list-snippet">
    <li id="user-0">Shaun White<span class='user-0-status'>busy</span></li>
    <li id="user-1">Jeremy Jones<span class='user-1-status'>online</span></li>
    <li id="user-2">Jake Burton<span class='user-2-status'>offline</span></li>
    <li id="user-3">Tara Dakides<span class='user-3-status'>online</span></li>
  </ul>
</div>

And the JavaScript to copy the element:

$("#user-list-snippet").clone().removeAttr("id").attr("id", "user-list").appendTo($('#dom_test'));

Of course, ids in HTML have to be unique, so this technique caused some of the other tests in the suite to fail; I needed to find another way to do this. One thing I hate is using jQuery to create more than a few dom elements, as it gets complex very quickly and it’s not easy to see at a glance whether the code is producing the desired HTML.

After taking a break I came up with a simple solution: the Scupper library was about to be written. I wanted to write snippets in HTML, so I kept the snippet used in the first attempt. I then created a library that collected all of the snippets from the dom, storing them internally, before deleting them from the dom. This allowed the dom to be free from conflicting ids and general pollution. The HTML snippet became:

<div id="snippets" style="display:none">
  <div id="user-list-snippet">
    <ul id="user-list">
      <li id="user-0">Shaun White<span class='user-0-status'>busy</span></li>
      <li id="user-1">Jeremy Jones<span class='user-1-status'>online</span></li>
      <li id="user-2">Jake Burton<span class='user-2-status'>offline</span></li>
      <li id="user-3">Tara Dakides<span class='user-3-status'>online</span></li>
    </ul>
  </div>
</div>

The containing div #user-list-snippet gives an element to latch onto in order to grab the contents inside. I created a method that sucks up all snippets inside a dom element and stores them:

init: function(element_id){
  var element = $('#' + element_id);
  element.children().each(function(i, elm){
    elm = $(elm);
    Scupper.items[elm.attr('id')] = elm.html();
  });
  element.empty();
}

All that was needed was an easy way to pull them out and insert them into the dom:

insert_into: function(source_id, destination_id){
  return $('#' + destination_id).append(Scupper.retrieve(source_id));
},

retrieve: function(id){
  if(Scupper.items[id] !== undefined){
    return Scupper.items[id];
  }else{
    throw "Requested Scupper element not found with id: " + id;
  }
}

Calling insert_into() grabs the snippet HTML and inserts it into the specified dom element ready for the test to use it.
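Putting it all together, a spec using Scupper ends up looking something like this (a sketch, assuming the #dom_test container used earlier and a snippets div with the id "snippets"):

// collect the snippets once, before any tests run
Scupper.init("snippets");

beforeEach(function() {
  // insert a fresh copy of the user list into the test container
  Scupper.insert_into("user-list-snippet", "dom_test");
});

afterEach(function() {
  // clean up so the next test starts with an empty container
  $("#dom_test").empty();
});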

If you want to use Scupper the source is freely available on github.

Shoulda Macro Should_render_a_form_to


I’ve been writing a fair number of functional tests recently, and one thing that kept cropping up was the need to check that a form had been rendered and that it was going to perform a particular action. Shoulda has a should_render_a_form macro; unfortunately it’s been deprecated and doesn’t do anything other than check a form element has been rendered in the view.

I decided to come up with my own macro that checks the specifics of a form element, enter should_render_a_form_to. This takes three arguments: a description, an options hash and a block that contains the expected URL. You can use the macro as follows…

Check there is a form posting to the new_user_post_path:

should_render_a_form_to("create a new post", {:method => "post"}) { new_user_post_path(@user.id) }

Check there is a form putting to the user_post_path and that the form has the id of ‘post_edit_form’:

should_render_a_form_to("update a post", {:method => "put", :id => "post_edit_form"}) { user_post_path( :user_id => @user.id, :id => 1) }

The macro code is available on github with test coverage. If you just want to cut and paste it into your own macros file:

def should_render_a_form_to(description, options = {}, &block)
  should "render a form to #{description}" do
    expected_url  = instance_eval(&block)
    form_method   = case options[:method]
      when "post", "put", "delete" then "post"
      else "get"
      end
    assert_select "form[action=?][method=?]",
                  expected_url,
                  form_method,
                  true,
                  "The template doesn't contain a <form> element with the action #{expected_url}" do |elms|

      if options[:id]
        elms.each do |elm|
          assert_select elm,
                        "##{options[:id]}",
                        true,
                        "The template doesn't contain a <form> element with the id #{options[:id]}"
        end
      end

      unless %w{get post}.include? options[:method]
        assert_select "input[name=_method][value=?]",
                      options[:method],
                      true,
                      "The template doesn't contain a <form> for #{expected_url} using the method #{options[:method]}"
      end
    end
  end
end

The macro checks both the form’s action attribute as well as the hidden input Rails uses to specify the method where necessary. I’ve also been playing with creating a macro to check for a form with specific fields, such as should_render_a_form_with_fields. This is proving to be slightly more difficult than I originally anticipated, and defining a nice interface to the method has been rather tricky (a rough sketch of the sort of thing I have in mind is below).
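For what it’s worth, here is a rough sketch of the sort of interface I mean. Very much a sketch rather than working, tested code: it only covers plain input fields and reuses the same assert_select style as the macro above:

def should_render_a_form_with_fields(description, *fields, &block)
  should "render a form to #{description} with fields #{fields.join(', ')}" do
    expected_url = instance_eval(&block)
    assert_select "form[action=?]", expected_url do
      fields.each do |field|
        assert_select "input[name=?]", field, true,
                      "The form doesn't contain an input named #{field}"
      end
    end
  end
end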

Vlad the Deployer Hoptoad Integration


I’ve just had to setup Hoptoad for one of our apps that uses Vlad for deployment, the integration isn’t quite as easy as with Capistrano. I couldn’t find much information on how to integrate the two so I thought I’d share my solution.

The original Hoptoad task for use with Capistrano needed a little modification.

task :notify_hoptoad do
  rails_env = fetch(:rails_env, "production")
  local_user = ENV['USER'] || ENV['USERNAME']
  notify_command = "rake hoptoad:deploy TO=#{rails_env} REVISION=#{current_revision} REPO=#{repository} USER=#{local_user}"
  puts "Notifying Hoptoad of Deploy (#{notify_command})"
  `#{notify_command}`
  puts "Hoptoad Notification Complete."
end

fetch is a Capistrano method so it needed to be removed; we can use the Vlad environment pattern for this. I also wanted to use the git information for the user instead of the system user, and finally, as far as I can tell, the git commit SHA being deployed is not available in Vlad.
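For reference, rails_env and repository come from the usual set calls in the Vlad config, something along these lines (the values here are made up):

# config/deploy.rb (sketch)
set :repository, "git@github.com:kernow/my_app.git"
set :rails_env,  "production"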

In the Vlad deployment script I added a Hoptoad task to replace the default Capistrano task provided by Hoptoad.

task :notify_hoptoad => [:git_user, :git_revision] do
  notify_command = "rake hoptoad:deploy TO=#{rails_env} REVISION=#{current_sha} REPO=#{repository} USER='#{current_user}'"
  puts "Notifying Hoptoad of Deploy (#{notify_command})"
  `#{notify_command}`
  puts "Hoptoad Notification Complete."
end

Then I added it as a dependency for the deploy task:

task :deploy => [:update, :migrate, :start_app, :notify_hoptoad]

There are a couple of helper tasks I’ve added to get the git user and the SHA of the commit being deployed:

remote_task :git_revision do
  set :current_sha, run("cd #{File.join(scm_path, 'repo')}; git rev-parse origin/master").strip
end

task :git_user do
  set :current_user, `git config --get user.name`.strip
end

For the Love of Table Football, Why I Stayed Up for 48 Hours


Update: wuzlr.com is no longer a live site

What a weekend! It all started on Friday night with a feverish last minute planning session on how we would implement “Wuzlr”. We’d bounced around some ideas earlier in the week and had a pretty good idea of what we wanted; whittling that down into a set of features we could implement in 48 hours was no easy task, there were so many good ideas and we didn’t have time to implement them all. At 1am it all began…

The Pitch

If your office is anything like ours things get pretty serious whenever a game of table football breaks out (especially when @theozaurus is playing). We’ve wanted a way to track who’s the best in the office for quite some time; finally we have just the thing, and so do you. Wuzlr is a table football league tracking application that lets you see performance over time with all sorts of fun and interesting facts and figures displayed.

Wuzlr (or wuzler if you use the correct non web 2.0 spelling) is the Austrian word for table football, and it just so happened to be the first domain we came across that was still available. We also liked the hat tip to the ‘e’ dropping crowd; no, not you party people, I’m talking about flickr and the like.

Application Features

  • Create leagues and record all your games, Check out the Jiva office league
  • Compare yourself to other players, Me Vs. Theo
  • View your nemesis, best team mate, worst team mate and more, my player page
  • League standings, games played per day, table bias, most dedicated players (who’s put the most time in)

Our Team

Yes we know the site looks terrible in IE, who uses IE anyway?

My Computer Loves Autotest-fsevent


I’m a big fan of autotest for testing, unfortunately it does stress my poor MacBook Pro and makes the fan go berserk if running anything other than the most simple of test suites. This is due to autotest having to check each file in your project for changes.

No more will autotest stress out my Mac: autotest-fsevent is a great gem that uses OS X’s FSEvent system to be notified when files have changed rather than having to constantly poll the filesystem. You need Mac OS X 10.5 or later to take advantage of FSEvent.
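If memory serves, getting it going is just a case of installing the gem and requiring it from your .autotest file, something like:

# ~/.autotest
require 'autotest/fsevent'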

The other nice thing autotest-fsevent does is take care of all the .autotest config options; I managed to delete my entire config file, which I’d been tweaking for as long as I can remember trying to get the perfect setup.

I can now run even the most demanding of test suites and my computer barely breaks a sweat. Thanks bitcetera, my computer ♥’s you.

DRYing Up Multiple User Contexts With Shoulda Macros


Today I’ve been writing tests for a legacy Rails application I inherited recently. The application has several user roles, each role having varying permissions. To deal with this nicely I set up shoulda macros to create contexts for each of the user roles (public user, standard user, admin user etc.), then in my tests I could write…

public_context do

  context "on GET to :index" do
    setup do
      get :index
    end

    should_redirect_to("root url") { root_url }
  end
end

signed_in_user_context do

  context "on GET to :index" do
    setup do
      get :index
    end

    should_redirect_to("user url") { user_url }
  end
end

This is pretty standard practice now and something I picked up from looking at the code produced by the guys at Thought Bot. While working on the test suite it became apparent many of the methods behaved in the same way for multiple user roles. I wanted to come up with a way to run a group of tests under multiple user roles without having to duplicate any code. Shoulda macros to the rescue again! After creating another macro to deal with multiple contexts I can write my tests like this…

multiple_contexts 'public_context', 'signed_in_user_context' do

  context "on GET to :show" do
    setup do
      @advert = Factory(:advert)
      get :show, :id => @advert.to_param
    end

    should_render_with_layout :application
    should_render_template :show
    should_not_set_the_flash
    should_assign_to( :advert ) { @advert }
    should_respond_with :success
  end
end

And the shoulda macro code itself…

def multiple_contexts(*contexts, &blk)
  contexts.each { |context|
    send(context, &blk) if respond_to?(context)
  }
end

def public_context(&blk)
  context "The public" do
    setup { sign_out }
    merge_block(&blk)
  end
end

def signed_in_user_context(&blk)
  context "A signed in user" do
    setup do
      @user = Factory(:user)
      sign_in_as @user
    end
    merge_block(&blk)
  end
end