Inodes


Taking your cucumber tests back to the future with Delorean

John Ferlito · 31 March 2010

I’m currently working on an API for Vquence’s VQdata product, which allows our customers to use a REST interface to retrieve videos matching keywords they have previously stored. While writing tests I needed to be able to mock out the Time object so that my tests would be deterministic with respect to time.

I remembered listening to a Ruby5 podcast which mentioned a great little gem called Delorean.

Delorean makes it easy to mock time in your tests. In no time I had hooked it up to Cucumber.

In features/support/delorean.rb:

Ruby
require 'delorean'  
                    
# Make sure we fix the time up after each scenario
After do
  Delorean.back_to_the_present
end

and then in features/step_definitions/delorean_steps.rb:

Ruby
Given /^I time travel to (.+)$/ do |period|
  Delorean.time_travel_to(period)
end      

This lets me create steps like:

Gherkin
  Scenario: Link attributes are correct for yesterday
    Given I time travel to 2010-02-01 05:00
    When I GET the videos keywords feeds page
    Then I should see "start_time=2010-02-01T00:00:00"

Some other expressions you can use with Delorean are:

  • 2 weeks ago
  • tomorrow
  • next tuesday 5pm

You can find more examples in the documentation for the Chronic gem, which Delorean uses to parse these expressions.
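
You can also exercise these expressions directly outside of Cucumber. Here is a quick sketch (assuming only that the delorean gem is installed; this snippet is not part of the Cucumber suite above):

Ruby
require 'delorean'

# Jump to a point in time described in natural language, then check the clock
Delorean.time_travel_to('2 weeks ago')
puts Time.now   # roughly two weeks in the past

Delorean.time_travel_to('next tuesday 5pm')
puts Time.now   # the upcoming Tuesday at 17:00

# Always return to real time when you're done
Delorean.back_to_the_present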

Adding multiple database support to Cucumber

John Ferlito · 8 October 2009

The Vqmetrics application needs to connect to two different databases. The first holds the videos, authors and their relevant statistics, while the second database holds the users, monitors and trackers.

We do this by specifying two databases in config/database.yml.

YAML
development:
  database: vqmetrics_devel
  <<: *login_dev_local

vqdata_development: &VQDATA_TEST
  database: vqdata_devel
  <<: *login_dev_local

So by default the vqmetrics_devel database will be used. When we need a model to connect to the vqdata_devel database instead, we use

Ruby
class Video < ActiveRecord::Base
  establish_connection "vqdata_#{RAILS_ENV}"
end

and for migrations that need to connect to this database we do the following.

Ruby
class InitialSetup < ActiveRecord::Migration
  def self.connection
    Video.connection
  end
end
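
Fleshed out, such a migration might look like the following sketch (the table and columns here are made up for illustration); because connection is overridden, every statement inside it runs against the vqdata database:

Ruby
class CreateVqdataVideos < ActiveRecord::Migration
  # Route this migration's statements to the vqdata database
  def self.connection
    Video.connection
  end

  def self.up
    create_table :videos do |t|
      t.string :title
      t.timestamps
    end
  end

  def self.down
    drop_table :videos
  end
end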

This setup works really well. However, recently I moved this application over to using Cucumber for testing. Tests worked fine the first time they were run, but not the second time.

I discovered that the transactions on the second database were not being rolled back as they should be. Cucumber only sets up the first database for rollback by using

Ruby
ActiveRecord::Base.connection

when it should be rolling them all back by looping through

Ruby
ActiveRecord::Base.connection_handler.connection_pools.values.map do |pool|
  pool.connection
end

I’ve filed a bug at Lighthouse.
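
In the meantime, a possible workaround (a sketch only, untested, and assuming nothing else is already wrapping scenarios in transactions) is to open and roll back a transaction on every connection pool around each scenario:

Ruby
# features/support/multiple_databases.rb
Before do
  ActiveRecord::Base.connection_handler.connection_pools.values.each do |pool|
    pool.connection.begin_db_transaction
  end
end

After do
  ActiveRecord::Base.connection_handler.connection_pools.values.each do |pool|
    pool.connection.rollback_db_transaction
  end
end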

Puppet, Facts and Certificates

John Ferlito · 13 March 2008

I’m currently setting up Puppet at Vquence so that, among other things, we can deploy hosts into Amazon EC2 more easily.

To ensure a minimum setup time on a new server I wanted the setup to be as simple as

Bash
echo 'DAEMON_OPTS="-w 120 --fqdn newserver.vquence.com --server puppetmaster.vquence.com"' > /etc/default/puppet 

aptitude install puppet

This means that the puppet client will use newserver.vquence.com as the common name in the SSL certificate it creates for itself. On the puppet master the SSL cert name is then used to pick a node rather than the hostname reported by facter.

This means that I don’t need to worry about setting up /etc/hostname; even better, /etc/hostname can itself be managed by Puppet.

You can control this functionality on the puppet master by using the node_name option. From the docs

Bash
 # How the puppetmaster determines the client's identity 
 # and sets the 'hostname' fact for use in the manifest, in particular 
 # for determining which 'node' statement applies to the client. 
 # Possible values are 'cert' (use the subject's CN in the client's 
 # certificate) and 'facter' (use the hostname that the client 
 # reported in its facts)
 # The default value is 'cert'.
 # node_name = cert

The problem was that the ‘hostname’ fact wasn’t being set. It looks like there was a regression in SVN#1673 when some refactoring was performed.

I’ve filed bug #1133 and you can clone my git repository.

I haven’t included any tests in the patch as I’m not sure how to. The master.rb test already tests this functionality but doesn’t test that the facts object has actually been changed. I think a test on getconfig is probably required but I’m not sure how you would access the facts after calling it.

Update: This patch is now in Puppet as of 0.24.3.

Amazon EC2 ruby gem and large user_data

John Ferlito · 26 February 2008

When you create an instance in EC2 you can send Amazon some user data that is accessible by your instance. At Vquence we use this to send a script that gets executed at boot up. This script contains some OpenVPN and Puppet RSA keys, so it’s approaching about 10k in size.

This works without any problems when using the Java-based command-line tools. However, I was getting the following error when using the EC2 Ruby gem.

Plaintext
/usr/lib/ruby/1.8/net/protocol.rb:133:in `sysread': Connection reset by peer (Errno::ECONNRESET)
	from /usr/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
	from /usr/lib/ruby/1.8/timeout.rb:56:in `timeout'
	from /usr/lib/ruby/1.8/timeout.rb:76:in `timeout'
	from /usr/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
	from /usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
	from /usr/lib/ruby/1.8/net/protocol.rb:126:in `readline'
	from /usr/lib/ruby/1.8/net/http.rb:2020:in `read_status_line'
	from /usr/lib/ruby/1.8/net/http.rb:2009:in `read_new'
	 ... 6 levels...
	from ./lib/ec2helpers.rb:43:in `start_instance'
	from ./ec2-puppet:107
	from ./ec2-puppet:89:in `each_pair'
	from ./ec2-puppet:89

Some tcpdumping indicated that after receiving the request Amazon waits for a while and then sends a TCP reset. Not very nice at all. My next step was to use ngrep to compare the output of the command-line tools and the Ruby gem. This got nowhere fast, since the command-line tools use the SOAP API while the Ruby gem uses the Query API.

What I did notice, however, is that while the command-line tools performed a POST, the Ruby library performed a GET. At this stage I decided to test how much data I could send, so I started trying different user data sizes. The offending amount was around 7.8k, suspiciously close to 8k.

The HTTP/1.1 spec doesn’t place an actual limit on the length of a URI but leaves it up to the server:


The HTTP protocol does not place any a priori limit on the length of
a URI. Servers MUST be able to handle the URI of any resource they
serve, and SHOULD be able to handle URIs of unbounded length if they
provide GET-based forms that could generate such URIs. A server
SHOULD return 414 (Request-URI Too Long) status if a URI is longer
than the server can handle (see section 10.4.15).

Note: Servers ought to be cautious about depending on URI lengths
above 255 bytes, because some older client or proxy
implementations might not properly support these lengths.

Apache, for example, limits the request line by default to 8190 bytes, including the method and the protocol. You can change this using the LimitRequestLine directive.

I created a patch to modify the EC2 gem to use a POST instead of a GET, which has no such limitation. You can find the git tree for it at http://inodes.org/~johnf/git/amazon-ec2
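
The gist of the change, roughly sketched below (an illustration with placeholder parameters, not the actual patch): with a GET, every Query API parameter ends up in the request line, which servers commonly cap at around 8k, whereas a POST carries the parameters in the request body.

Ruby
require 'net/http'
require 'cgi'

# Placeholder parameters standing in for a signed Query API request
params = { 'Action' => 'RunInstances', 'UserData' => 'A' * 10_000 }

# GET: the whole parameter list lands in the request line
query = params.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join('&')
get   = Net::HTTP::Get.new("/?#{query}")

# POST: the parameters travel in the request body, so the request line stays tiny
post = Net::HTTP::Post.new('/')
post.set_form_data(params)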

Rails, ActiveRecord, MySQL, GUIDs and the rename_column bug

John Ferlito · 11 January 2008

Since I wasted over 4 hours of my life today working my way through this problem I feel the need to share.

Since it seems to be the in thing in the Web 2.0 space, just to be cool, we use GUIDs to identify different objects in our URLs at Vquence. For example, my randomly created vquence on Rails has a GUID of

Plaintext
cDuIhGWb8r3lDxaby-aaea

Andy Singleton has written a Rails plugin called, funnily enough, guid. This allows you to do the following in your model.

Ruby
class Vquence < ActiveRecord::Base
  usesguid :column => 'guid'
end

Once you do this you will automatically get GUID-looking identifiers in the DB and your application. The guid column in the DB gets mapped to Vquence.id so you can do things like

Ruby
Vquence.find('cDuIhGWb8r3lDxaby-aaea');

We used to use Sphinx as our search index; we now use Lucene. Sphinx requires that you have an integer id for each document in your index; this is to make your SQL queries much faster. The dumb way to create your index is to use queries like the following.

SQL
SELECT * FROM videos LIMIT 0,10000
SELECT * FROM videos LIMIT 10000,10000
...
SELECT * FROM videos LIMIT 990000,10000

I know this because it’s what we originally used with Lucene. This works fine until you reach about 1,000,000 rows. The problem is that, since there is no explicit ordering or range in the above query, the final query forces MySQL to work out what the first 1,000,000 rows are and then return only the last 10,000.

A much better way to do it is the following

SQL
SELECT * FROM videos WHERE integer_id >= 1 and integer_id <= 10000
SELECT * FROM videos WHERE integer_id >= 10001 and integer_id <= 20000
...
SELECT * FROM videos WHERE integer_id >= 990001 and integer_id <= 1000000

This is fast as long as integer_id is indexed.
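
Driven from Ruby, the same ranged walk might look something like this sketch (the batch size and the index_document helper are placeholders for whatever the indexer actually does):

Ruby
batch_size = 10_000
max_id     = Video.maximum(:integer_id)

(1..max_id).step(batch_size) do |start_id|
  videos = Video.find_by_sql(
    ["SELECT * FROM videos WHERE integer_id >= ? AND integer_id <= ?",
     start_id, start_id + batch_size - 1])
  videos.each { |video| index_document(video) } # hypothetical indexing helper
end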

So to accommodate this in Rails we began using migrations like the following.

Ruby
class Videos < ActiveRecord::Migration
  def self.up
    create_table :videos do |t|
      t.column :uuid, :string, :limit => 22, :null => false
      ...

      t.timestamps
    end
    add_index :videos, :uuid, :unique => true
    rename_column :videos, :id, :integer_id
  end

  def self.down
    drop_table :videos
  end
end

This was all done months ago and the repercussions didn’t rear their ugly head until today. Previously everything in the videos table had been created by our external crawler and Rails never needed to insert into the table. Today I wrote some code that inserted into the videos table and everything broke horribly.

The problem is that ActiveRecord can still see the integer_id field and tries to insert a 0 value into it. It isn’t clever enough to realise that it is an auto-increment field and to leave it alone. After some help from bitsweat on #RoR I implemented a dirty hack to hide the integer_id column from ActiveRecord. Thanks to Ruby, overriding the ActiveRecord internals is really easy, and I added the following to our guid plugin.

Ruby
  # HACK (JF) - This is too evil to even blog about
  # When we use guid as a primary key we usually rename the original 'id'
  # field to 'integer_id'. We need to hide this from rails so it doesn't
  # mess with it. WARNING: This means once you use usesguid anywhere you can
  # never access a column in any table anywhere called 'integer_id'

class ActiveRecord::Base
  private
    alias :original_attributes_with_quotes :attributes_with_quotes

    def attributes_with_quotes(include_primary_key = true, include_readonly_attributes = true)
      # Pass the caller's flags straight through rather than forcing them to true
      quoted = original_attributes_with_quotes(include_primary_key, include_readonly_attributes)
      quoted.delete('integer_id')
      quoted
    end
end

So this worked like a charm and after 4 hours I thought my pain was over, but then I tried to add a second row to my test database. This resulted in the following.

Plaintext
 Mysql::Error: Duplicate entry '0' for key 1: INSERT INTO `videos` (`updated_at`, `sort_order`, `guid`, `description`,
 `user_id`, `created_at`) VALUES('2008-01-11 16:45:05', NULL, 'bcOMPqWaGr3k5CabxfFyeK', '', 5, '2008-01-11 16:44:28');

I ran the same SQL with the MySQL client and got the same error. I then looked at the table and saw the following

Plaintext
mysql> show columns from moo;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| integer_id | int(11)     | NO   | PRI | 0       |       |
| guid       | varchar(22) | NO   | UNI |         |       |
+------------+-------------+------+-----+---------+-------+

What I expected to see was

Plaintext
mysql> show columns from moo;
+------------+-------------+------+-----+---------+----------------+
| Field      | Type        | Null | Key | Default | Extra          |
+------------+-------------+------+-----+---------+----------------+
| integer_id | int(11)     | NO   | PRI | NULL    | auto_increment |
| guid       | varchar(22) | NO   | UNI |         |                |
+------------+-------------+------+-----+---------+----------------+

The difference is that when the column was renamed it seems to have lost its AUTO_INCREMENT and NOT NULL properties. Some investigation showed that the SQL being used to rename the column was

SQL
ALTER TABLE `videos` CHANGE `id` `integer_id` int(11)

when it should be

SQL
ALTER TABLE `videos` CHANGE `id` `integer_id` int(11) NOT NULL AUTO_INCREMENT

It seems that this is already filed as a bug on the Rails site, including a patch.

Funnily enough that bug is owned by bitsweat. It seems he’s managed to help me out twice in one day 🙂 It doesn’t seem that the fix made it into Rails 2.0 though, so until it does, be careful about renaming columns in migrations.
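
In the meantime, one way to sidestep the problem (a sketch, assuming MySQL and the videos schema above) is to skip rename_column and issue the full ALTER yourself with execute:

Ruby
class RenameVideosId < ActiveRecord::Migration
  def self.up
    # Spell out NOT NULL and AUTO_INCREMENT so the column keeps them
    execute "ALTER TABLE `videos` CHANGE `id` `integer_id` int(11) NOT NULL AUTO_INCREMENT"
  end

  def self.down
    execute "ALTER TABLE `videos` CHANGE `integer_id` `id` int(11) NOT NULL AUTO_INCREMENT"
  end
end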
