<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Inodes</title><description>John Ferlito - Fractional CTO</description><link>https://inodes.org</link><language>en-AU</language><item><title>Moving from WordPress to Astro</title><link>https://inodes.org/2026/03/15/moving-from-wordpress-to-astro</link><guid isPermaLink="true">https://inodes.org/2026/03/15/moving-from-wordpress-to-astro</guid><description>After nearly 20 years on WordPress, I&apos;ve finally moved inodes.org to Astro. The site is now a fully static site and deployed to AWS. Why move? WordPress served…</description><pubDate>Sun, 15 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;After nearly 20 years on WordPress, I&apos;ve finally moved inodes.org to
&lt;a href=&quot;https://astro.build&quot;&gt;Astro&lt;/a&gt;. The site is now fully static and deployed
to AWS.&lt;/p&gt;
&lt;h2&gt;Why move?&lt;/h2&gt;
&lt;p&gt;WordPress served me well, but it felt like overkill for what is fundamentally
a simple site. I wanted something that was:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fast&lt;/strong&gt; — static HTML served from a CDN&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simple&lt;/strong&gt; — markdown files in a git repository&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cheap&lt;/strong&gt; — S3 + CloudFront costs practically nothing&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Secure&lt;/strong&gt; — no database, no PHP, no attack surface&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But mainly, after migrating to WP Engine about a year ago, I wanted to save
the $$$ and put it towards Lego!&lt;/p&gt;
&lt;h2&gt;RSS feed URL change&lt;/h2&gt;
&lt;p&gt;If you were subscribed to the old WordPress RSS feed, you&apos;ll need to update
your feed reader. The old feed was at:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;https://inodes.org/feed/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The new feed is at:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;https://inodes.org/rss.xml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The old &lt;code&gt;/feed/&lt;/code&gt; URL will redirect to &lt;code&gt;/rss.xml&lt;/code&gt; automatically, so most feed
readers should pick up the change. But if yours doesn&apos;t, update the URL
manually.&lt;/p&gt;
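&lt;p&gt;If your reader doesn&apos;t follow the redirect, a one-liner can fix up an OPML export (a sketch; &lt;code&gt;feeds.opml&lt;/code&gt; is a hypothetical export file from your reader):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Rewrite the old WordPress feed URL to the new Astro one in-place
sed -i.bak &apos;s|https://inodes.org/feed/|https://inodes.org/rss.xml|g&apos; feeds.opml
&lt;/code&gt;&lt;/pre&gt;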
&lt;h2&gt;The stack&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://astro.build&quot;&gt;Astro&lt;/a&gt;&lt;/strong&gt; v6 — static site generator&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://tailwindcss.com&quot;&gt;Tailwind CSS&lt;/a&gt;&lt;/strong&gt; v4 — styling&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://pagefind.app&quot;&gt;Pagefind&lt;/a&gt;&lt;/strong&gt; — client-side search&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://aws.amazon.com/cdk/&quot;&gt;AWS CDK&lt;/a&gt;&lt;/strong&gt; — infrastructure as code (S3,
CloudFront, ACM, Route 53)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://shiki.style&quot;&gt;Shiki&lt;/a&gt;&lt;/strong&gt; — syntax highlighting&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>My next big thing</title><link>https://inodes.org/2022/02/01/my-next-big-thing</link><guid isPermaLink="true">https://inodes.org/2022/02/01/my-next-big-thing</guid><description>Originally posted on LinkedIn &lt;https://www.linkedin.com/posts/johnf_startup-fundraising-angelinvesting-activity-6894054069724422145-LChf TL;DR - Founding a new…</description><pubDate>Mon, 31 Jan 2022 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Originally posted on LinkedIn: &lt;a href=&quot;https://www.linkedin.com/posts/johnf_startup-fundraising-angelinvesting-activity-6894054069724422145-LChf&quot;&gt;https://www.linkedin.com/posts/johnf_startup-fundraising-angelinvesting-activity-6894054069724422145-LChf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Founding a new startup, Gladly, where we are exploring a &lt;em&gt;&lt;strong&gt;fresh take on favours&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Stepping back at AC3 into a Strategic Advisory role&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&quot;/blog/2022/Cover-Page-Panel-1-V2.png&quot;&gt;&lt;img src=&quot;/blog/2022/Cover-Page-Panel-1-V2.png&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;At the start of this month, I stepped back from my role as Head of Product and Technology at AC3.&lt;/p&gt;
&lt;p&gt;My Bulletproof/AC3 story kicked off in late December of 2000 when I joined @Anthony Woodward as a co-founder of Bulletproof. It has been a 21-year journey with many key milestones and pivots along the way that I am extremely proud of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Managed Linux Gateways&lt;/li&gt;
&lt;li&gt;Managed Cisco&lt;/li&gt;
&lt;li&gt;Windows and Linux Shared and Dedicated Web hosting&lt;/li&gt;
&lt;li&gt;Australia’s first VMware Public Cloud (Not that we knew what a Public Cloud was back then)&lt;/li&gt;
&lt;li&gt;Managed AWS and Azure&lt;/li&gt;
&lt;li&gt;Listing on the ASX as BPF&lt;/li&gt;
&lt;li&gt;Acquisitions
&lt;ul&gt;
&lt;li&gt;Infoplex, Cloud House, Pantha Corp&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Being acquired by AC3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I want to thank Anthony for taking the jump all those years ago; it has been a phenomenal journey that I wouldn’t trade in a heartbeat.&lt;/p&gt;
&lt;p&gt;The last four years at AC3 have been a transformative experience for me. @Simon Xistouris has been an exceptional leader who has helped me level up. I want to thank Simon for welcoming me into the AC3 family with open arms. There is so much of the ethos, values and approach that I will take with me into all future endeavours. I am going to miss working daily with the AC3 team, in particular the executive team of Steph, James, Claudia, Bogdan and Harry. They are an amazing group of individuals and AC3 will continue to do amazing things under their stewardship. I’m glad to be able to stick around in an advisory capacity.&lt;/p&gt;
&lt;p&gt;For my next big adventure, I’m launching a new startup with @Chris Johnson as my co-founder. We are exploring a &lt;strong&gt;fresh take on favours&lt;/strong&gt; and launching the &lt;strong&gt;favour economy.&lt;/strong&gt; We’ve put in significant work over the last 6 months and are looking to launch in February. You can get a sneak peek at &lt;a href=&quot;https://gladlyapp.com&quot;&gt;https://gladlyapp.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;P.S. We are fundraising at the moment, if you are interested please reach out to me.&lt;/p&gt;
</content:encoded></item><item><title>Rekindling the fire</title><link>https://inodes.org/2018/09/21/rekindling-the-fire</link><guid isPermaLink="true">https://inodes.org/2018/09/21/rekindling-the-fire</guid><description>It has been over 8 years since my last post here. High time I rectified that. This is a test to make sure everything is still hooked together right. Watch this…</description><pubDate>Fri, 21 Sep 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It has been over 8 years since my last post here. High time I rectified that. This is a test to make sure everything is still hooked together right.&lt;/p&gt;
&lt;p&gt;Watch this space!&lt;/p&gt;
</content:encoded></item><item><title>Switching AWS Profiles</title><link>https://inodes.org/2018/09/21/switching-aws-profiles</link><guid isPermaLink="true">https://inodes.org/2018/09/21/switching-aws-profiles</guid><description>I tend to have a lot of projects on the go at once, whether they are Bulletproof related, personal side projects or helping out the odd startup. This means…</description><pubDate>Fri, 21 Sep 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I tend to have a lot of projects on the go at once, whether they are Bulletproof related, personal side projects or helping out the odd startup. This means that I tend to need to switch between AWS accounts a lot. Like many others I use AWS profiles to manage credentials for different AWS accounts. These credentials are stored in &lt;em&gt;$HOME/.aws/credentials&lt;/em&gt; and are used by the various AWS SDKs, the aws-cli and frameworks like &lt;a href=&quot;https://serverless.com&quot;&gt;serverless&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Currently, I have seven different profiles. This means I spend a lot of time manually typing things like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;export AWS_PROFILE=inodes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I like to use lots of terminal tabs, so doing this every time I open a new tab gets old pretty quickly. After a long week of hacking marathons, I decided I needed some tab completion goodness and came up with the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# AWS Profile switcher

aws-profile() {
  export AWS_PROFILE=$1
}

_aws_profile_completer() {
  local cur profiles

  # Profile names are the [section] headers in ~/.aws/credentials
  profiles=$(grep &apos;^\[&apos; ~/.aws/credentials | sed &apos;s/^.//;s/.$//&apos;)

  cur=&quot;${COMP_WORDS[COMP_CWORD]}&quot;
  COMPREPLY=( $(compgen -W &quot;${profiles}&quot; -- &quot;${cur}&quot;) )

  return 0
}

complete -F _aws_profile_completer aws-profile
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This allows me to type (tab complete) &lt;strong&gt;aws-profile&lt;/strong&gt; and then complete the name of the AWS profile.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/blog/2018/aws-profiles.gif&quot; alt=&quot;aws-profiles demo&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The next thing I want to explore is &lt;a href=&quot;https://direnv.net&quot;&gt;direnv&lt;/a&gt;, which was recommended by &lt;a href=&quot;https://twitter.com/gergnz&quot;&gt;Greg Cockburn&lt;/a&gt;. This should enable auto profile switching based on the directory I&apos;m in and speed things up even more.&lt;/p&gt;
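&lt;p&gt;For reference, the direnv setup would look something like this (a sketch, assuming direnv&apos;s standard &lt;em&gt;.envrc&lt;/em&gt; behaviour; I haven&apos;t set this up yet):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# .envrc in the project directory; run &apos;direnv allow&apos; once to approve it.
# direnv exports this on cd into the directory and unsets it on the way out.
export AWS_PROFILE=inodes
&lt;/code&gt;&lt;/pre&gt;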
</content:encoded></item><item><title>Linux Australia – President’s Report August 2010</title><link>https://inodes.org/2010/08/14/linux-australia-presidents-report-august-2010</link><guid isPermaLink="true">https://inodes.org/2010/08/14/linux-australia-presidents-report-august-2010</guid><description>It’s been just over a month since my last president’s report, which according to past presidents means that I’m doing well. Apparently the first report is the…</description><pubDate>Sat, 14 Aug 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It’s been just over a month since my last president’s report, which according to past presidents means that I’m doing well. Apparently the first report is the easy one; maintaining momentum is the key!&lt;/p&gt;
&lt;h3&gt;LCA2011&lt;/h3&gt;
&lt;p&gt;We are now well and truly into the run up to linux.conf.au 2011. The LCA Call for Papers, Miniconfs, Posters and Tutorials (that is quite a mouthful!) has been open since the 13th of July. The CFP closes at midnight tonight, so it’s still not too late to get a proposal in.&lt;/p&gt;
&lt;p&gt;Everyone should spend the next 5 minutes thinking of the one person or topic they would love to be able to listen to at LCA next year. Now, go and email that person and convince them to submit something. But hurry, you don’t have long.&lt;/p&gt;
&lt;p&gt;The Paper Review Committee will be performing an online review of all the papers over the next 2 weeks. They will then meet in Sydney for a final one day review to decide what makes it into the conference. I’ve been involved in the process for the last few years, and I can tell you that it is not an easy process. The quality of the submissions we receive for LCA each year is extremely high and it is a very difficult task to whittle down 200-300 submissions into the 90 or so proposals we have space for.&lt;/p&gt;
&lt;h3&gt;LCA2012 Bid process&lt;/h3&gt;
&lt;p&gt;Submissions for LCA2012 close on the 15th of August, which is tomorrow night! So far we have had an expression of interest from Ballarat, and the odd rumour that other cities also have some teams thinking about it.&lt;/p&gt;
&lt;p&gt;Once the bids come in, the council will take time to review them, and then we will begin visiting each team so that they can pitch to us in person why they should earn the honour of hosting the next LCA.&lt;/p&gt;
&lt;p&gt;This year we have changed the process slightly and asked all the teams to post their submission publicly. I’m looking forward to reading the proposals and having a healthy community discussion about which city should host LCA.&lt;/p&gt;
&lt;h3&gt;Software Freedom Day&lt;/h3&gt;
&lt;p&gt;Software Freedom Day is just around the corner, being held on the 18th of September. SFD is a worldwide celebration of FOSS and also serves to educate the general public about the benefits of FOSS.&lt;/p&gt;
&lt;p&gt;According to the SFD website, it looks like we have about 6 teams registered in Australia. Noticeably missing are most of our capital cities. Please bring up SFD at your next LUG meeting or on your LUGs mailing list and try to organise an event in your area.&lt;/p&gt;
&lt;p&gt;This year Linux Australia will be assisting SFD teams by providing schwag from past LCAs to give away at events. You should see an email to the list with more details about this shortly.&lt;/p&gt;
&lt;p&gt;Also don’t forget that when you register your team on the official SFD website, Software Freedom International will send out SFD schwag for you to use on the day.&lt;/p&gt;
&lt;h3&gt;Australian Treasury Department, SBR and Auskey (Update)&lt;/h3&gt;
&lt;p&gt;As I mentioned last month, I’ve been doing some work in my capacity as President as well as my day job in regards to creating an Open Source project around the Australian Treasury’s Standard Business Reporting (SBR) project.&lt;/p&gt;
&lt;p&gt;We recently held a meeting with some representatives from the Department of Treasury, where we were able to discuss our plans and what is required to make SBR and Auskey available for the Open Source community. SBR have shown a keen interest in the project and have been quite helpful in making resources and people available to help us with the project.&lt;/p&gt;
&lt;p&gt;SBR have also recently announced that they will be supporting Linux on the AusKey website. This has not been possible up till this point as a browser plug-in is required to be able to interact with Auskey. SBR hopes to have a solution released by the end of the year and will be initially supporting Ubuntu. This means that Australian businesses using Open Source Software will soon be able to submit their BASs online again.&lt;/p&gt;
</content:encoded></item><item><title>Linux Australia - President&apos;s Report July 2010</title><link>https://inodes.org/2010/07/08/linux-australia-presidents-report-july-2010</link><guid isPermaLink="true">https://inodes.org/2010/07/08/linux-australia-presidents-report-july-2010</guid><description>It has been about 6 months since the current Linux Australia Council was voted in, and about a month since I became President, following James Turnbull’s…</description><pubDate>Thu, 08 Jul 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It has been about 6 months since the current Linux Australia Council was voted
in, and about a month since I became President, following James Turnbull’s
resignation. In that time, the Council has been working on implementing the
platform that we ran on. We have successfully managed to hold a Council meeting
every fortnight (with very few exceptions), allowing us to get together to
organise events and implement the goals of Linux Australia.&lt;/p&gt;
&lt;p&gt;There has been the odd murmur that Linux Australia is not doing a good enough
job of communicating with the community, and I would have to agree. While we
are sending out meeting minutes every fortnight, I think we are lacking more
direct communication about what the Council and Linux Australia are up to. To
that end, I would like to initiate a monthly President&apos;s report to try and get
the word out as to what we are doing.&lt;/p&gt;
&lt;h3&gt;Changes to the Council&lt;/h3&gt;
&lt;p&gt;First of all, on behalf of the Council and the rest of the community, I would
like to send a large Thank You to James Turnbull for all the work he did during
the first half of the year in his role as president. I would especially like to
draw attention to the work that James put into the Linux Australia Membership
Survey, results of which we plan to release in the next month. James will be
sorely missed, and we wish him all the best in his future endeavours in
Portland.&lt;/p&gt;
&lt;p&gt;I&apos;d also like to welcome Joshua Hesketh to the Council. Josh is already doing a
wonderful job as treasurer, as well as our liaison with the LCA2011 team.&lt;/p&gt;
&lt;h3&gt;Australian Treasury Department, SBR and Auskey&lt;/h3&gt;
&lt;p&gt;As many of you may be aware, the Australian Treasury has just released a new
project called Standard Business Reporting (SBR). This project aims to
standardise reporting to government, with the aim of becoming a centralised
point where businesses can submit forms to government. In essence, it is an API
which will allow standard government documents, like a BAS or employment
declaration, to be filed electronically. At the moment the ATO, ASIC and
various Offices of State Revenue are involved in the project. However, there is
a large amount of interest from other departments, like Medicare and
Centrelink. Hand-in-hand with this project is another sub-project called
AusKey, which is an all-of-government PKI system that is already beginning to
replace the existing ECI system used at the ATO to authenticate BAS filing.&lt;/p&gt;
&lt;p&gt;A few months ago, I was contacted by Ron Skeoch from Muli Management. Muli have
been involved in the Open Source community for a number of years, and support a
piece of accounting software targeted at the construction industry. Muli need
to have their software support the SBR system, and they were interested in my
assistance: firstly in helping them write the software to interface with SBR,
and secondly in assisting them to create this as a fully fledged open source
project that other projects could then use. At this stage, I put my Linux
Australia hat on, and indicated that we would like to work together with Muli
to help make that happen.&lt;/p&gt;
&lt;p&gt;While this process is still at an early stage, we have already submitted a
document to Treasury outlining the requirements for the Open Source community
to be able to interact with SBR. We also pointed out the current issues with
AusKey in relation to being able to file a BAS. The response from Treasury has
been very promising, and they are quite eager to work with Linux Australia and
Muli to try and aid the Open Source community in any way they can; including
potentially even placing the reference clients under an appropriate license, so
that we can make use of them.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;For purposes of transparency I would like to point out a potential
conflict of interest here. Muli Management is a customer of my business and has
engaged me to, among other things, write the code and help create the open
source project.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;LCA2011&lt;/h3&gt;
&lt;p&gt;Preparations for linux.conf.au 2011 in Brisbane are well under way. Some
members of the Council, along with past LCA organisers and the new LCA team,
met for Ghosts in April in Brisbane. This was an extremely valuable experience
where past organisers were able to pass on some wisdom, and the current team
was able to pass on some of the ideas they have in store for us next year. The
meeting was held at the venue itself, where we were able to take a short tour
of where the conference will be held as well as some of the surrounding areas.
I have a lot of confidence that Shaun and his team are going to put together an
excellent conference. The Call for Papers should open shortly, so now is the
time to start thinking about the presentation you want to give at the next LCA.&lt;/p&gt;
&lt;h3&gt;LCA2012 Bid process&lt;/h3&gt;
&lt;p&gt;We recently announced our request for formal submissions for hosting
linux.conf.au 2012. So far we have an official expression of interest from
Ballarat, and I have heard the odd rumour of goings on in Sydney and Canberra.
Submissions close on August 15th, just over a month away. That is still plenty
of time to put in a bid for the conference. If you think you might have it in
you, but need some co-conspirators, then please feel free to send the Council a
quick email. We may know of people in your area who are in the same position
and can help put you in touch with each other.&lt;/p&gt;
&lt;h3&gt;Media Sub-Committee&lt;/h3&gt;
&lt;p&gt;One area in which we have been lacking recently is getting our message about
things we care about out effectively to the media. This is in relation to
events we are holding, announcements about linux.conf.au and opinions on
relevant issues. The idea of a media sub-committee was originally raised at the
Face to Face meeting in February although it is not a new idea. There was a
press team once upon a time; the mailing list even still exists! I&apos;ve asked
James Purser to put together a team and a framework for it to work in, so that
not too great a burden is placed on any one member. If you are interested in
helping out with media related activities, whether on twitter or with media
organisations directly, please get in touch with James.&lt;/p&gt;
&lt;h3&gt;Linux Australia Membership Survey&lt;/h3&gt;
&lt;p&gt;As mentioned above, we recently ran a survey of Linux Australia Members. The
survey was aimed at the Australian FOSS community and our aim was to gather
information to aid us in making decisions about what Linux Australia is, and
the directions that it should take as an organisation. We had an excellent
response with 528 submissions, including three people claiming to be Linus
Torvalds. The Council is working at the moment on collating all of the results.
Our plan is to release all of the anonymised raw data to the community in the
next month. It is our hope that the community will help us in spending some
time to analyse the data and tell us what they think it means. In due course,
the Council will present some analyses of its own.&lt;/p&gt;
&lt;h3&gt;Events&lt;/h3&gt;
&lt;p&gt;We recently had two very successful events which were supported by Linux
Australia. The first was PyCon AU 2010, the first time this event has been run
in Australia, made possible by the hard work of Tim Ansell, Neil Davenport and
Richard Jones. I hear that the event was a tremendous success, and sold out
before close of registrations. A few attendees I&apos;ve talked to were very excited
and can&apos;t wait for next year&apos;s conference. The conference runs on a model where
the same team hosts it twice in a row in the same city, and a formal request
for bids to host PyCon AU 2012-2013 will go out in the next few months.&lt;/p&gt;
&lt;p&gt;The other event was the Sydney Education Expo. The Linux Australia stand at
this event was organised by Patrick Elliott-Brennan who did a wonderful job in
preparing everything required for the stand at the expo. Sridhar Dhanapalan
also assisted in his role as Technical Manager at OLPC Australia, which shared
the stand with us and provided some sponsorship.&lt;/p&gt;
&lt;p&gt;That&apos;s all for this month. It feels like we&apos;ve been fairly busy. Hopefully I&apos;ll
have just as much to write about next month. See you then!&lt;/p&gt;
</content:encoded></item><item><title>Devops Down Under 2010</title><link>https://inodes.org/2010/04/29/devops-down-under-2010</link><guid isPermaLink="true">https://inodes.org/2010/04/29/devops-down-under-2010</guid><description>I&apos;ll be at Devops Down Under this weekend. This should be an amazing weekend, filled with talks which aim to help bridge the Developer and Sysadmin divide.…</description><pubDate>Thu, 29 Apr 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ll be at &lt;a href=&quot;http://devopsdownunder.org&quot;&gt;Devops Down Under&lt;/a&gt; this weekend. This should be an amazing weekend, filled with talks which aim to help bridge the Developer and Sysadmin divide.&lt;/p&gt;
&lt;p&gt;I&apos;ll be giving a presentation entitled &lt;strong&gt;Commit early, Deploy often&lt;/strong&gt;. I&apos;ll be talking about using package management to empower developers to deploy applications locally just as they would in production. This also means sysadmins can deploy using the exact same environment.&lt;/p&gt;
&lt;p&gt;There are still &lt;a href=&quot;http://devopsdownunder.eventbrite.com/&quot;&gt;a few tickets left&lt;/a&gt;, so if you are in Sydney this weekend and are either a developer or a sysadmin then make sure you come along.&lt;/p&gt;
&lt;p&gt;Disclaimer: I&apos;m also sponsoring the event.&lt;/p&gt;
</content:encoded></item><item><title>Taking your cucumber tests back to the future with Delorean</title><link>https://inodes.org/2010/03/31/taking-your-cucumber-tests-back-to-the-future-with-delorean</link><guid isPermaLink="true">https://inodes.org/2010/03/31/taking-your-cucumber-tests-back-to-the-future-with-delorean</guid><description>I&apos;m currently working on an API for Vquence&apos;s VQdata product which allows our customers to use a REST interface to retrieve videos with certain keywords they…</description><pubDate>Wed, 31 Mar 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m currently working on an API for Vquence&apos;s VQdata product which allows our customers to use a REST interface to retrieve videos with certain keywords they have previously stored. While writing tests I needed to be able to mock out the Time object so that my tests were deterministic relative to time.&lt;/p&gt;
&lt;p&gt;I remembered listening to a &lt;a href=&quot;http://ruby5.envylabs.com/episodes/56-episode-54-february-26-2010&quot;&gt;Ruby5 podcast&lt;/a&gt; which mentioned a great little gem called &lt;a href=&quot;http://github.com/bebanjo/delorean&quot;&gt;Delorean&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Delorean makes it easy to mock time in your tests. In no time I had hooked it up to Cucumber.&lt;/p&gt;
&lt;p&gt;In features/support/delorean.rb:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;require &apos;delorean&apos;

# Make sure we fix the time up after each scenario
After do
  Delorean.back_to_the_present
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and then in features/step_definitions/delorean_steps.rb&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Given /^I time travel to (.+)$/ do |period|
  Delorean.time_travel_to(period)
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This lets me create steps like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  Scenario: Link attributes are correct for yesterday
    Given I time travel to 2010-02-01 05:00
    When I GET the videos keywords feeds page
    Then I should see &quot;start_time=2010-02-01T00:00:00&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Some other examples you can use with Delorean are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2 weeks ago&lt;/li&gt;
&lt;li&gt;tomorrow&lt;/li&gt;
&lt;li&gt;next tuesday 5pm&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can find more examples in the &lt;a href=&quot;http://chronic.rubyforge.org/&quot;&gt;Chronic gem documentation&lt;/a&gt; which Delorean uses to achieve this functionality.&lt;/p&gt;
</content:encoded></item><item><title>Careful what you call your server!</title><link>https://inodes.org/2010/03/01/careful-what-you-call-your-server</link><guid isPermaLink="true">https://inodes.org/2010/03/01/careful-what-you-call-your-server</guid><description>I was setting up a server recently and I was using KVM to virtualise a whole lot of hosts. Being fairly unimaginative I decided to call the machine kvm. As…</description><pubDate>Sun, 28 Feb 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I was setting up a server recently and I was using KVM to virtualise a whole lot of hosts. Being fairly unimaginative I decided to call the machine kvm. As usual I used LVM for the disks. Now on Ubuntu this means that by default the VG will be called the same as the host name. This means the root LV will appear on the system as &lt;strong&gt;/dev/kvm/root&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When the KVM modules are loaded, they try and create a device called &lt;strong&gt;/dev/kvm&lt;/strong&gt;. This fails pretty miserably since &lt;strong&gt;/dev/kvm&lt;/strong&gt; is already a directory due to LVM shenanigans.&lt;/p&gt;
&lt;p&gt;Not all is lost though if you&apos;ve done a lot of setup like I had. You can rename VGs. Simply boot from your Ubuntu install CD, choose rescue mode and then jump into a shell. First you deactivate the LVs using&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vgchange -a n
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;then you can rename the VG using&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vgrename kvm kvmvg
&lt;/code&gt;&lt;/pre&gt;
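&lt;p&gt;One extra check before rebooting (an assumption on my part, depending on your setup): if &lt;em&gt;/etc/fstab&lt;/em&gt; or your grub config refer to the volumes by the old VG name rather than by UUID, they need to be updated to match:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Point any /dev/kvm/* or /dev/mapper/kvm-* paths at the renamed VG
sed -i &apos;s|/dev/kvm/|/dev/kvmvg/|g; s|/dev/mapper/kvm-|/dev/mapper/kvmvg-|g&apos; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;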
&lt;p&gt;Not sure whether I should file this problem as a bug. It is a bit of a weird situation.&lt;/p&gt;
</content:encoded></item><item><title>Less is more for ISOs</title><link>https://inodes.org/2010/01/28/less-is-more-for-isos</link><guid isPermaLink="true">https://inodes.org/2010/01/28/less-is-more-for-isos</guid><description>I was tidying up some data recently and found a couple of ISO images lying around with cryptic file names. I didn&apos;t have cdinfo installed, so I thought I&apos;d run…</description><pubDate>Thu, 28 Jan 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I was tidying up some data recently and found a couple of ISO images lying around with cryptic file names. I didn&apos;t have &lt;strong&gt;cdinfo&lt;/strong&gt; installed, so I thought I&apos;d run &lt;strong&gt;less&lt;/strong&gt; hoping that the binary data would have some useful text in it. Instead I was surprised to see the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
CD-ROM is in ISO 9660 format
System id: LINUX
Volume id: Ubuntu-Server 9.10 i386
Volume set id: 
Publisher id: 
Data preparer id: 
Application id: GENISOIMAGE ISO 9660/HFS FILESYSTEM CREATOR
   (C) 1993 E.YOUNGDALE (C) 1997-2006 J.PEARSON/J.SCHILLING 
   (C) 2006-2007 CDRKIT TEAM
Copyright File id: 
Abstract File id: 
Bibliographic File id: 
Volume set size is: 1
Volume set sequence number is: 1
Logical block size is: 2048
Volume size is: 327972
El Torito VD version 1 found, boot catalog is in sector 1804
Joliet with UCS level 3 found
Rock Ridge signatures version 1 found
Eltorito validation header:
    Hid 1
    Arch 0 (x86)
    ID &apos;&apos;
    Key 55 AA
    Eltorito defaultboot header:
        Bootid 88 (bootable)
        Boot media 0 (No Emulation Boot)
        Load segment 0
        Sys type 0
        Nsect 4
        Bootoff 704 1796

/.disk
/README.diskdefines
&amp;lt;snip&amp;gt;File system listing&amp;lt;/snip&amp;gt;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Looks like less runs &lt;strong&gt;isoinfo -f -R -J -i ubuntu-9.10-server-i386.iso&lt;/strong&gt;. So I did have the tools I needed installed; I just didn&apos;t know it yet :).&lt;/p&gt;
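&lt;p&gt;For the curious: less doesn&apos;t know about ISOs itself. On Debian/Ubuntu the &lt;strong&gt;LESSOPEN&lt;/strong&gt; environment variable points at the lesspipe input filter, which picks a preview command based on the file extension, roughly like this (a simplified sketch, not the real lesspipe script):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Dispatch on extension, the way lesspipe does (simplified)
preview() {
  case &quot;$1&quot; in
    *.iso) isoinfo -f -R -J -i &quot;$1&quot; ;;  # ISO header and file listing
    *)     cat &quot;$1&quot; ;;                  # anything else: raw contents
  esac
}
&lt;/code&gt;&lt;/pre&gt;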
</content:encoded></item><item><title>Linux Australia Elections, Last chance to vote!</title><link>https://inodes.org/2010/01/10/linux-australia-elections-last-chance-to-vote</link><guid isPermaLink="true">https://inodes.org/2010/01/10/linux-australia-elections-last-chance-to-vote</guid><description>If you haven&apos;t done so, please go and vote in the Linux Australia elections. If you aren&apos;t a member then just join first, membership is free. I&apos;m running for…</description><pubDate>Sun, 10 Jan 2010 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you haven&apos;t done so, please go and vote in the Linux Australia elections. If you aren&apos;t a member then just join first, membership is free.&lt;/p&gt;
&lt;p&gt;I&apos;m running for the position of Treasurer, but you don&apos;t need to vote for me since I&apos;m running unopposed.&lt;/p&gt;
&lt;p&gt;I&apos;m running on a common platform with a group of other like minded individuals. You can find the details of the platform &lt;a href=&quot;http://docs.google.com/Doc?docid=0AQ1T1dSjXs2iYWpneGdnYzUzanpnXzR3c2R6ZmNoaA&amp;amp;hl=en&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The main reason I&apos;m running is I believe that Linux Australia can achieve so much more than it does today. Linux Australia should not simply be a conduit for linux.conf.au.&lt;/p&gt;
&lt;p&gt;I want to help turn Linux Australia into an organisation that is relevant to all of us. It should be an organisation that not only fosters and supports the community but also represents the community.&lt;/p&gt;
&lt;p&gt;We should offer supportive services to our members, spread the FOSS message through the community as well as actively lobby government for the things we believe in.&lt;/p&gt;
&lt;p&gt;Most importantly it is essential that we all become involved. The community is nothing without people to move it forwards. So I would encourage you to vote for&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;President&lt;/strong&gt; James Turnbull&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vice President&lt;/strong&gt; Lindsay Holmwood&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Secretary&lt;/strong&gt; Peter Lieverdink&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Treasurer&lt;/strong&gt; John Ferlito&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ordinary Committee Members&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Alice Boxhall&lt;/li&gt;
&lt;li&gt;Elspeth Thorne&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once you have finished voting, go and join the &lt;a href=&quot;http://lists.linux.org.au&quot;&gt;mailing lists&lt;/a&gt; and get involved.&lt;/p&gt;
</content:encoded></item><item><title>Adding multiple database support to Cucumber</title><link>https://inodes.org/2009/10/08/adding-multiple-database-support-to-cucumber</link><guid isPermaLink="true">https://inodes.org/2009/10/08/adding-multiple-database-support-to-cucumber</guid><description>The Vqmetrics application needs to connect to two different databases. The first holds the videos, authors and their relevant statistics, while the second…</description><pubDate>Thu, 08 Oct 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;a href=&quot;http://vquence.com.au&quot;&gt;Vqmetrics&lt;/a&gt; application needs to connect to two different databases. The first holds the videos, authors and their relevant statistics, while the second database holds the users, monitors and trackers.&lt;/p&gt;
&lt;p&gt;We do this by specifying two databases in &lt;strong&gt;config/database.yml&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;development:
  database: vqmetrics_devel
  &amp;lt;&amp;lt;: *login_dev_local

vqdata_development: &amp;amp;VQDATA_TEST
  database: vqdata_devel
  &amp;lt;&amp;lt;: *login_dev_local

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So by default the &lt;strong&gt;vqmetrics_devel&lt;/strong&gt; database will be used. When we have a model that needs to connect to the &lt;strong&gt;vqdata_devel&lt;/strong&gt; database we use&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class Video &amp;lt; ActiveRecord::Base
  establish_connection &quot;vqdata_#{RAILS_ENV}&quot;
end

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and for migrations that need to connect to this database we do the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class InitialSetup &amp;lt; ActiveRecord::Migration
  def self.connection
    Video.connection
  end
end

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This setup works really well. However, recently I moved this application to &lt;a href=&quot;http://cukes.info&quot;&gt;Cucumber&lt;/a&gt; for testing. Tests worked fine the first time they were run, but not the second time.&lt;/p&gt;
&lt;p&gt;I discovered that the transactions on the second database were not being rolled back as they should be. Cucumber only sets up the first database for rollback by using&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ActiveRecord::Base.connection
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;whereas it should roll them all back by looping through&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ActiveRecord::Base.connection_handler.connection_pools.values.map do |pool|
  pool.connection
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I&apos;ve filed a bug at &lt;a href=&quot;https://rspec.lighthouseapp.com/projects/16211-cucumber/tickets/480-cucumber-only-turns-on-transactions-for-one-database&quot;&gt;lighthouseapp&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>rm -rf /usr/lib</title><link>https://inodes.org/2009/09/15/rm-rf-usrlib</link><guid isPermaLink="true">https://inodes.org/2009/09/15/rm-rf-usrlib</guid><description>So in another case of tab completion gone wrong I ended up staring at the following on my laptop. The command only ran for a few seconds so the damage wasn&apos;t…</description><pubDate>Tue, 15 Sep 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;So in another case of tab completion gone wrong I ended up staring at the following on my laptop.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;johnf@zoot:~/dev/vquence/metrics/trunk$ sudo rm -rf /usr/lib
^C
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The command only ran for a few seconds so the damage wasn&apos;t too bad, but what did I lose?&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;locate&lt;/strong&gt; command came to my rescue. locate runs out of cron, usually once a day, and creates a database with a list of every file on your machine. You can then use it to search for files. So to work out what was missing I did the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Get the list of files before we removed them
locate --regexp &apos;.&apos; &amp;gt; /tmp/before_rm

# update the locate database
sudo updatedb

# Get the list of current files on the system
locate --regexp &apos;.&apos; &amp;gt; /tmp/after_rm

# Create a list of what&apos;s missing
diff -u /tmp/before_rm /tmp/after_rm &amp;gt; /tmp/diff_rm
grep &apos;^-&apos; /tmp/diff_rm | grep -v &apos;^---&apos; | sed -e &apos;s/^-//&apos; &amp;gt; /tmp/missing_rm

# Ask the dpkg system what packages those files belong to
for i in `cat /tmp/missing_rm`
do
    dpkg -S $i;
done | awk &apos;{print $1}&apos; | sed -e &apos;s/:$//;s/,//g&apos; &amp;gt; /tmp/packages

# Reinstall those packages
sudo aptitude reinstall `cat /tmp/packages`
&lt;/code&gt;&lt;/pre&gt;
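&lt;p&gt;The diff step is easy to sanity check on its own. Here is a minimal sketch with two made-up file lists standing in for the locate output (the &lt;strong&gt;grep -v&lt;/strong&gt; keeps the unified-diff header line, which also starts with a dash, out of the list):&lt;/p&gt;

```shell
#!/bin/sh
set -e

# Fake "before" and "after" file lists standing in for the locate output
printf '/usr/lib/liba.so\n/usr/lib/libb.so\n/usr/bin/foo\n' > /tmp/before_rm_demo
printf '/usr/bin/foo\n' > /tmp/after_rm_demo

# diff exits 1 when the files differ, so guard it under set -e
diff -u /tmp/before_rm_demo /tmp/after_rm_demo > /tmp/diff_rm_demo || true

# Removed lines start with a single '-'; drop the '---' header line
grep '^-' /tmp/diff_rm_demo | grep -v '^---' | sed -e 's/^-//' > /tmp/missing_rm_demo

cat /tmp/missing_rm_demo
```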
&lt;p&gt;After this process it is probably worth re-running the steps from updatedb onwards to work out what is still missing.&lt;/p&gt;
&lt;p&gt;For the record I lost 102 files and had to reinstall 97 packages.&lt;/p&gt;
&lt;p&gt;Now back to real work!&lt;/p&gt;
</content:encoded></item><item><title>Building a Private PPA on Ubuntu</title><link>https://inodes.org/2009/09/14/building-a-private-ppa-on-ubuntu</link><guid isPermaLink="true">https://inodes.org/2009/09/14/building-a-private-ppa-on-ubuntu</guid><description>One of the things I love about the Ubuntu project and launchpad is the Personal Package Archive. PPAs make it so simple and easy to backport packages. The only…</description><pubDate>Mon, 14 Sep 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;One of the things I love about the Ubuntu project and launchpad is the Personal Package Archive. PPAs make it so simple and easy to backport packages. The only problem with PPAs is that they are public. I had a need to be able to host some private internal packages as well as squid with SSL support, which you can&apos;t distribute in binary form due to licensing restrictions.&lt;/p&gt;
&lt;p&gt;Basically I wanted to create the equivalent of an Ubuntu PPA service running on our own servers so we could place it behind our firewall. This post describes the process I followed to integrate &lt;a href=&quot;http://julien.danjou.info/rebuildd/&quot;&gt;rebuildd&lt;/a&gt; and &lt;a href=&quot;http://mirrorer.alioth.debian.org/&quot;&gt;reprepro&lt;/a&gt; to replicate a PPA setup.&lt;/p&gt;
&lt;p&gt;So first up install reprepro&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;aptitude install reprepro
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next we need to create a reprepro repository&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mkdir -p /srv/reprepro/{conf,incoming,incomingtmp}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we need to tell reprepro which distributions we care about. Create /srv/reprepro/conf/distributions with the following contents&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Suite: hardy
Version: 8.04
Codename: hardy
Architectures: i386 amd64 source
Components: main
Description: Local Hardy
SignWith: repository@inodes.org
DebIndices: Packages Release . .gz .bz2
DscIndices: Sources Release .gz .bz2
Tracking: all includechanges keepsources
Log: logfile
  --changes /srv/reprepro/bin/build_sources

Suite: intrepid
Version: 8.10
Codename: intrepid
Architectures: i386 amd64 source
Components: main
Description: Local Intrepid
SignWith: repository@inodes.org
DebIndices: Packages Release . .gz .bz2
DscIndices: Sources Release .gz .bz2
Tracking: all includechanges keepsources
Log: logfile
  --changes /srv/reprepro/bin/build_sources

Suite: jaunty
Version: 9.04
Codename: jaunty
Architectures: i386 amd64 source
Components: main
Description: Local Jaunty
SignWith: repository@inodes.org
DebIndices: Packages Release . .gz .bz2
DscIndices: Sources Release .gz .bz2
Tracking: all includechanges keepsources
Log: logfile
  --changes /srv/reprepro/bin/build_sources
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I also like to create a reprepro options file to set up some defaults. Edit /srv/reprepro/conf/options&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;verbose
verbose
verbose
verbose
verbose
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next we need to set up an incoming queue so that we can use dput to get the source packages into reprepro.&lt;/p&gt;
&lt;p&gt;vi /srv/reprepro/conf/incoming&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Name: incoming
IncomingDir: incoming
Allow: hardy intrepid jaunty
Cleanup: on_deny on_error
Tempdir: incomingtmp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The repository is now ready to go, so now we can set up Apache. Edit /etc/apache/sites-enabled/pppa&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ServerName packages.inodes.org
DocumentRoot /srv/reprepro
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We should also configure our sources.list to use these repositories. Edit /etc/apt/sources.list&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sources for rebuildd
deb-src http://packages.inodes.org hardy main
deb-src http://packages.inodes.org intrepid main
deb-src http://packages.inodes.org jaunty main
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next we want to set up dput to get the source packages into the archive. Edit ~/.dput.cf&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[DEFAULT]
default_host_main = notspecified

[local]
fqdn = localhost
method = local
incoming = /srv/reprepro/incoming
allow_unsigned_uploads = 0
run_dinstall = 0
post_upload_command = reprepro -V -b /srv/reprepro processincoming incoming
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So now we can do the following&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apt-get source squid3
cd squid3*
dch -i # increment version number
dpkg-buildpackage -sa -S
cd ..
dput local *changes
aptitude update
apt-get source squid3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So when you run dput, first it copies the source package files to /srv/reprepro/incoming and then it gets reprepro to process its incoming queue. This means that the source package is now sitting in the repository.&lt;/p&gt;
&lt;p&gt;So the second apt-get source should have downloaded the source package from our local repository which is exactly what rebuildd will do before it tries to build it.&lt;/p&gt;
&lt;p&gt;Next step is to setup rebuildd so that it builds the binary packages and installs them into the repository.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;aptitude install rebuildd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set it up to run out of init.d for the releases we care about. Edit /etc/default/rebuildd&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;START_REBUILDD=1
START_REBUILDD_HTTPD=1
DISTS=&quot;hardy intrepid jaunty&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now when a source package is uploaded into the repository we want to kick off rebuildd to build the package. We can do this through the reprepro log hooks. You&apos;ll notice in the conf/distributions above the following lines.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Log: logfile
  --changes /srv/reprepro/bin/build_sources
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This script will be run any time a .changes file is added to the repository. Create /srv/reprepro/bin/build_sources&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash

action=$1
release=$2
package=$3
version=$4
changes_file=$5

# Only care about packages being added
if [ &quot;$action&quot; != &quot;accepted&quot; ]
then
	exit 0
fi

# Only care about source packages
echo $changes_file | grep -q _source.changes
if [ $? = 1 ]
then
	exit 0
fi

# Kick off the job
echo &quot;$package $version 1 $release&quot;  | sudo rebuildd-job add
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This script checks that the right type of package is being added, then calls &lt;strong&gt;rebuildd-job&lt;/strong&gt; to ask for that specific package and version to be built for that Ubuntu release.&lt;/p&gt;
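&lt;p&gt;The two guards in the hook are easy to exercise on their own. A minimal sketch (the filenames below are made up):&lt;/p&gt;

```shell
#!/bin/sh

# Mirrors the guards in build_sources: only an "accepted" action on a
# _source.changes file should queue a build.
should_build() {
    action=$1
    changes_file=$2

    [ "$action" = "accepted" ] || return 1
    echo "$changes_file" | grep -q _source.changes
}

should_build accepted squid3_3.0-1~hardy_source.changes && echo "would queue a build"
should_build accepted squid3_3.0-1~hardy_i386.changes || echo "binary upload, skipped"
should_build removed squid3_3.0-1~hardy_source.changes || echo "not an accept, skipped"
```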
&lt;p&gt;Now the first thing that rebuildd does is download the source for the package. However we need to update the sources first, since our server doesn&apos;t know there are new files in the repository yet. So edit /etc/rebuildd/rebuilddrc and change&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;source_cmd = apt-get -q --download-only -t ${d} source ${p}=${v}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;to&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;source_cmd = /srv/reprepro/bin/get_sources ${d} ${p} ${v}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and create /srv/reprepro/bin/get_sources with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash

d=$1
p=$2
v=$3

sudo aptitude update &amp;gt;/dev/null
apt-get -q --download-only -t ${d} source ${p}=${v}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By this stage we have rebuildd building packages, but we need to make sure they get re-injected into the repository. We can do this with a post script. Edit /etc/rebuildd/rebuilddrc&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;post_build_cmd = /srv/reprepro/bin/upload_binaries ${d} ${p} ${v} ${a}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and create /srv/reprepro/bin/upload_binaries&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash

d=$1
p=$2
v=$3
a=$4

su -l -c &quot;reprepro -V -b /srv/reprepro include ${d} /var/cache/pbuilder/result/${p}_${v}_${a}.changes&quot; johnf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the su is in there because rebuildd needs to be able to access the GPG passphrase to sign the repository with. So rather than have a passphrase-less key we make sure that gpg-agent is running by adding the following to your .profile.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;if test -f $HOME/.gpg-agent-info &amp;amp;&amp;amp;    kill -0 `cut -d: -f 2 $HOME/.gpg-agent-info` 2&amp;gt;/dev/null; then
	GPG_AGENT_INFO=`cat $HOME/.gpg-agent-info`
	export GPG_AGENT_INFO
else
	eval `gpg-agent --daemon`
	echo $GPG_AGENT_INFO &amp;gt;$HOME/.gpg-agent-info
fi

GPG_TTY=`tty`
export GPG_TTY
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So that&apos;s it, you now have your own private PPA. Just in case you had fallen asleep, here is a little script I wrote so you can auto-build the source packages for each release you care about in one go.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash

set -e

RELEASES=&quot;hardy intrepid jaunty&quot;

if [ ! -f debian/changelog ]
then
	echo &quot;This isn&apos;t a debian repo&quot;
	exit 1
fi

# Check for changes
if [ `bzr st | wc -l` != &quot;0&quot; ]
then
	echo &quot;You have uncommitted changes!&quot;
	exit 1
fi

if [ -d ../tmpbuild ]
then
	echo &quot;The tmpbuild dir exists&quot;
	exit 1
fi

bzr export ../tmpbuild
cp debian/changelog ../tmpbuild.changelog
cd ../tmpbuild

PACKAGE=`head -1 debian/changelog | awk &apos;{print $1}&apos;`
VERSION=`head -1 debian/changelog | awk &apos;{print $2}&apos; | sed -r -e &apos;s/^\(//;s/\)$//&apos;`

for release in $RELEASES
do
	
	sed -r -e &quot;1s/) [^;]+; /~${release}) ${release}; /&quot; ../tmpbuild.changelog &amp;gt; debian/changelog 
	head -1 debian/changelog
	dpkg-buildpackage -S -sa
	dput local ../${PACKAGE}_${VERSION}~${release}_source.changes
done

cd ..
rm -rf tmpbuild
&lt;/code&gt;&lt;/pre&gt;
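&lt;p&gt;The changelog handling in that script can be checked in isolation. A quick sketch using a made-up first line in the standard Debian changelog format:&lt;/p&gt;

```shell
#!/bin/sh
set -e

# A made-up first line in the standard Debian changelog format
line='mypackage (1.2-3) unstable; urgency=low'

# Extract the package name and the version (stripping the parentheses)
PACKAGE=$(echo "$line" | awk '{print $1}')
VERSION=$(echo "$line" | awk '{print $2}' | sed -r -e 's/^\(//;s/\)$//')

# Retarget the entry at a given release, as the build loop does
release=hardy
retargeted=$(echo "$line" | sed -r -e "1s/\) [^;]+; /~${release}) ${release}; /")

echo "$PACKAGE $VERSION"
echo "$retargeted"
```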
&lt;p&gt;The above documentation is a bit of a brain dump of what I&apos;ve been working on for the past two days, and I&apos;m sure I&apos;ve left some bits out, so please give me any feedback you have in the comments.&lt;/p&gt;
</content:encoded></item><item><title>Linux Australia SysAdmin Day Gift</title><link>https://inodes.org/2009/08/01/linux-australia-sysadmin-day-gift</link><guid isPermaLink="true">https://inodes.org/2009/08/01/linux-australia-sysadmin-day-gift</guid><description>I would like to send out a big thank you to the Linux Australia Council. As I&apos;m sure you all know yesterday was System Administrator Appreciation Day. The…</description><pubDate>Sat, 01 Aug 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I would like to send out a big thank you to the &lt;a href=&quot;http://linux.org.au/About/Council&quot;&gt;Linux Australia Council&lt;/a&gt;. As I&apos;m sure you all know yesterday was &lt;a href=&quot;http://sysadminday.com&quot;&gt;System Administrator Appreciation Day&lt;/a&gt;. The Council decided to send me a &lt;a href=&quot;http://thinkgeek.com&quot;&gt;ThinkGeek&lt;/a&gt; gift certificate in appreciation for my work as an LA Admin.&lt;/p&gt;
&lt;p&gt;After hours of searching I finally decided on the &lt;a href=&quot;http://www.thinkgeek.com/computing/usb-gadgets/a7ea/&quot;&gt;USB SATA Drive Dock&lt;/a&gt; :).&lt;/p&gt;
&lt;p&gt;Again a big thank you to the LA council and to Steve Walsh for organising the gift certificate.&lt;/p&gt;
</content:encoded></item><item><title>Changing the type on a legacy table in ActiveRecord</title><link>https://inodes.org/2009/07/14/changing-the-type-on-a-legacy-table-in-activerecord</link><guid isPermaLink="true">https://inodes.org/2009/07/14/changing-the-type-on-a-legacy-table-in-activerecord</guid><description>I&apos;m doing some work for a client which involves extracting some data from a legacy database and displaying it in a web interface. One of the fields in the…</description><pubDate>Tue, 14 Jul 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m doing some work for a client which involves extracting some data from a legacy database and displaying it in a web interface. One of the fields in the table is the number of megabytes included in the quota. For some crazy reason this is defined as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CREATE TABLE quota (
  bandwidth_in_included DECIMAL(8,2)
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This means that in the web interface I get &lt;strong&gt;10,000.0 MB&lt;/strong&gt; instead of &lt;strong&gt;10,000 MB&lt;/strong&gt;. Notice the decimal point. Also I wanted bytes rather than MB since the legacy app was a bit all over the place in this regard.&lt;/p&gt;
&lt;p&gt;My first solution was to simply create a virtual attribute in the model to override the type.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class Quota &amp;lt; ActiveRecord::Base
  # We need it as an int and in bytes
  def bandwidth_in_included
    attributes[&apos;bandwidth_in_included&apos;].to_i * 1000
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This worked great, except that I&apos;m actually rendering the data to XML to be accessed over a REST service, so this was generating XML elements like&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;10000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Eventually I discovered that you can tell ActiveRecord to override the type, so I ended up with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class Quota &amp;lt; ActiveRecord::Base
  # We want to treat bandwidth_in_included as an integer
  class &amp;lt;&amp;lt; columns_hash[&apos;bandwidth_in_included&apos;]
    def type
      :integer
    end 
  end 

  # We need it as an int and in bytes
  def bandwidth_in_included
    attributes[&apos;bandwidth_in_included&apos;].to_i * 1000
  end
end
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>CruiseControl.rb and Bazaar</title><link>https://inodes.org/2009/05/22/cruisecontrolrb-and-bazaar</link><guid isPermaLink="true">https://inodes.org/2009/05/22/cruisecontrolrb-and-bazaar</guid><description>Today I was investigating Continuous Integration solutions for rails projects. In the end I ended up settling on CruiseControl.rb mainly because it&apos;s a rails…</description><pubDate>Fri, 22 May 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Today I was investigating Continuous Integration solutions for rails projects. In the end I settled on &lt;a href=&quot;http://cruisecontrolrb.thoughtworks.com/&quot;&gt;CruiseControl.rb&lt;/a&gt;, mainly because it&apos;s a rails app and most of the others were Java based.&lt;/p&gt;
&lt;p&gt;The only problem is that CruiseControl.rb doesn&apos;t currently support &lt;a href=&quot;http://bazaar-vcs.org&quot;&gt;Bazaar&lt;/a&gt;; in fact the released version only supports SVN, while the development version supports Git and Mercurial.&lt;/p&gt;
&lt;p&gt;Anyway after a couple of hours of hacking I came up with the following &lt;a href=&quot;http://inodes.org/blog/wp-content/uploads/2009/05/bazaar_scm.patch&quot;&gt;patch&lt;/a&gt; which I&apos;ve filed as a &lt;a href=&quot;https://cruisecontrolrb.lighthouseapp.com/projects/9150/tickets/236-add-bazaar-support#ticket-236-1&quot;&gt;bug&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Launchpad PPA builder status</title><link>https://inodes.org/2009/04/16/launchpad-ppa-builder-status</link><guid isPermaLink="true">https://inodes.org/2009/04/16/launchpad-ppa-builder-status</guid><description>I uploaded some packages to my Launchpad PPA today. Normally they would build in less than 20 minutes, however 2 hours later I was still waiting. All my…</description><pubDate>Thu, 16 Apr 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I uploaded some packages to my Launchpad PPA today. Normally they would build in less than 20 minutes; however, 2 hours later I was still waiting. All my googling for a build bot status page led to nothing useful. &lt;em&gt;wgrant&lt;/em&gt; on #launchpad pointed me at &lt;a href=&quot;https://launchpad.net/builders/&quot;&gt;https://launchpad.net/builders/&lt;/a&gt; which I thought I would note here to help others.&lt;/p&gt;
</content:encoded></item><item><title>Bzr keeps easing my pain</title><link>https://inodes.org/2009/04/03/bzr-keeps-easing-my-pain</link><guid isPermaLink="true">https://inodes.org/2009/04/03/bzr-keeps-easing-my-pain</guid><description>There has been a trend in the Annodex community lately to move towards using git rather than SVN for source code management. Now while I applaud the move to a…</description><pubDate>Fri, 03 Apr 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;There has been a &lt;a href=&quot;http://blog.kfish.org/2009/04/liboggplay-liboggz-libfishsound.html#links&quot;&gt;trend&lt;/a&gt; in the Annodex community lately to move towards using git rather than SVN for source code management. Now while I applaud the move to a DVCS, I hate having to use git. It is just extremely painful IMHO.&lt;/p&gt;
&lt;p&gt;I just shouldn&apos;t have to look up a man page or tutorial every time I want to use a tool. Something I don&apos;t have to do with any of CVS, SVN, bzr or mercurial. Git may have some benefits under the hood but I think its user interface still has a long way to go. I can totally understand how git is the perfect tool for the kernel community but I just don&apos;t think it makes a lot of sense for some other communities who have jumped on the bandwagon.&lt;/p&gt;
&lt;p&gt;The nice folks over in the &lt;a href=&quot;http://bazaar-vcs.org&quot;&gt;bazaar&lt;/a&gt; community have found a way to ease my pain. Some of you may be familiar with the &lt;a href=&quot;http://bazaar-vcs.org/BzrForeignBranches/Subversion&quot;&gt;bzr-svn&lt;/a&gt; plugin written by &lt;a href=&quot;http://jelmer.vernstok.nl/blog/index.php&quot;&gt;Jelmer Vernooij&lt;/a&gt;. Well he has recently expanded on the work started by Rob Collins and now we have a working &lt;a href=&quot;http://bazaar-vcs.org/BzrForeignBranches/Git&quot;&gt;bzr-git&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;johnf@zoot:~$ bzr branch git://git.xiph.org/liboggz.git
Branched 734 revision(s).
johnf@zoot:~$ cd liboggz.git/
johnf@zoot:~/liboggz.git$ bzr log -r -1
------------------------------------------------------------
revno: 734
git commit: ef3b0ebc1fdc299a09119df01fbd1c8867f90d8b
committer: Conrad Parker
timestamp: Wed 2009-04-01 00:59:36 +0000
message:
  Update the link to the theora spec
  Patch by Ralph Giles
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Joy!!! Many thanks to the wonderful guys in the bazaar community for making my life so much easier. All we need now is bzr-hg and I&apos;ll never have to leave my comfort zone :)&lt;/p&gt;
</content:encoded></item><item><title>OLPC Library - Trying to get XOs out of people wardrobes</title><link>https://inodes.org/2009/01/20/olpc-library-trying-to-get-xos-out-of-people-wardrobes</link><guid isPermaLink="true">https://inodes.org/2009/01/20/olpc-library-trying-to-get-xos-out-of-people-wardrobes</guid><description>XOs at LCA08 This time last year was a very exciting time at linux.conf.au 2008. The conference organisers had arranged for 100 XO laptops to be given away to…</description><pubDate>Tue, 20 Jan 2009 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;http://farm3.static.flickr.com/2408/2230576130_c3b1dbf081.jpg?v=0&quot; alt=&quot;XOs at LCA08&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This time last year was a very exciting time at linux.conf.au 2008. The conference organisers had arranged for 100 XO laptops to be &lt;a href=&quot;http://lwn.net/Articles/267113/&quot;&gt;given away&lt;/a&gt; to conference attendees.&lt;/p&gt;
&lt;p&gt;The XOs came with the following message attached.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Please do something wonderful with this XO, or inspire someone else and pass it on.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I was fortunate enough to get one of these XOs. I knew however that I wouldn&apos;t have any time in the foreseeable future to actually do anything cool with my XO. At the same time I didn&apos;t simply want to give it away to someone, since I knew at some stage I would actually want to do something with it.&lt;/p&gt;
&lt;p&gt;After chatting this over with a few other people I came up with the idea of putting together an OLPC Library. (It was originally going to be OLPC Bank but after chatting it over with &lt;a href=&quot;http://pipka.org&quot;&gt;Pia&lt;/a&gt; we decided that a Library seemed to fit the ideals of the project much better).&lt;/p&gt;
&lt;p&gt;So as part of the work I&apos;m doing with &lt;a href=&quot;http://olpcfriends.org&quot;&gt;OLPC Friends&lt;/a&gt; we have finally launched &lt;a href=&quot;http://www.olpclibrary.org&quot;&gt;OLPC Library&lt;/a&gt;. At the moment this is just a placeholder page, but hopefully soon we will have a site up to enable people to loan out OLPCs, whether that be to a developer wanting to write a new piece of software or port an application, or a community advocate putting on a demo at a school or trade show.&lt;/p&gt;
&lt;p&gt;If you are interested in helping out you can see the beginnings of the ideas for the website at the &lt;a href=&quot;http://project.olpclibrary.org/wiki/olpclibrary&quot;&gt;OLPC Library Project&lt;/a&gt; page and you can also join the &lt;a href=&quot;http://www.olpclibrary.org/mailman/listinfo&quot;&gt;mailing lists&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Mac OS X L2TP VPN to Cisco IOS</title><link>https://inodes.org/2008/12/16/mac-os-x-l2tp-vpn-to-cisco-ios</link><guid isPermaLink="true">https://inodes.org/2008/12/16/mac-os-x-l2tp-vpn-to-cisco-ios</guid><description>Just spent a couple of hours trying to get a Mac OS X laptop connected to a Cisco IOS IPSEC/L2TP server. The existing configuration worked fine for windows and…</description><pubDate>Tue, 16 Dec 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Just spent a couple of hours trying to get a Mac OS X laptop connected to a Cisco IOS IPSEC/L2TP server. The existing configuration worked fine for windows and linux servers but the Mac just refused to establish a connection. The Cisco logs contained the usual cryptic message.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Dec 16 16:53:47.955: IPSEC(validate_proposal_request): proposal part #1,
  (key eng. msg.) INBOUND local= 117.53.171.241, remote= 124.171.30.131,
    local_proxy= 117.53.171.241/255.255.255.255/17/1701 (type=1),
    remote_proxy= 124.171.30.131/255.255.255.255/17/1701 (type=1),
    protocol= ESP, transform= esp-3des esp-sha-hmac  (Transport-UDP),
    lifedur= 0s and 0kb,
    spi= 0x0(0), conn_id= 0, keysize= 0, flags= 0x800
Dec 16 16:53:47.955: Crypto mapdb : proxy_match
    src addr     : 117.53.171.241
    dst addr     : 124.171.30.131
    protocol     : 17
    src port     : 1701
    dst port     : 49561
Dec 16 16:53:47.955: map_db_find_best did not find matching map
Dec 16 16:53:47.955: IPSEC(validate_transform_proposal): no IPSEC cryptomap exists for local address A.B.C.D
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After much googling I discovered that the problem was &lt;strong&gt;dst port: 49561&lt;/strong&gt;. Unlike most other L2TP clients, the Mac uses a random source port for the L2TP part of the connection. Most others use 1701 for both source and destination.&lt;/p&gt;
&lt;p&gt;So relaxing this&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ip access-list extended L2TP
 permit udp host 117.53.171.241 eq 1701 any eq 1701
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;to this&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ip access-list extended L2TP
 permit udp host 117.53.171.241 eq 1701 any
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;solved the problem.&lt;/p&gt;
&lt;p&gt;It would now normally be the time for me to rant about how IPSEC has to be one of the most badly implemented protocols by all vendors and how getting two different implementations to talk to each other always takes a minimum of 2 hours even if you&apos;ve done it before but it would just be too exhausting.&lt;/p&gt;
</content:encoded></item><item><title>OLPC Wireless packet loss</title><link>https://inodes.org/2008/11/25/olpc-wireless-packet-loss</link><guid isPermaLink="true">https://inodes.org/2008/11/25/olpc-wireless-packet-loss</guid><description>Last week Pia asked me to help her out with her yet-to-be-named Australian OLPC deployment. The deployment involves two remote sites connected by an ADSL WAN…</description><pubDate>Tue, 25 Nov 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Last week Pia asked me to help her out with her yet-to-be-named &lt;a href=&quot;http://pipka.org/blog/2008/25/australias-first-olpc-trial-technical-documentation/&quot;&gt;Australian OLPC deployment&lt;/a&gt;. The deployment involves two remote sites connected by an ADSL WAN and one of the key applications across this WAN is the use of the &lt;a href=&quot;http://www.google.com.au/url?sa=t&amp;amp;source=web&amp;amp;ct=res&amp;amp;cd=1&amp;amp;url=http%3A%2F%2Fwiki.laptop.org%2Fgo%2FVideo_Chat&amp;amp;ei=8UwrSczkMM-_kAXdu-WWAw&amp;amp;usg=AFQjCNHKmyP1t8B-AVrbQcq1ZH9ONV87fA&amp;amp;sig2=eg754BRkRrFfQIq64gpacw&quot;&gt;VideoChat&lt;/a&gt; activity.&lt;/p&gt;
&lt;p&gt;The children at the site were experiencing audio blips and video artefacts, a sure sign of some sort of network related packet loss. With Pia at one site and myself at the other we did some testing to try and rule out the WAN itself as the problem and determine what the issue was.&lt;/p&gt;
&lt;p&gt;It quickly became obvious that the WAN wasn&apos;t at fault. We set up some pings with an interval of 1/10 of a second from the XOs to their respective default gateways and between the default gateways themselves. Pia and I then started counting out loud, which got us a couple of strange looks from children playing around us :). During the audio blips there was no loss across the WAN but there was loss to the default gateways.&lt;/p&gt;
&lt;p&gt;Now here comes the interesting part: the packet loss to the default gateways seemed to be synchronised. Remember, these are totally independent wireless networks sitting a couple of hundred kilometres apart. At this stage I was cooking up crazy theories about difficult to decode/encode video packets hitting both XOs at the same time, but I was fairly dubious.&lt;/p&gt;
&lt;p&gt;We did a little testing on XOs at the same site and while the problem didn&apos;t seem to manifest in as obvious a manner it was still there (I think the latency involved across the WAN exacerbated the symptoms).&lt;/p&gt;
&lt;p&gt;Back at home I did some further testing for a few days, trying all manner of different loads and writing various scripts to watch tcpdump output. To cut a long story short: eventually, while glancing at the XO during a burst of packet loss, I noticed the antenna light was flashing, which indicated the XO was disassociating from the network.&lt;/p&gt;
&lt;p&gt;A few minutes later I was able to verify that wireless scans were causing the problem and that it&apos;s easily reproducible by doing:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ping -i 0.1 GATEWAY_IP &amp;amp;

iwlist eth0 scan
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should notice a drop of about four packets.&lt;/p&gt;
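&lt;p&gt;If you want to put a number on the loss rather than counting out loud, you can parse ping&apos;s summary line. A rough sketch (the summary string below is a canned example rather than live output):&lt;/p&gt;

```shell
# Pull the packet-loss percentage out of ping's summary line
# (iputils ping output format assumed; this is a canned example
# rather than a live capture)
summary='20 packets transmitted, 16 received, 20% packet loss, time 1902ms'
loss=$(echo "$summary" | sed -n 's/.* \([0-9]*\)% packet loss.*/\1/p')
echo "$loss"
```

&lt;p&gt;In practice you&apos;d feed it the output of something like &lt;code&gt;ping -c 20 GATEWAY_IP&lt;/code&gt;.&lt;/p&gt;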
&lt;p&gt;I&apos;ve filed a bug on the OLPC bug tracker:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://dev.laptop.org/ticket/9048&quot;&gt;Ticket #9048 - Wireless scanning causes network pauses&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A temporary workaround is to get Network Manager to stop performing scans, although I assume this means the network view probably won&apos;t get updated. You can do this using wpa_cli:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;wpa_cli
&amp;gt; ap_scan 0
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Melbourne Cup Dip2</title><link>https://inodes.org/2008/11/04/melbourne-cup-dip2</link><guid isPermaLink="true">https://inodes.org/2008/11/04/melbourne-cup-dip2</guid><description>To quote Justaan:  This is what we call the Melbourne Cup Network Effect Melbourne Cup network effect It seems it really is the race that stops the nation.…</description><pubDate>Tue, 04 Nov 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;To quote Justaan:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This is what we call the Melbourne Cup Network Effect
&lt;img src=&quot;/blog/2008/mel_cup.png&quot; alt=&quot;Melbourne Cup network effect&quot; /&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It seems it really is the race that stops the nation. This is a graph of Bulletproof&apos;s outbound web traffic for today. That&apos;s a 37% drop in outbound traffic just after 3pm.&lt;/p&gt;
&lt;p&gt;Make sure you take note of my l33t gimp skills!&lt;/p&gt;
</content:encoded></item><item><title>Disabling &quot;Subscribe to feed&quot; in firefox</title><link>https://inodes.org/2008/07/06/disabling-subscribe-to-feed-in-firefox</link><guid isPermaLink="true">https://inodes.org/2008/07/06/disabling-subscribe-to-feed-in-firefox</guid><description>At Vquence we do a lot of crawling of various video hosting sites and where possible we like to use APIs or RSS feeds instead of page scraping. A semi-recent…</description><pubDate>Sun, 06 Jul 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At Vquence we do a lot of crawling of various video hosting sites and where possible we like to use APIs or RSS feeds instead of page scraping. A semi-recent feature of firefox is that when you click on an RSS link you get a &quot;Subscribe to this feed in your favourite reader&quot; header and then the formatted contents of the feed.&lt;/p&gt;
&lt;p&gt;This is really annoying if what you really want to see is the raw XML. Sure, I could hit CTRL-U to see the source, but that&apos;s an extra step and a whole other window I now have open. I couldn&apos;t find any way to disable this functionality, so I ended up writing a greasemonkey script called &lt;a href=&quot;http://inodes.org/johnf/gm/disable_subscribe_feed.js&quot;&gt;disable_subscribe_feed.js&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The meat of the script looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Pick three element ids that appear on the &quot;Subscribe to this feed&quot; page and probably don&apos;t appear anywhere else
var tag1 = document.getElementById(&apos;feedHeaderContainer&apos;);
var tag2 = document.getElementById(&apos;feedSubscriptionInfo2&apos;);
var tag3 = document.getElementById(&apos;feedSubscribeLine&apos;);

// If all three are present we&apos;re on the feed preview page, so show the raw source
if (tag1 &amp;amp;&amp;amp; tag2 &amp;amp;&amp;amp; tag3) {
    location.href = &apos;view-source:&apos; + document.location.href;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Basically it tries to detect the &quot;subscribe to feed&quot; page based on a couple of tag ids that exist on it and then performs a redirect to &lt;strong&gt;view-source:&lt;/strong&gt; for that page, which gives us nicely formatted XML.&lt;/p&gt;
</content:encoded></item><item><title>Getting your key into debian-maintainers using jetring</title><link>https://inodes.org/2008/07/05/getting-your-key-into-debian-maintainers-using-jetring</link><guid isPermaLink="true">https://inodes.org/2008/07/05/getting-your-key-into-debian-maintainers-using-jetring</guid><description>I&apos;m currently going through the process of becoming a Debian Maintainer so that I can upload Annodex packages without bugging one of the DDs I know. Thanks to…</description><pubDate>Sat, 05 Jul 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m currently going through the &lt;a href=&quot;http://wiki.debian.org/Maintainers&quot;&gt;process&lt;/a&gt; of becoming a Debian Maintainer so that I can upload &lt;a href=&quot;http://annodex.net&quot;&gt;Annodex&lt;/a&gt; packages without bugging one of the DDs I know. Thanks to &lt;a href=&quot;http://www.vergenet.net/~horms&quot;&gt;horms&lt;/a&gt; and &lt;a href=&quot;http://spacepants.org/blog&quot;&gt;jaq&lt;/a&gt; for their help thus far.&lt;/p&gt;
&lt;p&gt;As part of this process you need to file a bug against the &lt;a href=&quot;http://packages.debian.org/sid/debian-maintainers&quot;&gt;debian-maintainers&lt;/a&gt; package to get your key added. You need to do this using a piece of software called jetring. jetring allows you to create changesets for a gpg keyring (a binary format), which makes it easy for the maintainers to add and remove keys and to know exactly what&apos;s being added and removed. I couldn&apos;t find much information on how you actually do this, hence this post.&lt;/p&gt;
&lt;p&gt;To start with you need to grab the latest copy of the debian-maintainers keyring and extract the actual keyring from it. You can find the link to the latest version at &lt;a href=&quot;http://packages.debian.org/sid/debian-maintainers&quot;&gt;debian-maintainers&lt;/a&gt;, just click on &lt;strong&gt;all&lt;/strong&gt; to download it.&lt;/p&gt;
&lt;p&gt;Here is the process I followed, with comments along the way:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Download the latest debian-maintainers keyring
wget http://http.us.debian.org/debian/pool/main/d/debian-maintainers/debian-maintainers_1.38_all.deb
dpkg-deb -x *.deb keyring
mv keyring/usr/share/keyrings/debian-maintainers.gpg .
rm -rf keyring *.deb

# Create a copy of it and add your key to it
cp debian-maintainers.gpg debian-maintainers.gpg.orig
gpg --export johnf@inodes.org | 
    gpg --import --no-default-keyring --keyring `pwd`/debian-maintainers.gpg

# Create the changeset with jetring
jetring-gen debian-maintainers.gpg.orig debian-maintainers.gpg \
    &quot;Add John Ferlito &amp;lt;johnf @inodes.org&amp;gt; as a Debian Maintainer&quot;

# Check the changeset
jetring-review -d debian-maintainers.gpg.orig add-*
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you have completed the above you should have a file with something like the following contents:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Comment: Add John Ferlito &amp;lt;johnf @inodes.org&amp;gt; as a Debian Maintainer
Date: Sat, 05 Jul 2008 14:26:31 +1000
Action: import
Data: 
  -----BEGIN PGP PUBLIC KEY BLOCK-----
  Version: GnuPG v1.4.6 (GNU/Linux)
  
  mQGiBEd6MmQRBADF+BLVChN/AqKVXkrJFU2LtJoiCdYJ
  &amp;lt;snip&amp;gt;
  =SSNk
  -----END PGP PUBLIC KEY BLOCK-----
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now add something along the following lines to the top of the file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Recommended-By:
  Simon Horman &amp;lt;horms @verge,net.au&amp;gt;,
  Jamie Wilkinson &amp;lt;jaq @spacepants.org&amp;gt;
Agreement: http://lists.debian.org/debian-newmaint/2008/07/msg00010.html
Advocates:
  http://lists.debian.org/debian-newmaint/2008/07/msg00011.html,
  http://lists.debian.org/debian-newmaint/2008/07/msg00012.html
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The agreement line should be a URL to your signed email applying to become a DM and the advocates should be the URLs for the signed emails from your advocates.&lt;/p&gt;
&lt;p&gt;Once you&apos;ve done that, submit a bug with the file attached and, hopefully, some time later you&apos;ll be a DM.&lt;/p&gt;
</content:encoded></item><item><title>Firefox popup blocking</title><link>https://inodes.org/2008/06/20/firefox-popup-blocking</link><guid isPermaLink="true">https://inodes.org/2008/06/20/firefox-popup-blocking</guid><description>Wouldn&apos;t it make more sense for firefox to allow popups based on the destination site rather than on the source? For example most popups I click on are for…</description><pubDate>Thu, 19 Jun 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Wouldn&apos;t it make more sense for firefox to allow popups based on the destination site rather than on the source?&lt;/p&gt;
&lt;p&gt;For example, most popups I click on are for YouTube. Some of these are on random blogging sites, which means that to jump to the YouTube page for that video I have to allow popups for some random blog, which can then pop up as many ads as it wants.&lt;/p&gt;
&lt;p&gt;Wouldn&apos;t it make more sense to allow YouTube as a popup destination? It really comes down to the fact that I trust YouTube more than some random blog embedding YouTube videos.&lt;/p&gt;
&lt;p&gt;I haven&apos;t thought about this very much, so maybe there is a good reason why you wouldn&apos;t want this. If a few other people agree with me I&apos;ll go file a bug. Hmm, I wonder if you could write an extension to do it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; Peter &lt;a href=&quot;http://hardy.dropbear.id.au/blog/2008/why-destination-based-popup-blocking-fails&quot;&gt;raises a good point&lt;/a&gt; as to why this is a bad idea.&lt;/p&gt;
</content:encoded></item><item><title>bzr-svn and svn revisions</title><link>https://inodes.org/2008/06/14/bzr-svn-and-svn-revisions</link><guid isPermaLink="true">https://inodes.org/2008/06/14/bzr-svn-and-svn-revisions</guid><description>I was updating an svn branch today using bzr, thanks to bzr-svn, and I wanted to know what svn revision I was at. You can easily see the bzr revision by…</description><pubDate>Sat, 14 Jun 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I was updating an svn branch today using bzr, thanks to bzr-svn, and I wanted to know what svn revision I was at.&lt;/p&gt;
&lt;p&gt;You can easily see the bzr revision by running&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;johnf@zoot:~/trunk$ bzr revno
34
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But it gives no indication of where you are in SVN land. After a bit of rummaging around I discovered the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;johnf@zoot:~/trunk$ bzr version-info
revision-id: svn-v3-trunk0:90e61fa5-4541-0410-a685-e5b9dba3c764:trunk:74
date: 2008-05-29 19:24:44 +0000
build-date: 2008-06-14 19:10:59 +1000
revno: 34
branch-nick: trunk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;strong&gt;revision-id&lt;/strong&gt; field is the key: the trailing number indicates I&apos;m at SVN revision 74. Checking the branch via the web confirmed it.&lt;/p&gt;
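&lt;p&gt;If you want that number in a script, the trailing field can be pulled out mechanically. A small sketch, assuming the revision-id always ends in the SVN revision number as in the output above:&lt;/p&gt;

```shell
# Extract the SVN revision from a bzr-svn revision-id; the format is
# assumed to end in ":BRANCH:REVNO" as in the example output above
revid='svn-v3-trunk0:90e61fa5-4541-0410-a685-e5b9dba3c764:trunk:74'
svn_rev=$(echo "$revid" | awk -F: '{print $NF}')
echo "$svn_rev"
```

&lt;p&gt;In practice you&apos;d feed it the revision-id line from &lt;code&gt;bzr version-info&lt;/code&gt;; splitting on colons still gives the revision as the last field.&lt;/p&gt;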
</content:encoded></item><item><title>Hardy, exim4, SMTP-AUTH and LDAP... (or debian openssl causes pain)</title><link>https://inodes.org/2008/05/15/hardy-exim4-smtp-auth-and-ldap-or-debian-openssl-causes-pain</link><guid isPermaLink="true">https://inodes.org/2008/05/15/hardy-exim4-smtp-auth-and-ldap-or-debian-openssl-causes-pain</guid><description>As most people will know yesterday caused a lot of people a lot of pain as they ran around replacing SSH keys and SSL certificates. While running around fixing…</description><pubDate>Thu, 15 May 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As most people will know yesterday caused a lot of people a lot of pain as they ran around replacing SSH keys and SSL certificates.&lt;/p&gt;
&lt;p&gt;While running around fixing up all our servers, most of them in one fell swoop thanks to puppet, I realised two of our servers were still running Edgy. I figured it was high time I moved them to Hardy.&lt;/p&gt;
&lt;p&gt;Everything went fairly smoothly with some minor hiccups, except for SMTP-AUTH for exim. We use LDAP-backed SMTP-AUTH, and it just wouldn&apos;t work after the upgrade. The following error was appearing in the logs.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ldap_search failed: -7, Bad search filter
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This led to hours upon hours of Google searches, staring at debug messages and, at one stage, even resorting to GDB. Eventually, after staring at the debug messages harder, it twigged when I saw the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;perform_ldap_search: ldapdn URL = &quot;ldap:///ou=people,o=vquence?dn?sub?(uid=moo) &quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice the space just before the closing double quote. It seems that the new openldap libraries don&apos;t like errant spaces in your search filter.&lt;/p&gt;
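&lt;p&gt;The classic way to hunt for this sort of thing is to grep the relevant config for trailing whitespace. A quick sketch, run here against the offending lookup string itself:&lt;/p&gt;

```shell
# Count lines ending in a space; a clean config gives 0 matches
filter='ldap:///ou=people,o=vquence?dn?sub?(uid=moo) '
hits=$(printf '%s\n' "$filter" | grep -c ' $')
echo "$hits"
```

&lt;p&gt;Against a real exim config you&apos;d run something like &lt;code&gt;grep -n &apos; $&apos; /etc/exim4/exim4.conf.template&lt;/code&gt; (the path will vary with your setup).&lt;/p&gt;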
&lt;p&gt;Now to remember what I was doing yesterday morning before this whole derailment began.&lt;/p&gt;
&lt;p&gt;Note: Before anyone comments I will completely deny that during these upgrades I did anything as silly as &lt;strong&gt;rm -rf &lt;code&gt;dpkg -L random-font-package&lt;/code&gt;&lt;/strong&gt;, no matter what twitter says.&lt;/p&gt;
</content:encoded></item><item><title>Hardy and password locking</title><link>https://inodes.org/2008/04/29/hardy-and-password-locking</link><guid isPermaLink="true">https://inodes.org/2008/04/29/hardy-and-password-locking</guid><description>In gutsy the above would simply lock the account by placing an ! in front of the passwd in your /etc/shadow file.  In hardy it now also sets the account as…</description><pubDate>Tue, 29 Apr 2008 00:00:00 GMT</pubDate><content:encoded>&lt;pre&gt;&lt;code&gt;passwd -l root
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In gutsy the above would simply lock the account by placing an ! in front of the password hash in your /etc/shadow file.&lt;/p&gt;
&lt;p&gt;In hardy it now also sets the account as expired, meaning you can&apos;t SSH in even if you have SSH keys in place.&lt;/p&gt;
&lt;p&gt;Time to go and rebuild my EC2 AMI. :(&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; To get the old behaviour back you can do the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;passwd -l root
usermod -e &quot;&quot; root
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Sorting in Mutt</title><link>https://inodes.org/2008/04/14/sorting-in-mutt</link><guid isPermaLink="true">https://inodes.org/2008/04/14/sorting-in-mutt</guid><description>A couple of days ago I discovered the following mutt config option. This means you get the usual threading but that a thread is sorted by the date the last…</description><pubDate>Mon, 14 Apr 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A couple of days ago I discovered the following mutt config option.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;set sort = threads
set sort_aux = last-date-received
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This means you get the usual threading but that a thread is sorted by the date the last message in the thread was received. This keeps a thread which receives new mail at the bottom of your mailbox rather than up at the top.&lt;/p&gt;
&lt;p&gt;Another idea I found useful is to sort my spam mailbox by subject. Since a lot of SPAM has exactly the same subject, it makes it really easy to quickly scan the mailbox for HAM.&lt;/p&gt;
&lt;p&gt;You can easily do this with the following additions to your muttrc:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;folder-hook . set sort=threads
folder-hook spam set sort=subject
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You need to set the default because mutt will change the sort order when you change to the spam folder but won&apos;t change it back when you jump out of it.&lt;/p&gt;
</content:encoded></item><item><title>Firefox 3 and howtoforge.com</title><link>https://inodes.org/2008/03/19/firefox-3-and-howtoforgecom</link><guid isPermaLink="true">https://inodes.org/2008/03/19/firefox-3-and-howtoforgecom</guid><description>There is currently a bug in firefox 3 which causes it to  crash with an XError BadAloc when you go to any page hosted on howtoforge. This seems to be related…</description><pubDate>Wed, 19 Mar 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;There is currently a bug in firefox 3 which causes it to &lt;a href=&quot;https://bugzilla.mozilla.org/show_bug.cgi?id=402204&quot;&gt; crash with an XError BadAloc&lt;/a&gt; when you go to any page hosted on &lt;a href=&quot;http://howtoforge.com&quot;&gt;howtoforge&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This seems to be related to the image at &lt;a href=&quot;http://howtoforge.com/themes/htf_glass/images/bg_header_bottom_left15.png&quot;&gt;http://howtoforge.com/themes/htf_glass/images/bg_header_bottom_left15.png&lt;/a&gt;. I suggest you don&apos;t click on that link :)&lt;/p&gt;
&lt;p&gt;Apparently this image is 10,000 pixels wide. It looks like this is probably a GTK issue, since the same problem happened when I opened the image with evince!&lt;/p&gt;
&lt;p&gt;I tried writing a greasemonkey script to get around this problem but it loads too late to avert the crash. So iptables to the rescue.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Match packets leaving my laptop for howtoforge.com, invoke the string
# matcher (Boyer-Moore algorithm, checking only the first 70 bytes of
# each packet) and drop any request for the offending image
iptables -I OUTPUT -d howtoforge.com \
    -m string --algo bm --to 70 \
    --string &quot;GET /themes/htf_glass/images/bg_header_bottom_left15.png&quot; \
    -j DROP
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Puppet, Facts and Certificates</title><link>https://inodes.org/2008/03/13/puppet-facts-and-certificates</link><guid isPermaLink="true">https://inodes.org/2008/03/13/puppet-facts-and-certificates</guid><description>I&apos;m currently setting up Puppet at Vquence so that, among other things, we can deploy hosts into Amazon EC2 more easily.  To ensure a minimum setup time on a…</description><pubDate>Thu, 13 Mar 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m currently setting up &lt;a href=&quot;http://reductivelabs.com/projects/puppet/&quot;&gt;Puppet&lt;/a&gt; at Vquence so that, among other things, we can deploy hosts into Amazon EC2 more easily.&lt;/p&gt;
&lt;p&gt;To ensure a minimum setup time on a new server I wanted the setup to be as simple as&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;echo &apos;DAEMON_OPTS=&quot;-w 120 --fqdn newserver.vquence.com --server puppetmaster.vquence.com&quot;&apos; &amp;gt; /etc/default/puppet 

aptitude install puppet
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This means that the puppet client will use &lt;strong&gt;newserver.vquence.com&lt;/strong&gt; as the common name in the SSL certificate it creates for itself. On the puppet master the SSL cert name is then used to pick a node rather than the hostname reported by facter.&lt;/p&gt;
&lt;p&gt;This means that I don&apos;t need to worry about setting up /etc/hostname; better still, /etc/hostname can itself be managed by puppet.&lt;/p&gt;
&lt;p&gt;You can control this functionality on the puppet master by using the node_name option. From the docs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; # How the puppetmaster determines the client&apos;s identity 
 # and sets the &apos;hostname&apos; fact for use in the manifest, in particular 
 # for determining which &apos;node&apos; statement applies to the client. 
 # Possible values are &apos;cert&apos; (use the subject&apos;s CN in the client&apos;s 
 # certificate) and &apos;facter&apos; (use the hostname that the client 
 # reported in its facts)
 # The default value is &apos;cert&apos;.
 # node_name = cert
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The problem was that the &apos;hostname&apos; fact wasn&apos;t being set. It looks like there was a regression in SVN#1673 when some refactoring was performed.&lt;/p&gt;
&lt;p&gt;I&apos;ve filed bug &lt;a href=&quot;http://reductivelabs.com/trac/puppet/ticket/1133&quot;&gt;#1133&lt;/a&gt; and you can clone my git &lt;a href=&quot;http://inodes.org/~johnf/git/puppet&quot;&gt;repository.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I haven&apos;t included any tests in the patch as I&apos;m not sure how to. The master.rb test already tests this functionality but doesn&apos;t test that the facts object has actually been changed. I think a test on &lt;strong&gt;getconfig&lt;/strong&gt; is probably required but I&apos;m not sure how you would access the facts after calling it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; This patch is now in puppet as of 0.24.3.&lt;/p&gt;
</content:encoded></item><item><title>Amazon EC2 ruby gem and large user_data</title><link>https://inodes.org/2008/02/26/amazon-ec2-ruby-gem-and-large-user_data</link><guid isPermaLink="true">https://inodes.org/2008/02/26/amazon-ec2-ruby-gem-and-large-user_data</guid><description>When you create an instance in EC2 you can send Amazon some user data that is accessible by your instance. At Vquence we use this to send a script that gets…</description><pubDate>Tue, 26 Feb 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When you create an instance in EC2 you can send Amazon some user data that is accessible by your instance. At Vquence we use this to send a script that gets executed at boot up. This script contains some openvpn and puppet RSA keys, so it&apos;s approaching 10k in size.&lt;/p&gt;
&lt;p&gt;This works without any problems when using the java based command line tools. However I was getting the following error when using the &lt;a href=&quot;http://amazon-ec2.rubyforge.org/&quot;&gt;EC2 Ruby GEM&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/usr/lib/ruby/1.8/net/protocol.rb:133:in `sysread&apos;: Connection reset by peer (Errno::ECONNRESET)
	from /usr/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill&apos;
	from /usr/lib/ruby/1.8/timeout.rb:56:in `timeout&apos;
	from /usr/lib/ruby/1.8/timeout.rb:76:in `timeout&apos;
	from /usr/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill&apos;
	from /usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil&apos;
	from /usr/lib/ruby/1.8/net/protocol.rb:126:in `readline&apos;
	from /usr/lib/ruby/1.8/net/http.rb:2020:in `read_status_line&apos;
	from /usr/lib/ruby/1.8/net/http.rb:2009:in `read_new&apos;
	 ... 6 levels...
	from ./lib/ec2helpers.rb:43:in `start_instance&apos;
	from ./ec2-puppet:107
	from ./ec2-puppet:89:in `each_pair&apos;
	from ./ec2-puppet:89
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Doing some tcpdumping indicated that after receiving the request Amazon waits for a while and then sends a TCP RESET. Not very nice at all. My next step was to use ngrep to compare the output from the command line tools and the ruby gem. This got nowhere fast since the command line tools use the SOAP API while the ruby gem uses the Query API.&lt;/p&gt;
&lt;p&gt;What I did notice however is that while the command line tools performed a POST the ruby library performed a GET. At this stage I decided to test how much data I could send. So I started trying different user data sizes. The offending amount was around 7.8k, suspiciously close to exactly 8k.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;http://www.ietf.org/rfc/rfc2616.txt&quot;&gt;HTTP/1.1&lt;/a&gt; spec doesn&apos;t place an actual limit on the length but leaves it up to the server.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The HTTP protocol does not place any a priori limit on the length of
a URI. Servers MUST be able to handle the URI of any resource they
serve, and SHOULD be able to handle URIs of unbounded length if they
provide GET-based forms that could generate such URIs. A server
SHOULD return 414 (Request-URI Too Long) status if a URI is longer
than the server can handle (see section 10.4.15).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;  Note: Servers ought to be cautious about depending on URI lengths
  above 255 bytes, because some older client or proxy
  implementations might not properly support these lengths.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Apache for example limits this by default to 8190 bytes including the method and the protocol. You can change this using the &lt;a href=&quot;http://httpd.apache.org/docs/2.0/mod/core.html#limitrequestline&quot;&gt;LimitRequestLine&lt;/a&gt; directive.&lt;/p&gt;
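&lt;p&gt;If you control the Apache end, raising the limit is a one-line change; a config sketch (the value here is arbitrary, just comfortably above the default):&lt;/p&gt;

```apache
# httpd.conf: raise the maximum request line from the default 8190 bytes
LimitRequestLine 16380
```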
&lt;p&gt;I created a patch to modify the EC2 Gem to use a POST instead of a GET, which has no such limitation. You can find the git tree for it at &lt;a href=&quot;http://inodes.org/~johnf/git/amazon-ec2&quot;&gt;http://inodes.org/~johnf/git/amazon-ec2&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>EC2UI extension for Firefox 3</title><link>https://inodes.org/2008/02/25/ec2ui-extension-for-firefox-3</link><guid isPermaLink="true">https://inodes.org/2008/02/25/ec2ui-extension-for-firefox-3</guid><description>I&apos;ve been doing some work with Amazon EC2 the last few days. An invaluable tool is the EC2UI firefox extension that Amazon have written. This provides you with…</description><pubDate>Mon, 25 Feb 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been doing some work with Amazon EC2 the last few days. An invaluable tool is the &lt;a href=&quot;http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609&quot;&gt;EC2UI&lt;/a&gt; firefox extension that Amazon have written. This provides you with a simple GUI inside the firefox chrome which makes it really easy to manipulate your EC2 instances.&lt;/p&gt;
&lt;p&gt;A few weeks ago Hardy moved to using firefox 3. This meant, amongst other things, that the amazon plugin stopped working. The firefox guys have a webpage up that explains how to &lt;a href=&quot;http://developer.mozilla.org/en/docs/Updating_extensions_for_Firefox_3&quot;&gt;update extensions for Firefox 3&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The main problem was with changes to the password manager. You can find my changes on my &lt;a href=&quot;http://inodes.org/~johnf/bzr/elasticfox/ff3/&quot;&gt;bzr branch&lt;/a&gt; and a packaged up version of the extension &lt;a href=&quot;http://inodes.org/blog/wp-content/uploads/2008/03/ec2ui.xpi&quot;&gt;EC2UI for Firefox 3.0b4&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update: See comments below for new versions&lt;/strong&gt;&lt;/p&gt;
</content:encoded></item><item><title>Vim and spell checking</title><link>https://inodes.org/2008/02/08/vim-and-spell-checking</link><guid isPermaLink="true">https://inodes.org/2008/02/08/vim-and-spell-checking</guid><description>I just discovered Vim has spell checking. No more having to manually spell check in mutt with ispell when writing emails, Hurray!! In your .vimrc file simply…</description><pubDate>Fri, 08 Feb 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I just discovered Vim has spell checking. No more having to manually spell check in mutt with ispell when writing emails, Hurray!!&lt;/p&gt;
&lt;p&gt;In your .vimrc file simply add&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;setlocal spell spelllang=en_au
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; By default vim only installs en_us spell files. If you are running debian then there is a &lt;em&gt;vim-spellfiles&lt;/em&gt; package. There is an ubuntu &lt;a href=&quot;https://bugs.launchpad.net/ubuntu/+source/vim/+bug/66878&quot;&gt;bug&lt;/a&gt; to do something about this as well. Since I&apos;m using ubuntu I simply grabbed the &lt;em&gt;en&lt;/em&gt; directory from ftp://ftp.vim.org/pub/vim/runtime/spell/ and dumped it in &lt;em&gt;/usr/share/vim/vim71/spell&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Vim will now highlight words it thinks are misspelled. The magic incantations you will need are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;z= - Suggest alternatives for the word&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;zg - Add word to dictionary&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;zw - Remove word from dictionary&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>Squid and Rails caching</title><link>https://inodes.org/2008/01/15/squid-and-rails-caching</link><guid isPermaLink="true">https://inodes.org/2008/01/15/squid-and-rails-caching</guid><description>At Vquence our Rails setup looks something like this. (Who needs Inkscape when you have ASCII art) This infrastructure is hosted in the US and up until…</description><pubDate>Tue, 15 Jan 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At Vquence our Rails setup looks something like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;------------     ---------     ------------ 
| Internet |----&amp;gt;| Squid |----&amp;gt;| Mongrels | 
------------     ---------     ------------ 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(Who needs Inkscape when you have ASCII art)&lt;/p&gt;
&lt;p&gt;This infrastructure is hosted in the US and, up until recently, squid hadn&apos;t really been doing much of anything except sitting there.&lt;/p&gt;
&lt;p&gt;A few months ago, when we signed a contract with an Australian customer, we decided we needed to place a squid cache in Australia that would actually cache content. There were two reasons: firstly, the US is a long way away and the 300ms latency is really noticeable; secondly, some of our pages involving graphs have long statistical calculations which can take minutes to render. (OK, it&apos;s really because no one has had a chance to optimise them yet, but let&apos;s pretend that&apos;s not the case.) So we changed the above setup for Australian customers to look like the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;------------     ------------     ------------     ------------
| Internet |----&amp;gt;| Squid AU |----&amp;gt;| Squid US |----&amp;gt;| Mongrels |
------------     ------------     ------------     ------------
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We hand out urls like http://www.client.b2b.vquence.com/widget to Australian customers and the rails backend is smart enough to make sure all the URLs look similar (I&apos;ll blog about how I did that another time).&lt;/p&gt;
&lt;p&gt;Without much time to look into things properly I did some really nasty things on the AU squid cache to make sure it cached the pages.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;refresh_pattern /client/graph  1440 0% 1440 ignore-no-cache ignore-reload
refresh_pattern /client/static 1440 0% 1440 ignore-no-cache ignore-reload
refresh_pattern /client/video  1440 0% 1440 ignore-no-cache ignore-reload
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is evil and breaks a whole heap of RFCs, but it did the trick and got us out of a bind quickly.&lt;/p&gt;
&lt;p&gt;A few weeks ago I moved the production site to Rails 2.0, and around that time I noticed that the caching had stopped working. The client was no longer using our services as their campaign had finished, so it wasn&apos;t an urgent concern.&lt;/p&gt;
&lt;p&gt;It seems that Rails 2.0 goes one step further to ensure that caches don&apos;t cache content and instead of just sending&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Cache-Control: no-cache
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;it now sends&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Cache-Control: private, max-age=0, must-revalidate
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I tried adding &lt;strong&gt;ignore-private&lt;/strong&gt;, since if you&apos;re breaking some aspects of the RFC you may as well break a couple more, but squid still refused to cache the content. After struggling with this for a bit I decided that the universe was trying to tell me I should actually do things properly.&lt;/p&gt;
&lt;p&gt;So with squid set back to its defaults I went exploring how to accomplish this. Google wasn&apos;t all that helpful at first, since most Rails caching articles talk about caching to static files; most sites don&apos;t implement reverse proxying for caching. It turns out, however, that it&apos;s fairly simple. In the appropriate actions in your controllers, simply do the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class VideoController &amp;lt; ApplicationController
  def vquence
    # Lots of code here

    expires_in 8.hours, :private =&amp;gt; false
    render :template =&amp;gt; &quot;videos/vquence&quot;
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will send the following header and cache the page for 8 hours.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Cache-Control: max-age=28800
&lt;/code&gt;&lt;/pre&gt;
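The header can be sanity-checked without a browser. Here is a minimal Ruby sketch (mine, not part of the original setup) that parses a Cache-Control value and reports the freshness lifetime a shared cache like squid would use:

```ruby
# Minimal sketch (not part of the original setup): parse a Cache-Control
# header value and report the freshness lifetime a shared cache such as
# squid would assign to the response.
def freshness_lifetime(cache_control)
  directives = cache_control.split(',').map { |d| d.strip }
  # private and no-cache responses are not cacheable by a shared cache
  return 0 if directives.include?('private') || directives.include?('no-cache')
  max_age = directives.find { |d| d.start_with?('max-age=') }
  max_age ? max_age.split('=').last.to_i : 0
end

puts freshness_lifetime('max-age=28800')                        # 28800, i.e. 8 hours
puts freshness_lifetime('private, max-age=0, must-revalidate')  # 0, not cacheable
```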
&lt;p&gt;Now everything is much faster!!&lt;/p&gt;
</content:encoded></item><item><title>Google Analytics new tracking code</title><link>https://inodes.org/2008/01/14/google-analytics-new-tracking-code</link><guid isPermaLink="true">https://inodes.org/2008/01/14/google-analytics-new-tracking-code</guid><description>As I was setting up a new site in Google Analytics today I discovered that there is now new tracking code you are supposed to use. The old code is now…</description><pubDate>Mon, 14 Jan 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As I was setting up a new site in Google Analytics today I discovered that there is now new tracking code you are supposed to use. The old code is now unsupported. Your JavaScript snippet should now look something like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;var gaJsHost = ((&quot;https:&quot; == document.location.protocol) ? &quot;https://ssl.&quot; : &quot;http://www.&quot;);
document.write(unescape(&quot;%3Cscript src=&apos;&quot; + gaJsHost + &quot;google-analytics.com/ga.js&apos; type=&apos;text/javascript&apos;%3E%3C/script%3E&quot;));

var pageTracker = _gat._getTracker(&quot;PUT_YOUR_TRACKING_ID_HERE&quot;);
pageTracker._initData();
pageTracker._trackPageview();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This has been a community service announcement. :)&lt;/p&gt;
</content:encoded></item><item><title>Rails, ActiveRecord, MySQL, GUIDs and the rename_column bug</title><link>https://inodes.org/2008/01/11/rails-activerecord-mysql-guids-and-the-rename_column-bug</link><guid isPermaLink="true">https://inodes.org/2008/01/11/rails-activerecord-mysql-guids-and-the-rename_column-bug</guid><description>Since I wasted over 4 hours of my life today working my way through this problem I feel the need to share. Since it seems to be the in thing in the Web 2.0…</description><pubDate>Fri, 11 Jan 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Since I wasted over 4 hours of my life today working my way through this problem I feel the need to share.&lt;/p&gt;
&lt;p&gt;Since it seems to be the in thing in the Web 2.0 space, just to be cool, we use &lt;a href=&quot;http://en.wikipedia.org/wiki/Globally_Unique_Identifier&quot;&gt;GUIDs&lt;/a&gt; to identify different objects in our URLs at &lt;a href=&quot;http://vquence.com&quot;&gt;Vquence&lt;/a&gt;. For example my randomly created vquence on &lt;a href=&quot;http://www.vqslices.com/vq/cDuIhGWb8r3lDxaby-aaea&quot;&gt;Rails&lt;/a&gt; has a GUID of&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cDuIhGWb8r3lDxaby-aaea
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Andy Singleton has written a rails plugin called, funnily enough, &lt;a href=&quot;http://tools.assembla.com/breakout/wiki/FreeSoftware&quot;&gt;guid&lt;/a&gt;. This allows you to do the following in your model.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class Vquence &amp;lt; ActiveRecord::Base
  usesguid :column =&amp;gt; &apos;guid&apos;
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you do this you will automatically get GUID looking identifiers in the db and your application. The &lt;strong&gt;guid&lt;/strong&gt; column in the DB gets mapped to &lt;strong&gt;Vquence.id&lt;/strong&gt; so you can do things like&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Vquence.find(&apos;cDuIhGWb8r3lDxaby-aaea&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We used to use &lt;a href=&quot;http://lucene.apache.org/&quot;&gt;Lucene&lt;/a&gt; as our search index; we now use &lt;a href=&quot;http://www.sphinxsearch.com/&quot;&gt;Sphinx&lt;/a&gt;. Sphinx requires that you have an integer id for each document in your index. This is to make the SQL queries that build the index much faster. The dumb way to create your index is to use queries like the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT * FROM videos LIMIT 0,10000
SELECT * FROM videos LIMIT 10000,10000
...
SELECT * FROM videos LIMIT 990000,10000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I know this as it&apos;s what we originally used with Lucene. This works fine until you reach about 1,000,000 rows. The problem is that, since there is no implicit ordering or range in the above query, for the final query MySQL needs to work out what the first 990,000 rows are and then return the next 10,000.&lt;/p&gt;
&lt;p&gt;A much better way to do it is the following&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SELECT * FROM videos WHERE integer_id &amp;gt;= 1 and integer_id &amp;lt;= 10000
SELECT * FROM videos WHERE integer_id &amp;gt;= 10001 and integer_id &amp;lt;= 20000
...
SELECT * FROM videos WHERE integer_id &amp;gt;= 990001 and integer_id &amp;lt;= 1000000
&lt;/code&gt;&lt;/pre&gt;
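The range-based batches above are easy to generate programmatically. A minimal Ruby sketch (mine, not from the post; BETWEEN is equivalent to the pair of comparisons):

```ruby
# Build range-based batch queries like the ones above, assuming ids run
# from 1 to max_id. Table and column names are the ones from the post;
# BETWEEN is equivalent to the pair of comparisons shown.
def batch_queries(max_id, batch_size)
  (1..max_id).step(batch_size).map do |start|
    finish = [start + batch_size - 1, max_id].min
    "SELECT * FROM videos WHERE integer_id BETWEEN #{start} AND #{finish}"
  end
end

queries = batch_queries(1_000_000, 10_000)
puts queries.first  # SELECT * FROM videos WHERE integer_id BETWEEN 1 AND 10000
puts queries.size   # 100
```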
&lt;p&gt;This is fast as long as &lt;strong&gt;integer_id&lt;/strong&gt; is indexed.&lt;/p&gt;
&lt;p&gt;So to accommodate this in Rails we began using migrations like the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class Videos &amp;lt; ActiveRecord::Migration
  def self.up
    create_table :videos do |t|
      t.column :guid, :string, :limit =&amp;gt; 22, :null =&amp;gt; false
      ...

      t.timestamps
    end
    add_index :videos, :guid, :unique =&amp;gt; true
    rename_column :videos, :id, :integer_id
  end

  def self.down
    drop_table :videos
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This was all done months ago and the repercussions didn&apos;t rear their ugly head until today. Previously everything in the videos table had been created by our external crawler and Rails never needed to insert into the table. Today I wrote some code that inserted into the videos table and everything broke horribly.&lt;/p&gt;
&lt;p&gt;The problem is that ActiveRecord can still see the &lt;strong&gt;integer_id&lt;/strong&gt; field and tries to insert a 0 value into it. It isn&apos;t clever enough to realise that it is an auto increment field and to leave it alone. After some help from &lt;em&gt;bitsweat&lt;/em&gt; on #RoR I implemented a dirty hack to hide the &lt;strong&gt;integer_id&lt;/strong&gt; column from ActiveRecord. Thanks to Ruby, overriding the ActiveRecord internals is really easy and I added the following to our guid plugin.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# HACK (JF) - This is too evil to even blog about
# When we use guid as a primary key we usually rename the original &apos;id&apos;
# field to &apos;integer_id&apos;. We need to hide this from rails so it doesn&apos;t
# mess with it. WARNING: This means once you use usesguid anywhere you can
# never access a column in any table anywhere called &apos;integer_id&apos;

class ActiveRecord::Base
  private
    alias :original_attributes_with_quotes :attributes_with_quotes

    def attributes_with_quotes(include_primary_key = true, include_readonly_attributes = true)
      quoted = original_attributes_with_quotes(include_primary_key, include_readonly_attributes)
      quoted.delete(&apos;integer_id&apos;)
      quoted
    end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So this worked like a charm and after 4 hours I thought my pain was over, but then I tried to add a second row to my test database. This resulted in the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; Mysql::Error: Duplicate entry &apos;0&apos; for key 1: INSERT INTO `videos` (`updated_at`, `sort_order`, `guid`, `description`,
 `user_id`, `created_at`) VALUES(&apos;2008-01-11 16:45:05&apos;, NULL, &apos;bcOMPqWaGr3k5CabxfFyeK&apos;, &apos;&apos;, 5, &apos;2008-01-11 16:44:28&apos;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I ran the same SQL with the MySQL client and got the same error. I then looked at the table and saw the following&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mysql&amp;gt; show columns from moo;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| integer_id | int(11)     | NO   | PRI | 0       |       |
| guid       | varchar(22) | NO   | UNI |         |       |
+------------+-------------+------+-----+---------+-------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What I expected to see was&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mysql&amp;gt; show columns from moo;
+------------+-------------+------+-----+---------+----------------+
| Field      | Type        | Null | Key | Default | Extra          |
+------------+-------------+------+-----+---------+----------------+
| integer_id | int(11)     | NO   | PRI | NULL    | auto_increment |
| guid       | varchar(22) | NO   | UNI |         |                |
+------------+-------------+------+-----+---------+----------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The difference is that when the column was renamed it seems to have lost its auto increment and NOT NULL properties. Some investigation showed that the SQL being used to rename the column was&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ALTER TABLE `videos` CHANGE `id` `integer_id` int(11)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;when it should be&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ALTER TABLE `videos` CHANGE `id` `integer_id` int(11) NOT NULL AUTO_INCREMENT
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It seems that this is already filed as a &lt;a href=&quot;http://dev.rubyonrails.org/ticket/6999&quot;&gt;bug&lt;/a&gt; on the rails site, including a patch.&lt;/p&gt;
&lt;p&gt;Funnily enough that bug is owned by &lt;strong&gt;bitsweat&lt;/strong&gt;. It seems he&apos;s managed to help me out twice in one day :) It doesn&apos;t seem to have made it into Rails 2.0 though, so until then be careful about renaming columns using migrations.&lt;/p&gt;
</content:encoded></item><item><title>Elastix and VMware</title><link>https://inodes.org/2008/01/05/elastix-and-vmware</link><guid isPermaLink="true">https://inodes.org/2008/01/05/elastix-and-vmware</guid><description>Took the plunge today to update my asterisk server. I&apos;ve been using asterisk for about 5 years now and am pretty adept at manipulating its cryptic…</description><pubDate>Sat, 05 Jan 2008 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Took the plunge today to update my asterisk server. I&apos;ve been using &lt;a href=&quot;http://www.asterisk.org&quot;&gt;asterisk&lt;/a&gt; for about 5 years now and am pretty adept at manipulating its cryptic configuration files, but I wanted to move to more of an appliance. I decided to give &lt;a href=&quot;http://www.elastix.org&quot;&gt;Elastix&lt;/a&gt; a try.&lt;/p&gt;
&lt;p&gt;These days I virtualise all my boxes on a VMware Server environment. I got Elastix installed with no problems but then I wanted to get VMware Tools installed. This gives you better network drivers and makes sure your clock stays in sync.&lt;/p&gt;
&lt;p&gt;Since this requires you to compile some kernel modules you need to have the &lt;strong&gt;kernel-devel&lt;/strong&gt; package installed so you can compile against your current kernel. This would normally be a simple matter of&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;yum install kernel-devel
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However this seemed to do nothing. After a fair bit of investigation I worked out that Elastix ship their own kernel and modules for some asterisk-specific hardware like zaptel and rhino. To make sure you don&apos;t use the CentOS kernel they exclude that package from the CentOS repository.&lt;/p&gt;
&lt;p&gt;If you don&apos;t particularly need the Elastix kernel (I don&apos;t since this system will be pure VoIP) you can re-enable the CentOS kernel packages by editing &lt;em&gt;/etc/yum.repos.d/CentOS-Base.repo&lt;/em&gt; and commenting out all the lines that look like&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;exclude=kernel*
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; So it seems that this means that I won&apos;t get the ztdummy module. This module uses the USB chipset to provide timing for some asterisk related things like the multi user conference module. Since I don&apos;t really use this at the moment it&apos;s not a big deal, but I may have to roll my own kernel RPMs later down the track.&lt;/p&gt;
</content:encoded></item><item><title>linux.conf.au 2008 selling out</title><link>https://inodes.org/2007/12/21/linuxconfau-2008-selling-out</link><guid isPermaLink="true">https://inodes.org/2007/12/21/linuxconfau-2008-selling-out</guid><description>It&apos;s 15:11 and there are only 11 tickets left for linux.conf.au. WARNING: If you have registered but haven&apos;t gotten around to paying yet then you are going…</description><pubDate>Fri, 21 Dec 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It&apos;s 15:11 and there are only 11 tickets left for linux.conf.au.&lt;/p&gt;
&lt;p&gt;WARNING: If you have registered but haven&apos;t gotten around to paying yet then you are going to miss out.&lt;/p&gt;
&lt;p&gt;So hop to it. Otherwise you are going to be a sad panda.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;http://farm1.static.flickr.com/213/492819160_b19547643b_m.jpg&quot; alt=&quot;Sad Panda&quot; /&gt;&lt;/p&gt;
</content:encoded></item><item><title>Sad day for open standards</title><link>https://inodes.org/2007/12/12/sad-day-for-open-standards</link><guid isPermaLink="true">https://inodes.org/2007/12/12/sad-day-for-open-standards</guid><description>It&apos;s a sad day when one of the most open of standards bodies bows to corporate pressure.   Removal of Ogg Vorbis and Theora from HTML5: an outrageous disaster  …</description><pubDate>Tue, 11 Dec 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It&apos;s a sad day when one of the most open of standards bodies bows to corporate pressure.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://rudd-o.com/archives/2007/12/11/removal-of-ogg-vorbis-and-theora-from-html5-an-outrageous-disaster&quot;&gt;Removal of Ogg Vorbis and Theora from HTML5: an outrageous disaster&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://html5.org/tools/web-apps-tracker?from=1142&amp;amp;to=1143&quot;&gt;Lift the cat who was amongst the pigeons up and put him back on his pedestal for now. (remove requirement on ogg for now)&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;http://www.w3.org/2007/08/video/&quot;&gt;W3C Video on the Web Workshop&lt;/a&gt; starts tomorrow. Hopefully &lt;a href=&quot;http://blog.gingertech.net&quot;&gt;Silvia&lt;/a&gt; can help kick some heads back into shape.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; Possibly with a silver lining though, check out K&apos;s take on the change &lt;a href=&quot;http://blog.kfish.org/2007/12/html5-for-free-media-today-on-whatwg.html&quot;&gt; HTML5 for free media: Today on #whatwg&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Energy Australia social engineering attack</title><link>https://inodes.org/2007/10/25/energy-australia-social-engineering-attack</link><guid isPermaLink="true">https://inodes.org/2007/10/25/energy-australia-social-engineering-attack</guid><description>In the middle of a power outage at the moment, so I called Energy Australia to see what was going on. Me: Hi, I&apos;m in Ryde and have no power. EA: Sure we are…</description><pubDate>Thu, 25 Oct 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In the middle of a power outage at the moment, so I called Energy Australia to see what was going on.&lt;/p&gt;
&lt;p&gt;Me: Hi, I&apos;m in Ryde and have no power.
EA: Sure we are having a problem in that area. What&apos;s your address?
Me: 54 Blah St.
EA: Thats under the name of Ferlito?
Me: Yes.&lt;/p&gt;
&lt;p&gt;So if you need to find out who lives somewhere really easily just call Energy Australia and claim you&apos;re having a power outage. Probably won&apos;t work every time but it will some of the time.&lt;/p&gt;
&lt;p&gt;Oh yeah no power till this afternoon. Bummer!&lt;/p&gt;
</content:encoded></item><item><title>Google Reader Subscribers</title><link>https://inodes.org/2007/07/26/google-reader-subscribers</link><guid isPermaLink="true">https://inodes.org/2007/07/26/google-reader-subscribers</guid><description>I love Google Reader and have been using it for about 4 months to manage the 189 RSS feeds I currently care about. (Here are my shared items for anyone that is…</description><pubDate>Thu, 26 Jul 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I love &lt;a href=&quot;http://www.google.com/reader&quot;&gt;Google Reader&lt;/a&gt; and have been using it for about 4 months to manage the 189 RSS feeds I currently care about. (Here are my &lt;a href=&quot;http://www.google.com/reader/shared/07037136184763497713&quot;&gt;shared items&lt;/a&gt; for anyone that is interested.)&lt;/p&gt;
&lt;p&gt;While browsing the &lt;a href=&quot;http://www.google.com/help/reader/publishers.html&quot;&gt;Google Reader FAQ&lt;/a&gt; looking for how to get vquences embedded properly I came across the following.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Does Google Reader report subscriber counts?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Yes, Google Reader reports subscriber counts when we crawl feeds (within the &quot;User-Agent:&quot; header in HTTP). Currently, these counts include users of both Reader and Google; over time they&apos;ll also include subscriptions from other Google properties.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Here is an example from my logs&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;209.85.238.4 - - [26/Jul/2007:07:31:54 +1000] &quot;GET /blog/feed/atom/ HTTP/1.1&quot; 304 0 &quot;-&quot; &quot;Feedfetcher-Google; (+&lt;a href=&quot;http://www.google.com/feedfetcher.html&quot;&gt;http://www.google.com/feedfetcher.html&lt;/a&gt;; 5 subscribers; feed-id=15287401989222975041)&quot;&lt;/strong&gt;&lt;/p&gt;
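Assuming the User-Agent always follows the shape in that log line, pulling the subscriber count out is a one-regex job. A small Ruby sketch of mine (not anything Google publishes):

```ruby
# Pull the subscriber count out of a Feedfetcher User-Agent string.
# Assumes the "N subscribers" shape shown in the log line above.
UA = 'Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; ' \
     '5 subscribers; feed-id=15287401989222975041)'

def subscriber_count(user_agent)
  m = user_agent.match(/(\d+) subscribers?/)
  m ? m[1].to_i : 0
end

puts subscriber_count(UA)  # 5
```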
&lt;p&gt;This is something I&apos;ve always wanted to know. The stats aren&apos;t particularly interesting but they do point out an optimisation Google could make.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;/blog/feed/atom/ - 5 subscribers&lt;/li&gt;
&lt;li&gt;/blog/feed - 2 subscribers&lt;/li&gt;
&lt;li&gt;/blog/feed/ - 1 subscriber&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;i.e. the last two are identical (note the only difference is the trailing slash) and they are all pointing at the same blog. It would be cool if Google worked out that the above are all exactly the same and only probed once.&lt;/p&gt;
&lt;p&gt;Even more interestingly, Google is probing these URLs at different frequencies.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;/blog/feed/atom/ - Every hour&lt;/li&gt;
&lt;li&gt;/blog/feed - Every hour&lt;/li&gt;
&lt;li&gt;/blog/feed/ - Every 3 hours&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Looks like it might be related to the number of subscribers; it would be interesting to see other people&apos;s data here.&lt;/p&gt;
</content:encoded></item><item><title>Out of the wilderness</title><link>https://inodes.org/2007/07/17/out-of-the-wilderness</link><guid isPermaLink="true">https://inodes.org/2007/07/17/out-of-the-wilderness</guid><description>I took another step out of the wilderness today... Those who have know me for a while will know that up until recently I exclusively used linux virtual…</description><pubDate>Tue, 17 Jul 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I took another step out of the wilderness today...&lt;/p&gt;
&lt;p&gt;Those who have known me for a while will know that up until recently I exclusively used linux virtual consoles (i.e. what CTRL-ALT-F1 gives you from within X) to do all my work except for browsing the web. Recently I stopped using them altogether and moved totally into the land of X and started using gnome-terminal instead.&lt;/p&gt;
&lt;p&gt;Well I suppose it wasn&apos;t that big a step as my processes haven&apos;t changed that much. I simply have a gnome-terminal with tabs full screen in the monitor on my left and a full screen firefox in the monitor on my right :)&lt;/p&gt;
&lt;p&gt;I took another step today moving from centericq to pidgin for my IM needs. I&apos;m quite liking it so far especially some of the pop up notification plugins since I can follow channel conversations without switching away from what I&apos;m doing.&lt;/p&gt;
&lt;p&gt;Now does anyone know if there is a plugin to sync all my configuration settings between different machines? That was the handiest thing about running centericq from inside a screen.&lt;/p&gt;
&lt;p&gt;But have no fear I&apos;m still using mutt for mail and doubt that will ever change.&lt;/p&gt;
</content:encoded></item><item><title>SFD2006 - Return to sender</title><link>https://inodes.org/2007/07/11/sfd2006-return-to-sender</link><guid isPermaLink="true">https://inodes.org/2007/07/11/sfd2006-return-to-sender</guid><description>Pia posting about Software freedom day, software freedom day online shop is up, reminded me about something I&apos;ve been meaning to post for a while. When you…</description><pubDate>Tue, 10 Jul 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Pia posting about Software freedom day, &lt;a href=&quot;http://pipka.org/blog/2007/10/software-freedom-day-online-shop-is-up/&quot;&gt;software freedom day online shop is up&lt;/a&gt;, reminded me about something I&apos;ve been meaning to post for a while.&lt;/p&gt;
&lt;p&gt;When you send in the address to get your team&apos;s t-shirts and goodies, make sure you get it right!&lt;/p&gt;
&lt;p&gt;Last year I helped pack all the goodies that we sent overseas, this was sometime in August if I remember correctly. We needed to put a return address on the packages so I offered the use of Bulletproof&apos;s address.&lt;/p&gt;
&lt;p&gt;6 months later the following turned up on our doorstep.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/blog/2007/photo-0004.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/blog/2007/photo-0005.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/blog/2007/photo-0006.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Notice the use of hemp rope and wax seal. This box has been through a lot!&lt;/p&gt;
</content:encoded></item><item><title>DSPAM case sensitivity</title><link>https://inodes.org/2007/05/11/dspam-case-sensitivity</link><guid isPermaLink="true">https://inodes.org/2007/05/11/dspam-case-sensitivity</guid><description>I use DSPAM to handle my spam checking and have been quite happy with it as it normally delivers 99.9% hit rate. In the last few weeks the amount of spam in my…</description><pubDate>Fri, 11 May 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I use &lt;a href=&quot;http://www.nuclearelephant.com&quot;&gt;DSPAM&lt;/a&gt; to handle my spam checking and have been quite happy with it as it normally delivers &amp;gt;99.9% hit rate.&lt;/p&gt;
&lt;p&gt;In the last few weeks the amount of spam in my INBOX had been getting progressively worse to the point where I noticed no spam whatsoever was making its way into my spam folder.&lt;/p&gt;
&lt;p&gt;Looking through my logs I eventually found the following&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;May 10 10:03:03 fozzie dspam[30287]: Unable to find a valid signature. Aborting.
May 10 10:03:03 fozzie dspam[30287]: process_message returned error -5.  dropping message.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I process my spam by using a mutt macro which bounces emails to johnf-spam at inodes dot org. This then passes the email to DSPAM which reclassifies it. It does this by looking at a header it added to the email.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;X-DSPAM-Signature: 464400d0223642194712985
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However these were appearing in my INBOX as&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;X-Dspam-Signature: 464400d0223642194712985
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I use procmail and a perl script to pre-process some of my email and it uses &lt;strong&gt;Mail::Internet&lt;/strong&gt; which in turn uses &lt;strong&gt;Mail::Header&lt;/strong&gt;. It bestows this piece of wisdom upon the world.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# attempt to change the case of a tag to that required by RFC822. That
# being all characters are lowercase except the first of each word. Also
# if the word is an `acronym&apos; then all characters are uppercase. We decide
# a word is an acronym if it does not contain a vowel.

sub _tag_case
{
&lt;/code&gt;&lt;/pre&gt;
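That heuristic is easy to reproduce. Here is a Ruby re-implementation of the documented rule (a sketch of mine, not the actual Mail::Header code), which shows exactly how the header gets mangled:

```ruby
# Re-implementation in Ruby of the rule Mail::Header documents above:
# capitalise each hyphen-separated word, but uppercase a word entirely
# if it contains no vowel (an "acronym"). A sketch, not the Perl code.
def tag_case(tag)
  tag.split('-').map do |word|
    word =~ /[aeiou]/i ? word.capitalize : word.upcase
  end.join('-')
end

# "dspam" contains the vowel "a", so it is not treated as an acronym
puts tag_case('X-DSPAM-Signature')  # X-Dspam-Signature
```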
&lt;p&gt;Now I can&apos;t see where in &lt;a href=&quot;http://www.ietf.org/rfc/rfc0822.txt&quot;&gt;RFC822&lt;/a&gt; it specifies this but in section B.2 it does specify&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Upper and lower case are not distinguished when comparing field-names.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So on that basis I chose to blame DSPAM and applied the following diff&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;diff -ur dspam-3.8.0.orig/src/dspam.c dspam-3.8.0/src/dspam.c
--- dspam-3.8.0.orig/src/dspam.c        2006-12-13 02:33:45.000000000 +1100
+++ dspam-3.8.0/src/dspam.c     2007-05-11 16:25:11.000000000 +1000
@@ -2165,7 +2165,7 @@
           while(node_header != NULL) {
             head = (ds_header_t) node_header-&amp;gt;ptr;
             if (head-&amp;gt;heading &amp;amp;&amp;amp; 
-                !strcmp(head-&amp;gt;heading, &quot;X-DSPAM-Signature&quot;)) {
+                !strcasecmp(head-&amp;gt;heading, &quot;X-DSPAM-Signature&quot;)) {
               if (!strncmp(head-&amp;gt;data, SIGNATURE_BEGIN, 
                            strlen(SIGNATURE_BEGIN))) 
               {
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now to work out the best way to push that upstream.&lt;/p&gt;
</content:encoded></item><item><title>CeBIT - Open Source and Business Communities</title><link>https://inodes.org/2007/05/10/cebit-open-source-and-business-communities</link><guid isPermaLink="true">https://inodes.org/2007/05/10/cebit-open-source-and-business-communities</guid><description>At CeBIT last week I participated in a panel discussion on Open Source and Business Communities as part of OpenCeBIT. Also on the panel were Simon Phipps and…</description><pubDate>Thu, 10 May 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At CeBIT last week I participated in a panel discussion on Open Source and Business Communities as part of OpenCeBIT. Also on the panel were &lt;a href=&quot;http://www.webmink.net/&quot;&gt;Simon Phipps&lt;/a&gt; and &lt;a href=&quot;http://jon.oxer.com.au&quot;&gt;Jon Oxer&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Simon Phipps created a podcast of the event which you can find here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://mediacast.sun.com/share/webmink/Linux%20Australia%20Panel%20at%20CeBIT.MP3&quot;&gt;MP3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://mediacast.sun.com/share/webmink/Linux%20Australia%20Panel%20at%20CeBIT.ogg&quot;&gt;Ogg&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://click.linksynergy.com/fs-bin/stat?id=kuK***sqbac&amp;amp;offerid=78941&amp;amp;type=3&amp;amp;subid=0&amp;amp;tmpid=1826&amp;amp;RD_PARM1=http%253A%252F%252Fphobos.apple.com%252FWebObjects%252FMZStore.woa%252Fwa%252FviewPodcast%253Fid%253D218534869%2526partnerId%253D30&quot;&gt;iTunes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can find the slides I used for reference during my presentation &lt;a href=&quot;http://inodes.org/blog/presentations/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Ubuntu, VLANs and Bridges</title><link>https://inodes.org/2007/04/30/ubuntu-vlans-and-bridges</link><guid isPermaLink="true">https://inodes.org/2007/04/30/ubuntu-vlans-and-bridges</guid><description>Bridge and VLAN support has improved dramatically under Ubuntu and probably Debian as well since I last looked into it. Once upon a time to create a bridge…</description><pubDate>Mon, 30 Apr 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Bridge and VLAN support has improved dramatically under Ubuntu and probably Debian as well since I last looked into it. Once upon a time, to create a bridge linked to a VLAN interface, you would have to do horrible things like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;auto eth0
iface eth0 inet manual
    pre-up /sbin/vconfig set_name_type VLAN_PLUS_VID_NO_PAD || true

auto vlan7
iface vlan7 inet manual
    pre-up /sbin/vconfig add eth0 7 || true
    post-down /sbin/vconfig rem vlan7 || true

auto br0
iface br0 inet static
    pre-up brctl addbr br0
    pre-up brctl addif br0 vlan7
    post-down brctl delbr br0
    address 10.38.38.1
    netmask 255.255.255.0
    network 10.38.38.0
    broadcast 10.38.38.255
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the bridge-utils and vlan packages provide hooks into the ifup and ifdown commands so you can simply do&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;auto br-vlan4
iface br-vlan4 inet static
    address 10.38.38.1
    netmask 255.255.255.0
    network 10.38.38.0
    broadcast 10.38.38.255
    vlan-raw-device eth1
    bridge_ports vlan4
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp off
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Which will automagically&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Bring up &lt;strong&gt;eth1&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;strong&gt;vlan4&lt;/strong&gt; bound to the &lt;strong&gt;eth1&lt;/strong&gt; interface&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bring up &lt;strong&gt;vlan4&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create &lt;strong&gt;br-vlan4&lt;/strong&gt; with &lt;strong&gt;vlan4&lt;/strong&gt; attached&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Give &lt;strong&gt;eth1&lt;/strong&gt; the same HW address as &lt;strong&gt;br-vlan4&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Bring up &lt;strong&gt;br-vlan4&lt;/strong&gt; with the IP address&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Nifty!&lt;/p&gt;
</content:encoded></item><item><title>Mongrel, rails and the theory of relativity</title><link>https://inodes.org/2007/04/04/mongrel-rails-and-the-theory-of-relativity</link><guid isPermaLink="true">https://inodes.org/2007/04/04/mongrel-rails-and-the-theory-of-relativity</guid><description>Summary (E = mc&amp;sup2;) When using mongrel for rails and you want to deploy an app under /otherurl then use in config/environments/production.rb instead of…</description><pubDate>Wed, 04 Apr 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Summary (E = mc²)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When using mongrel for rails and you want to deploy an app under /other_url, use&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ActionController::AbstractRequest.relative_url_root = &quot;/other_url&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;in config/environments/production.rb instead of&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ENV[&apos;RAILS_RELATIVE_URL_ROOT&apos;] = &quot;/other_url&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Proof (From first principles)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;At &lt;a href=&quot;http://www.vquence.com&quot;&gt;Vquence&lt;/a&gt; we have a pretty standard rails setup&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Apache with mod_proxy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;pen&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;mongrel&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&quot;http://blog.gingertech.net&quot;&gt;Silvia&lt;/a&gt; recently wrote an application to allow us to edit the news articles posted to our corporate website. I wanted to do something I thought would be pretty simple: have the application appear at /news on our admin web server.&lt;/p&gt;
&lt;p&gt;Step one was the obvious change to mod_proxy&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ProxyPass /news http://localhost:8000
ProxyPassReverse /news http://localhost:8000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Of course the problem is that the rails app still thinks it is living on &lt;em&gt;/&lt;/em&gt; so it returns URLs like &lt;em&gt;/stylesheets/moo.css&lt;/em&gt; instead of &lt;em&gt;/news/stylesheets/moo.css&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;A bit of googling found a few email threads with a common solution. In your environment.rb set&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ENV[&apos;RAILS_RELATIVE_URL_ROOT&apos;] = &quot;/other_url&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is where things fell apart fairly quickly. I could not get this to work no matter what I tried. After a few hours of following an HTTP request through the whole Mongrel and rails stack I discovered the following.&lt;/p&gt;
&lt;p&gt;Setting &lt;em&gt;RAILS_RELATIVE_URL_ROOT&lt;/em&gt; will work fine if you are running rails using CGI, for the simple reason, which should have been more obvious to me sooner, that CGIs use environment variables to access their parameters. This can be seen in the ruby CGI class.&lt;/p&gt;
&lt;p&gt;/usr/lib/ruby/1.8/cgi.rb:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class CGI
  def env_table
    ENV
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However mongrel overloads &lt;em&gt;env_table&lt;/em&gt; and does the following instead&lt;/p&gt;
&lt;p&gt;/usr/lib/ruby/1.8/mongrel/cgi.rb:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class CGIWrapper &amp;lt; ::CGI
  # Used to wrap the normal env_table variable used inside CGI.
  def env_table
    @request.params
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This makes sense since the rails code is now running inside the web server, so environment variables aren&apos;t necessary. Upon investigation I found that the URL morphing magic is performed within rails as follows.&lt;/p&gt;
&lt;p&gt;/usr/share/rails/actionpack/lib/action_controller/request.rb:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class AbstractRequest
  cattr_accessor :relative_url_root
  
  # Returns the path minus the web server relative installation directory.
  # This can be set with the environment variable RAILS_RELATIVE_URL_ROOT.
  # It can be automatically extracted for Apache setups. If the server is not
  # Apache, this method returns an empty string.
  def relative_url_root
    @@relative_url_root ||= case
      when @env[&quot;RAILS_RELATIVE_URL_ROOT&quot;]
        @env[&quot;RAILS_RELATIVE_URL_ROOT&quot;]
      when server_software == &apos;apache&apos;
        @env[&quot;SCRIPT_NAME&quot;].to_s.sub(/\/dispatch\.(fcgi|rb|cgi)$/, &apos;&apos;)
      else
        &apos;&apos;
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What this all means is that you can solve the whole problem by placing the following in your &lt;em&gt;config/environments/production.rb&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ActionController::AbstractRequest.relative_url_root = &quot;/other_url&quot;
&lt;/code&gt;&lt;/pre&gt;
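&lt;p&gt;To make the lookup order concrete, here is a small sketch in Python (the class and attribute names are purely illustrative, not the real rails internals) of why the explicit assignment wins over the environment and Apache fallbacks:&lt;/p&gt;

```python
import re

# Illustrative Python model of the Rails lookup shown earlier: an explicitly
# assigned class attribute wins, then the CGI environment, then the Apache
# SCRIPT_NAME fallback. Names here are hypothetical stand-ins.
class AbstractRequest:
    relative_url_root_override = None  # stands in for @@relative_url_root

    def __init__(self, env):
        self.env = env

    def relative_url_root(self):
        if AbstractRequest.relative_url_root_override is not None:
            return AbstractRequest.relative_url_root_override
        if self.env.get("RAILS_RELATIVE_URL_ROOT"):
            return self.env["RAILS_RELATIVE_URL_ROOT"]
        if self.env.get("SERVER_SOFTWARE", "").startswith("apache"):
            # strip the dispatcher filename, as the Rails code does
            return re.sub(r"/dispatch\.(fcgi|rb|cgi)$", "",
                          self.env.get("SCRIPT_NAME", ""))
        return ""

# Under mongrel, env is @request.params rather than the process environment,
# so ENV['RAILS_RELATIVE_URL_ROOT'] never reaches it; only the explicit
# assignment path ever fires.
AbstractRequest.relative_url_root_override = "/other_url"
print(AbstractRequest({}).relative_url_root())  # /other_url
```

&lt;p&gt;In other words, because mongrel&apos;s env table is the request parameters rather than the process environment, the environment-variable branch is dead code under mongrel.&lt;/p&gt;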
&lt;p&gt;Now if only Einstein had put his theories to good use and invented a time machine then maybe I could get the last 4 hours of my life back :)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; Make sure &lt;em&gt;/other_url&lt;/em&gt; isn&apos;t the same name as one of your controllers or &lt;strong&gt;bad&lt;/strong&gt; things happen.&lt;/p&gt;
</content:encoded></item><item><title>linux.conf.au brings about another change</title><link>https://inodes.org/2007/03/26/linuxconfau-brings-about-another-change</link><guid isPermaLink="true">https://inodes.org/2007/03/26/linuxconfau-brings-about-another-change</guid><description>Being Technical Guru for linux.conf.au 2007 was one of the most amazing experiences I&apos;ve had in recent years. It was a lot of hard work but it was totally…</description><pubDate>Sun, 25 Mar 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Being Technical Guru for &lt;a href=&quot;http://lca2007.linux.org.au&quot;&gt;linux.conf.au 2007&lt;/a&gt; was one of the most amazing experiences I&apos;ve had in recent years. It was a lot of hard work but it was totally worth it. Having a room burst into applause at the penguin dinner when you say your the network guy is pretty unbelievable.&lt;/p&gt;
&lt;p&gt;I went up to the Hunter for a week to recover from the conference and as usual after linux.conf.au I did a lot of thinking as to whether it was time to try something new. This time change won out at the end of the day and after 6 years at &lt;a href=&quot;http://bulletproof.net&quot;&gt;Bulletproof&lt;/a&gt; I decided it was time to move on.&lt;/p&gt;
&lt;p&gt;At the beginning of March I started as Director of Engineering at &lt;a href=&quot;http://www.vquence.com&quot;&gt;Vquence&lt;/a&gt;. Since we are a video company it was decided that we each needed to have our own &lt;a href=&quot;http://www.vquence.com/about/john_ferlito#video&quot;&gt;video&lt;/a&gt; on the web.&lt;/p&gt;
&lt;p&gt;The past three weeks have been so hectic that Bulletproof already seems a lifetime ago. I&apos;ve been involved in everything from setting up the new office and the corporate infrastructure to product development.&lt;/p&gt;
&lt;p&gt;Joining a startup right at the beginning is always an amazing experience. With just a few people on the ground you always get pulled in a few million directions and there is always a new challenge just another five minutes away. I definitely recommend jumping at the opportunity if it ever presents itself.&lt;/p&gt;
</content:encoded></item><item><title>SLUG VoIP Slides</title><link>https://inodes.org/2007/01/03/slug-voip-slides</link><guid isPermaLink="true">https://inodes.org/2007/01/03/slug-voip-slides</guid><description>I&apos;ve finally gotten around to putting the slides from my SLUG talk up. Funnily enough linux.conf.au has kept me pretty busy, as usual I&apos;ll take this…</description><pubDate>Tue, 02 Jan 2007 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve finally gotten around to putting the slides from my &lt;a href=&quot;http://slug.org.au&quot;&gt;SLUG&lt;/a&gt; talk up. Funnily enough &lt;a href=&quot;http://lca2007.linux.org.au&quot;&gt;linux.conf.au&lt;/a&gt; has kept me pretty busy, as usual I&apos;ll take this opportunity to &lt;a href=&quot;http://justblamepia.com&quot;&gt;just blame&lt;/a&gt; &lt;a href=&quot;http://pipka.org/blog&quot;&gt;Pia&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can find the slides on my &lt;a href=&quot;http://inodes.org/blog/presentations/&quot;&gt;presentations&lt;/a&gt; page, and here is a direct link to the &lt;a href=&quot;/blog/2007/voip-what-it-can-do-for-you.pdf&quot;&gt;PDF&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A couple of people have asked me which VoIP phones and ATAs I recommend. I don&apos;t have a load of experience with different brands but have done a fair bit of research and really like the &lt;a href=&quot;http://snom.de&quot;&gt;SNOM&lt;/a&gt; phones and the &lt;a href=&quot;http://sipura.com&quot;&gt;Linksys (Sipura)&lt;/a&gt; ATAs the best.&lt;/p&gt;
&lt;p&gt;The main advantage of these units is that they are fairly high quality at a very good price. They are very configurable and can be mass deployed via DHCP, TFTP and CGI-based config files.&lt;/p&gt;
</content:encoded></item><item><title>linux.conf.au payment gateway</title><link>https://inodes.org/2006/12/02/lca-payment-gateway</link><guid isPermaLink="true">https://inodes.org/2006/12/02/lca-payment-gateway</guid><description>Some of you may have noticed that we have been having a few problems with the linux.conf.au payment gateway. These have ranged from timeouts due to email and…</description><pubDate>Sat, 02 Dec 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Some of you may have noticed that we have been having a few problems with the linux.conf.au payment gateway. These have ranged from timeouts due to email and DNS issues to 500 server errors due to one or two bugs.&lt;/p&gt;
&lt;p&gt;For those of you worried about duplicate payments, don&apos;t :) We were just sending duplicate receipts for a while. You see, Commsecure, as well as redirecting the user back to the payment_received page, also does a GET on the page themselves, which means we effectively receive duplicate transactions for everything, and this meant we were sending two receipts.&lt;/p&gt;
&lt;p&gt;Other than that the Commsecure setup is actually quite nice and does its best not to let users pay twice. It also seems to be written in python.&lt;/p&gt;
&lt;p&gt;I had always tried to avoid python, being a long time perl hacker. In the last few months I&apos;ve been dragged into it kicking and screaming. Scarily, I&apos;ve actually come to like it. It&apos;s nice having real exceptions! Pylons, Myghty and SQLAlchemy are also pretty cool frameworks and have meant I&apos;ve come up to speed on the website code pretty quickly.&lt;/p&gt;
&lt;p&gt;Anyway back to LCA, we are a handful of rego&apos;s away from having 500! Don&apos;t forget you&apos;ve got till the &lt;strong&gt;8th December&lt;/strong&gt; to pay if you registered early enough to get earlybird rates.&lt;/p&gt;
</content:encoded></item><item><title>Lindsay made me do it! </title><link>https://inodes.org/2006/11/22/lindsay-made-me-do-it</link><guid isPermaLink="true">https://inodes.org/2006/11/22/lindsay-made-me-do-it</guid><description>While at the Waugh Partners launch party tonight, a bunch of people, mainly Lindsay asked for some details on what I&apos;d be talking about at SLUG on Friday. I…</description><pubDate>Wed, 22 Nov 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;While at the &lt;a href=&quot;http://waughpartners.com.au&quot;&gt;Waugh Partners&lt;/a&gt; launch party tonight, a bunch of people, mainly &lt;a href=&quot;http://holmwood.id.au/~lindsay&quot;&gt;Lindsay&lt;/a&gt; asked for some details on what I&apos;d be talking about at &lt;a href=&quot;http://slug.org.au&quot;&gt;SLUG&lt;/a&gt; on Friday. I thought that was a very good question and that I should make something up :)&lt;/p&gt;
&lt;p&gt;So for those that are wondering I will attempt to cover the following topics in no particular order or level of detail&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;VoIP :)&lt;/li&gt;
&lt;li&gt;Codecs, which one should I use&lt;/li&gt;
&lt;li&gt;VoIP Hardware (Phones, ATAs, ISDN and PSTN cards, Mobile Pods)&lt;/li&gt;
&lt;li&gt;VoIP Providers and what they offer&lt;/li&gt;
&lt;li&gt;Asterisk and what it can do&lt;/li&gt;
&lt;li&gt;Beagle Internet IVR and distributed VoIP Call Centre as a case study&lt;/li&gt;
&lt;li&gt;Asterisk@Home&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If there is anything else that in particular you are interested in or would like me to talk about then let me know.&lt;/p&gt;
&lt;p&gt;I&apos;ll also be bringing along various bits of hardware and hope to have a full demonstration running.&lt;/p&gt;
&lt;p&gt;At &lt;a href=&quot;http://perkypants.org/blog/&quot;&gt;Jeff&apos;s&lt;/a&gt; request I will be doing an in-depth overview of the difference between FXO and FXS and why it is critically important to any VoIP implementation. This will most likely require at least 20 slides and about 50 minutes of explanation :P&lt;/p&gt;
</content:encoded></item><item><title>iptables evilness</title><link>https://inodes.org/2006/11/13/iptables-evilness</link><guid isPermaLink="true">https://inodes.org/2006/11/13/iptables-evilness</guid><description>Matt came to me with an interesting problem at Bulletproof this week. We have a dedicated hosting customer who talks to an external application for e-commerce.…</description><pubDate>Sun, 12 Nov 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Matt came to me with an interesting problem at &lt;a href=&quot;http://bulletproof.net&quot;&gt;Bulletproof&lt;/a&gt; this week. We have a dedicated hosting customer who talks to an external application for e-commerce. The IP for this was going to change but they needed to do to some testing before the switch. As usual with most enterprise applications, the hostname was hard coded. :(&lt;/p&gt;
&lt;p&gt;Matt suggested we do some DNS poisoning or do some transparent proxying using squid or similar. While these would have worked they required firewall changes through three levels of firewalls and extra infrastructure.&lt;/p&gt;
&lt;p&gt;So I turned to an evil solution, iptables. :)  Most people use DNAT on the inbound connection from the  Internet to their internal private network to port forward to internal servers, or perform one-to-one NAT mappings. There is nothing stopping you using it the other way around.&lt;/p&gt;
&lt;p&gt;Lets say that every time someone browses to &lt;a href=&quot;http://bulletproof.net&quot;&gt;http://bulletproof.net&lt;/a&gt; we want them to hit &lt;a href=&quot;http://inodes.org&quot;&gt;http://inodes.org&lt;/a&gt; instead. All you need to do is use DNAT to translate one IP address into the other.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;animal:~ johnf$ host bulletproof.net
bulletproof.net has address 202.44.98.174
animal:~ johnf$ host inodes.org
inodes.org has address 202.125.41.97
animal:~ johnf$ sudo iptables -t nat -A PREROUTING -d 202.44.98.174 -j DNAT  --to 202.125.41.97
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now for some testing, a ping looks normal&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;animal:~johnf$ ping www.bulletproof.net
PING www.bulletproof.net.au (202.44.98.174) 56(84) bytes of data.
64 bytes from 202.44.98.174: icmp_seq=1 ttl=241 time=198 ms
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;but a tcpdump looks like&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;animal:~johnf$ sudo tcpdump -ni eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
16:35:25.099510 IP 211.30.227.143 &amp;gt; 202.125.41.97: icmp 64: echo request seq 1
16:35:25.301712 IP 202.125.41.97 &amp;gt; 211.30.227.143: icmp 64: echo reply seq 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Of course if anyone needs to try and debug this they are going to have a really fun time working out what is going on. :)&lt;/p&gt;
&lt;p&gt;If you want to test this yourself you can do it on your own machine using the &lt;strong&gt;OUTPUT&lt;/strong&gt; chain instead of &lt;strong&gt;PREROUTING&lt;/strong&gt;.&lt;/p&gt;
</content:encoded></item><item><title>250!</title><link>https://inodes.org/2006/11/11/250</link><guid isPermaLink="true">https://inodes.org/2006/11/11/250</guid><description>We&apos;ve just hit 250 registrations for linux.conf.au, only 5 days to go before early bird registrations close. So here are some interesting stats of the attendee…</description><pubDate>Sat, 11 Nov 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We&apos;ve just hit 250 registrations for &lt;a href=&quot;http://lca2007.linux.org.au&quot;&gt;linux.conf.au&lt;/a&gt;, only 5 days to go before early bird registrations close.&lt;/p&gt;
&lt;p&gt;So here are some interesting stats of the attendee breakdown so far:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;By Country&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Country&lt;/th&gt;
&lt;th&gt;Number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Brazil&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Canada&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;France&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ireland&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Liberia&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nigeria&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;China&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Singapore&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Spain&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UK&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Croatia&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Germany&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Japan&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Romania&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New Zealand&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;USA&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Australia&lt;/td&gt;
&lt;td&gt;188&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Australia by state&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;State&lt;/th&gt;
&lt;th&gt;Number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NT&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TAS&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WA&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;QLD&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SA&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACT&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VIC&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NSW&lt;/td&gt;
&lt;td&gt;77&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
</content:encoded></item><item><title>ThinkingLinux &apos;06</title><link>https://inodes.org/2006/10/20/thinkinglinux-06</link><guid isPermaLink="true">https://inodes.org/2006/10/20/thinkinglinux-06</guid><description>ThinkingLinux &apos;06 was held in Melbourne a few days ago. It was organised by Synergy Plus with sponsorship by RedHat. Novel and a few others. I gave a talk on…</description><pubDate>Fri, 20 Oct 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;http://www.thinkinglinux.com.au&quot;&gt;ThinkingLinux &apos;06&lt;/a&gt; was held in Melbourne a few days ago. It was organised by &lt;a href=&quot;http://synergy.com.au&quot;&gt;Synergy Plus&lt;/a&gt; with sponsorship by RedHat. Novel and a few others.&lt;/p&gt;
&lt;p&gt;I gave a talk on &lt;a href=&quot;http://inodes.org/blog/presentations&quot;&gt;Open Source in the Data Centre&lt;/a&gt;. Luckily this talk was after lunch so I got to do some editing in the morning sessions to tweak it more towards a business rather than technical audience. :)&lt;/p&gt;
&lt;p&gt;The conference was pretty awesome with interesting talks, ranging from Xen to how wotif.com was started.&lt;/p&gt;
&lt;p&gt;Copies of the slides for all the talks should eventually make it onto the conference&apos;s website.&lt;/p&gt;
</content:encoded></item><item><title>Open Source in the Data Centre</title><link>https://inodes.org/2006/10/12/open-source-in-the-data-centre</link><guid isPermaLink="true">https://inodes.org/2006/10/12/open-source-in-the-data-centre</guid><description>Next Tuesday (17th Oct) I&apos;ll be giving a presentation at Thinking Linux &apos;06 in Melbourne. The talk is entitled Open Source in the Data Centre and I&apos;ll be…</description><pubDate>Wed, 11 Oct 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Next Tuesday (17th Oct) I&apos;ll be giving a presentation at &lt;a href=&quot;http://www.thinkinglinux.com.au&quot;&gt;Thinking Linux &apos;06&lt;/a&gt; in Melbourne.&lt;/p&gt;
&lt;p&gt;The talk is entitled &lt;em&gt;Open Source in the Data Centre&lt;/em&gt; and I&apos;ll be covering things like&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Load Balancing &quot;Stuff&quot; (IPVS, keepalived, heartbeat)&lt;/li&gt;
&lt;li&gt;Monitoring using Nagios and MRTG/rrdtool&lt;/li&gt;
&lt;li&gt;Authentication with OpenLDAP and FreeRADIUS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and a whole lot of other random things I can fit into 40 minutes.&lt;/p&gt;
&lt;p&gt;I choose to blame Pia for putting me in a position to give this talk but only because it&apos;s Jeff&apos;s fault and there isn&apos;t a justblamejdub.com :)&lt;/p&gt;
&lt;p&gt;If anyone wants to catch up on the Monday night down in Melbourne then let me know.&lt;/p&gt;
&lt;p&gt;I&apos;ll put slides up after the event.&lt;/p&gt;
</content:encoded></item><item><title>LCA2007 Paper Review</title><link>https://inodes.org/2006/10/09/lca2007-paper-review</link><guid isPermaLink="true">https://inodes.org/2006/10/09/lca2007-paper-review</guid><description>The seven review team met all day Saturday to work out which of the almost 300 submissions were going to make it into the conference. I had created some…</description><pubDate>Mon, 09 Oct 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The seven review team met all day Saturday to work out which of the almost 300 submissions were going to make it into the conference.&lt;/p&gt;
&lt;p&gt;I had created some statistics based on the reviews the team had performed. This helped the team easily pick the coolest papers for inclusion and the mildly cool for exclusion, which then left the difficult job of sorting through the ones in the middle.&lt;/p&gt;
&lt;p&gt;The hardest job was rejecting papers. Even though the team had to pick 80 out of 300 papers, I can easily say that the quality of what was rejected was exceptional. Almost all of them would have been included if we could hold a 3-week conference.&lt;/p&gt;
&lt;p&gt;Here is some info from those on the review team on how to make sure your paper is near the top of the list.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://puzzling.org/logs/tech/2006/October/9/lca-reviews&quot;&gt;Mary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://ozlabs.org/~rusty/index.cgi/tech/2006-10-04.html&quot;&gt;Rusty&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.mega-nerd.com/erikd/Blog/lca2007_review_committee.html&quot;&gt;Eric&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://perkypants.org/blog/2006/16/seventh-heaven/&quot;&gt;Jeff&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Of course, what would statistics be without pretty graphs? I&apos;ve posted daily and cumulative submission graphs. As expected this proves that people always leave things till the last minute, and sometimes a little bit later than that :)&lt;/p&gt;
&lt;p&gt;One enterprising individual even tried to submit something yesterday!&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/blog/2006/daily_cfp1.png&quot;&gt;&lt;img src=&quot;/blog/2006/daily_cfp1.png&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;
&lt;a href=&quot;/blog/2006/cumulative.png&quot;&gt;&lt;img src=&quot;/blog/2006/cumulative.png&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>Build your own ISP</title><link>https://inodes.org/2006/09/28/build-your-own-isp</link><guid isPermaLink="true">https://inodes.org/2006/09/28/build-your-own-isp</guid><description>I&apos;ve finally gotten around to putting up the slides for my Build your own ISP talk I gave at Software Freedom Day and DEBSIG. You can find them on my…</description><pubDate>Thu, 28 Sep 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve finally gotten around to putting up the slides for my &lt;strong&gt;Build your own ISP&lt;/strong&gt; talk I gave at Software Freedom Day and DEBSIG. You can find them on my &lt;a href=&quot;http://inodes.org/blog/presentations/&quot;&gt;Presentations&lt;/a&gt; page or a the direct link to the PDF &lt;a href=&quot;/blog/2006/build_your_own_isp1.pdf&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The slides are fairly sparse, the talk was a bit of a brain dump about random things to do with ISPs. I&apos;m sure someone is going to ask me to give it at SLUG again at some stage :)&lt;/p&gt;
</content:encoded></item><item><title>Software Freedom Day - AV and ISPs</title><link>https://inodes.org/2006/09/15/software-freedom-day-av-and-isps</link><guid isPermaLink="true">https://inodes.org/2006/09/15/software-freedom-day-av-and-isps</guid><description>This Saturday is Software Freedom Day, the main Sydney event sponsored by SLUG will be being held at UNSW, more specific details can be found at the Sydney SFD…</description><pubDate>Thu, 14 Sep 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This Saturday is &lt;a href=&quot;http://softwarefreedomday.org&quot;&gt;Software Freedom Day&lt;/a&gt;, the main Sydney event sponsored by &lt;a href=&quot;http://slug.org.au&quot;&gt;SLUG&lt;/a&gt; will be being held at &lt;a href=&quot;http://unsw.edu.au&quot;&gt;UNSW&lt;/a&gt;, more specific details can be found at the &lt;a href=&quot;http://softwarefreedomday/teams/oceania/au/sydney&quot;&gt;Sydney SFD page.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As part of the lead up to linux.conf.au 2007, the AV Team is going to be using this day as a trial run to test out some of the technology we will probably be using at the conference. This currently includes&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://flumotion.net&quot;&gt;Flumotion&lt;/a&gt; for streaming of talks&lt;/li&gt;
&lt;li&gt;Streaming from a scan converter for slides&lt;/li&gt;
&lt;li&gt;Recording to DVDs using a consumer DVD recorder for post production and upload to the website&lt;/li&gt;
&lt;li&gt;Standard recording to DV Tape as a backup :)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Apparently I will also be giving a workshop at 1pm, I say apparently because as usual we get to &lt;a href=&quot;http://justblamepia.com&quot;&gt;justblamepia.com&lt;/a&gt;. For the unwary, suggesting to Pia that you might be able to give a talk means that you will :) The talk is currently titled &lt;strong&gt;How to build your own ISP - But why you shouldn&apos;t&lt;/strong&gt;, it will focus on the &lt;em&gt;right&lt;/em&gt; way to set one up from scratch from my experiences at ZipWorld and &lt;a href=&quot;http://beagle.com.au&quot;&gt;Beagle Internet&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>just blame Pia!</title><link>https://inodes.org/2006/09/12/just-blame-pia</link><guid isPermaLink="true">https://inodes.org/2006/09/12/just-blame-pia</guid><description>At a couple of linux.conf.au meetings we kept coming across the same recurring theme, everything just seemed to be Pia&apos;s fault :) So one dark and rainy night…</description><pubDate>Mon, 11 Sep 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At a couple of &lt;a href=&quot;http://lca2007.linux.org.au&quot;&gt;linux.conf.au&lt;/a&gt; meetings we kept coming across the same recurring theme, everything just seemed to be Pia&apos;s fault :)&lt;/p&gt;
&lt;p&gt;So one dark and rainy night &lt;a href=&quot;http://pipka.org/blog/2006/12/just-blame-me/&quot;&gt;justblamepia.com  &lt;/a&gt; was born.&lt;/p&gt;
&lt;p&gt;We even get to blame Pia for this post because the site isn&apos;t even ready yet, it is supposed to get a spruce up (I have no artistic skills you see), but she seems to have stumbled over it overnight.&lt;/p&gt;
&lt;p&gt;Just blame Pia!&lt;/p&gt;
</content:encoded></item><item><title>linux.conf.au 2007 Technical Guru</title><link>https://inodes.org/2006/09/08/linuxconfau-2007-technical-guru</link><guid isPermaLink="true">https://inodes.org/2006/09/08/linuxconfau-2007-technical-guru</guid><description>Most people probably aren&apos;t aware that a few months ago I became Head Technical Guru as part of the seven team organising linux.conf.au 2007. All blame for…</description><pubDate>Fri, 08 Sep 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Most people probably aren&apos;t aware that a few months ago I became &lt;em&gt;Head Technical Guru&lt;/em&gt; as part of the seven team organising &lt;a href=&quot;http://lca2007.linux.org.au&quot;&gt;linux.conf.au 2007&lt;/a&gt;. All blame for this shall lie solely with &lt;a href=&quot;http://pipka.org/blog/&quot;&gt;Pia&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&amp;lt;Pia&amp;gt; Hey John are you still interested in helping out with the conference?&lt;/p&gt;
&lt;p&gt;&amp;lt;John&amp;gt; Yeah sure&lt;/p&gt;
&lt;p&gt;&amp;lt;Pia&amp;gt; Cool. Can you come to a meeting tonight at my place?&lt;/p&gt;
&lt;p&gt;&amp;lt;John@meeting&amp;gt; Pia, why are you writing my name down on the seven page?&lt;/p&gt;
&lt;p&gt;The next months are going to be pretty interesting. Those who know me well will know it&apos;s not like I had much else going on :).&lt;/p&gt;
&lt;p&gt;Basically it&apos;s my job to organise all the technical infrastructure and make sure it works. This includes&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Internet Connectivity&lt;/li&gt;
&lt;li&gt;Wireless Access&lt;/li&gt;
&lt;li&gt;AV Team - Streaming, Publishing of videos etc (This team is being run by &lt;a href=&quot;http://gingertech.dyndns.org/blog/&quot;&gt;Silvia&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Random other servers for things like the website, IRC, portal etc&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the next few weeks I&apos;ll be gearing up towards a much clearer plan as to what actually needs doing and then I&apos;ll be looking for volunteers so if you are interested in helping out whether it be during the conference setting things up or during the lead up hacking on code to make all the infrastructure work then please let me know.&lt;/p&gt;
</content:encoded></item><item><title>TCP Window Scaling and kernel 2.6.17+</title><link>https://inodes.org/2006/09/06/tcp-window-scaling-and-kernel-2617</link><guid isPermaLink="true">https://inodes.org/2006/09/06/tcp-window-scaling-and-kernel-2617</guid><description>So I was tearing my hair out today. I&apos;d installed Ubuntu onto a new Sun X4200 so that I could migrate Bulletproof&apos;s monitoring system to it. (Note you need to…</description><pubDate>Wed, 06 Sep 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;So I was tearing my hair out today. I&apos;d installed Ubuntu onto a new Sun X4200 so that I could migrate Bulletproof&apos;s monitoring system to it. (Note you need to use edgy knot-1 for the SAS drives to be supported). Anyway as I was installing packages I was getting speeds like 10kB/s. Normally I would expect 800-1000kB/s.&lt;/p&gt;
&lt;p&gt;I did the usual sort of debugging: were there any errors on the switch, was it affecting other servers on the same network, etc. Everything looked fine. Our friend tcpdump showed a dump that looked something like this.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;root@oldlace:~# tcpdump -ni bond0 port 80
tcpdump: listening on bond0
1.2.3.4.42501 &amp;gt; 203.16.234.85.80: S 0:0 win 5840 &amp;lt;mss 1460,sackOK,timestamp 94318 0,nop,wscale 6&amp;gt; (DF)
203.16.234.85.80 &amp;gt; 1.2.3.4.42501: S 0:0(0) ack 1 win 5840 &amp;lt;mss 1460,nop,wscale 2&amp;gt; (DF)
1.2.3.4.42501 &amp;gt; 203.16.234.85.80: . ack 1 win 92 (DF)
1.2.3.4.42501 &amp;gt; 203.16.234.85.80: P 1:352(351) ack 1 win 92 (DF)
203.16.234.85.80 &amp;gt; 1.2.3.4.42501: . ack 352 win 1608 (DF)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&apos;ll notice that the server initially advertises a window size of 5840, then suddenly in the first ACK it is advertising a size of 92. This means that the other side can only send 92 bytes before waiting for an ACK!!! Not very conducive to quick WAN transfer speeds.&lt;/p&gt;
&lt;p&gt;After a lot of Google searching I discovered these threads on LKML&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://www.gatago.com/linux/kernel/9440712.html&quot;&gt;http://www.gatago.com/linux/kernel/9440712.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://lwn.net/Articles/92727/&quot;&gt;http://lwn.net/Articles/92727/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://oss.sgi.com/archives/netdev/2004-07/msg00142.html&quot;&gt;http://oss.sgi.com/archives/netdev/2004-07/msg00142.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Of course what I was missing was the wscale 6, which means that the window was actually 92 * 2^6 = 5888. Which is pretty close to 5840, so why bother with the scaling? Because towards the end of the connection we get 16022 * 2^6 = 1025408, which doesn&apos;t normally fit into a TCP header.&lt;/p&gt;
&lt;p&gt;So why aren&apos;t things screaming along with this massive window? Well, something in the middle doesn&apos;t like a window scaling factor of 6 and is resetting it to zero, which means the other end thinks the window size really is 92.&lt;/p&gt;
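&lt;p&gt;To make the arithmetic concrete, here is a quick sketch in Python (the function name is just illustrative): the effective window is the advertised 16-bit value shifted left by the negotiated wscale factor.&lt;/p&gt;

```python
# The effective TCP window is the advertised 16-bit window multiplied by
# 2^wscale, as negotiated in the SYN (TCP window scaling).
def effective_window(advertised, wscale):
    return advertised * 2 ** wscale

print(effective_window(92, 6))     # 5888: what the peer should assume
print(effective_window(16022, 6))  # 1025408: far more than fits in 16 bits
print(effective_window(92, 0))     # 92: what you get when a middlebox zeroes wscale
```

&lt;p&gt;With the scale factor stripped, every ACK really does stall the sender after 92 bytes.&lt;/p&gt;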
&lt;p&gt;There are 2 quick fixes. First you can simply turn off window scaling altogether by doing&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;echo 0 &amp;gt; /proc/sys/net/ipv4/tcp_window_scaling
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;but that limits your window to 64k. Or you can limit the size of your TCP buffers back to pre-2.6.17 kernel values, which means a wscale value of about 2 is used, which is acceptable to most broken routers.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;echo &quot;4096    16384   131072&quot; &amp;gt; /proc/sys/net/ipv4/tcp_wmem
echo &quot;4096    87380   174760&quot; &amp;gt; /proc/sys/net/ipv4/tcp_rmem
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The original values would have had 4MB in the last column above which is what was allowing these massive windows.&lt;/p&gt;
&lt;p&gt;In a thread somewhere, which I can&apos;t find anymore, Dave Miller had a great quote along the lines of:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I refuse to workaround it, window scaling has been part of the protocol since 1999, deal with it.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
</content:encoded></item><item><title>VMware Consolidated Backup</title><link>https://inodes.org/2006/08/23/vmware-consolidated-backup</link><guid isPermaLink="true">https://inodes.org/2006/08/23/vmware-consolidated-backup</guid><description>The last few months have seen me working at an insane pace at Bulletproof in the lead up to a launch of our latest and greatest product Dedicated Virtual…</description><pubDate>Wed, 23 Aug 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The last few months have seen me working at an insane pace at &lt;a href=&quot;http://bulletproof.net&quot;&gt;Bulletproof&lt;/a&gt; in the lead up to a launch of our latest and greatest product Dedicated Virtual Machine Hosting or DVMH for short. I&apos;ll ramble on a bit more about it after it&apos;s launched but basically it is similar to our existing Managed Dedicated Hosting but running on &lt;a href=&quot;http://vmware.org&quot;&gt;VMware&lt;/a&gt; and with a whole heap of cool features due to the benefits of virtualisation.&lt;/p&gt;
&lt;p&gt;Today saw me working with one of these cool features, Consolidated Backup. Basically what this lets you do is have a Windows 2003 server plugged directly into the SAN, so that it can see all the VM images sitting in the VMFS LUNs. It then talks to the ESX servers, takes a snapshot and makes a copy of it to local disk. Hey presto, Disaster Recovery. Well, mostly anyway; the restoration aspect isn&apos;t all that crash hot, as you&apos;ll see below.&lt;/p&gt;
&lt;p&gt;Documentation on performing the backups is a bit scarce. VMware provide some scripts that let you tie it in to some commercial backup products like Legato, Veritas and NetBackup but no real docs on how to do it yourself.&lt;/p&gt;
&lt;p&gt;So here are some quick examples. &lt;em&gt;(You can find all these commands in C:\Program Files\VMware\VMware Consolidated Backup Framework.)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Getting a list of VMs on your ESX farm.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vcbVmName.exe -h VC_HOST -u USERNAME -p PASSWORD -s any:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Backing up a VM&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vcbMounter.exe -h VC_HOST -u USERNAME -p PASSWORD -a moref:MOREF -r DESTINATION -t fullvm -m san
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;where MOREF comes from the list you created above and DESTINATION is a local path on your VCB proxy.&lt;/p&gt;
&lt;p&gt;Strictly speaking, you should then unmount it by doing&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vcbMounter.exe  -d DESTINATION
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;but I don&apos;t think this does any more than delete the files, since the snapshot on the ESX server has already been closed.&lt;/p&gt;
&lt;p&gt;The above creates something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;catalog
MyVM.nvram
MyVM.vmx
scsi0-0-0-MyVM-s001.vmdk
scsi0-0-0-MyVM-s002.vmdk
scsi0-0-0-MyVM-s003.vmdk
scsi0-0-0-MyVM-s004.vmdk
scsi0-0-0-MyVM-s005.vmdk
scsi0-0-0-MyVM.vmdk
unmount.dat
vmware-1.log
vmware-2.log
vmware-3.log
vmware-4.log
vmware-5.log
vmware.log
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Mounting a VM image locally&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vmmount.exe -d VMDK -cycleId -sysdl LOCATION
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;VMDK needs to be &lt;em&gt;scsi0-0-0-MyVM.vmdk&lt;/em&gt; from above.&lt;/p&gt;
&lt;p&gt;You can then unmount it by doing&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vmmount.exe -u LOCATION
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is nice and easy, and really useful: it means you can now easily back up everything to tape.&lt;/p&gt;
&lt;p&gt;Recovery is another matter entirely. Apparently in the Beta releases vcbRestore was distributed with Consolidated Backup, but in the final release it only exists on the ESX servers. So you need to move the directory above to one of your ESX boxes. You then do&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vcbRestore -h VC_HOST -u USERNAME -p PASSWORD -s DIRECTORY
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will totally replace your existing VM. If you want a copy instead, you should copy the catalog file elsewhere, edit it to change the paths, and run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vcbRestore -h VC_HOST -u USERNAME -p PASSWORD -s DIRECTORY -a CATALOG
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a couple more features I haven&apos;t mentioned which you can work out for yourself by using -h, e.g. file level backups for Windows VMs.&lt;/p&gt;
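&lt;p&gt;For example, from memory a file level backup looks much the same as the full VM backup above, just with a different -t type, which mounts the VM&apos;s file systems under DESTINATION rather than exporting the disk images. Check -h on your version for the exact syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vcbMounter.exe -h VC_HOST -u USERNAME -p PASSWORD -a moref:MOREF -r DESTINATION -t file -m san
&lt;/code&gt;&lt;/pre&gt;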
&lt;p&gt;Now all of the above is great, but VMware have taken things a step further. With the above, if your VM is running VMware Tools, the equivalent of a sync is done before the snapshot is taken, which effectively gives you slightly better than a crash-consistent dump. Though you could still lose some DB data.&lt;/p&gt;
&lt;p&gt;So VMware have added some functionality to rectify this. Just before the snapshot is made, /usr/sbin/pre-freeze-script or C:\Windows\pre-freeze-script.bat is run, and /usr/sbin/post-thaw-script or C:\Windows\post-thaw-script.bat is run afterwards. Taking a snapshot only takes a few minutes, so you could use these scripts to stop your database, for example.&lt;/p&gt;
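&lt;p&gt;As a rough sketch for a Linux VM running MySQL (the init script path is just an example, adjust for your distribution):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
# /usr/sbin/pre-freeze-script - quiesce the database before the snapshot
/etc/init.d/mysql stop
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
# /usr/sbin/post-thaw-script - bring the database back afterwards
/etc/init.d/mysql start
&lt;/code&gt;&lt;/pre&gt;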
&lt;p&gt;I highly recommend reading the &lt;a href=&quot;http://www.vmware.com/pdf/vi3_vm_backup.pdf&quot;&gt;VMware Consolidated Backup&lt;/a&gt; manual for all the extra features I haven&apos;t covered.&lt;/p&gt;
</content:encoded></item><item><title>Hmm Blogging</title><link>https://inodes.org/2006/07/24/hmm-blogging</link><guid isPermaLink="true">https://inodes.org/2006/07/24/hmm-blogging</guid><description>So I&apos;ve decided to give this blogging thing a go. Let&apos;s see how long I keep it up for...</description><pubDate>Mon, 24 Jul 2006 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;So I&apos;ve decided to give this blogging thing a go. Let&apos;s see how long I keep it up for...&lt;/p&gt;
</content:encoded></item></channel></rss>