It has been over 8 years since my last post here. High time I rectified that. This is a test to make sure everything is still hooked together right.
Watch this space!
I’ll be at Devops Down Under this weekend. This should be an amazing weekend, filled with talks which aim to help bridge the Developer and Sysadmin divide.
I’ll be giving a presentation entitled Commit early, Deploy often. I’ll be talking about using package management to empower developers to deploy applications locally just as they would in production. This also means sysadmins can deploy using the exact same environment.
There are still a few tickets left, so if you are in Sydney this weekend and are either a developer or a sysadmin then make sure you come along.
Disclaimer: I’m also sponsoring the event.
If you haven’t done so, please go and vote in the Linux Australia elections. If you aren’t a member then just join first; membership is free.
I’m running for the position of Treasurer, but you don’t need to vote for me since I’m running unopposed.
I’m running on a common platform with a group of other like minded individuals. You can find the details of the platform here.
The main reason I’m running is I believe that Linux Australia can achieve so much more than it does today. Linux Australia should not simply be a conduit for linux.conf.au.
I want to help turn Linux Australia into an organisation that is relevant to all of us. It should be an organisation that not only fosters and supports the community but also represents the community.
We should offer supportive services to our members, spread the FOSS message through the community as well as actively lobby government for the things we believe in.
Most importantly it is essential that we all become involved. The community is nothing without people to move it forwards. So I would encourage you to vote for
President James Turnbull
Vice President Lindsay Holmwood
Secretary Peter Lieverdink
Treasurer John Ferlito
Ordinary Committee Members
Alice Boxhall
Elspeth Thorne
Once you have finished voting, go and join the mailing lists and get involved.
The Vqmetrics application needs to connect to two different databases. The first holds the videos, authors and their relevant statistics, while the second database holds the users, monitors and trackers.
We do this by specifying two databases in config/database.yml.
development:
  database: vqmetrics_devel
  <<: *login_dev_local

vqdata_development: &VQDATA_TEST
  database: vqdata_devel
  <<: *login_dev_local
So by default the vqmetrics_devel database will be used. When we need to specify a model where we need to connect to the vqdata_devel database we use
class Video < ActiveRecord::Base
  establish_connection "vqdata_#{RAILS_ENV}"
end
and for migrations that need to connect to this database we do the following.
class InitialSetup < ActiveRecord::Migration
  def self.connection
    Video.connection
  end
end
This setup works really well. However, I recently moved this application to using Cucumber for testing. Tests worked fine the first time they were run, but not the second.
I discovered that the transactions on the second database were not being rolled back as they should be. Cucumber only sets up the first database for rollback by using
ActiveRecord::Base.connection
when it should be rolling them all back by looping through
ActiveRecord::Base.connection_handler.connection_pools.values.map {|pool| pool.connection}
I’ve filed a bug at lighthouseapp.
One of the things I love about the Ubuntu project and launchpad is the Personal Package Archive. PPAs make it so simple and easy to backport packages. The only problem with PPAs is that they are public. I had a need to be able to host some private internal packages as well as squid with SSL support, which you can’t distribute in binary form due to licensing restrictions.
Basically, I wanted to create the equivalent of an Ubuntu PPA service running on our own servers so we could place it behind our firewall. This post documents the process I followed to integrate rebuildd and reprepro to replicate a PPA setup.
So first up install reprepro
aptitude install reprepro
Next we need to create a reprepro repository
mkdir -p /srv/reprepro/{conf,incoming,incomingtmp}
Now we need to tell reprepro which distributions we care about. Create /srv/reprepro/conf/distributions with the following contents
Suite: hardy
Version: 8.04
Codename: hardy
Architectures: i386 amd64 source
Components: main
Description: Local Hardy
SignWith: repository@inodes.org
DebIndices: Packages Release . .gz .bz2
DscIndices: Sources Release .gz .bz2
Tracking: all includechanges keepsources
Log: logfile
 --changes /srv/reprepro/bin/build_sources

Suite: intrepid
Version: 8.10
Codename: intrepid
Architectures: i386 amd64 source
Components: main
Description: Local Intrepid
SignWith: repository@inodes.org
DebIndices: Packages Release . .gz .bz2
DscIndices: Sources Release .gz .bz2
Tracking: all includechanges keepsources
Log: logfile
 --changes /srv/reprepro/bin/build_sources

Suite: jaunty
Version: 9.04
Codename: jaunty
Architectures: i386 amd64 source
Components: main
Description: Local Jaunty
SignWith: repository@inodes.org
DebIndices: Packages Release . .gz .bz2
DscIndices: Sources Release .gz .bz2
Tracking: all includechanges keepsources
Log: logfile
 --changes /srv/reprepro/bin/build_sources
I also like to create a reprepro options file to set up some defaults. Edit /srv/reprepro/conf/options
verbose
verbose
verbose
verbose
verbose
Next we need to set up an incoming queue so that we can use dput to get the source packages into reprepro.
vi /srv/reprepro/conf/incoming
Name: incoming
IncomingDir: incoming
Allow: hardy intrepid jaunty
Cleanup: on_deny on_error
Tempdir: incomingtmp
The repository is now ready to go, so now we can set up Apache. Edit /etc/apache/sites-enabled/pppa
ServerName packages.inodes.org
DocumentRoot /srv/reprepro
and we should also configure our sources.list to use these repositories. Edit /etc/apt/sources.list
# Sources for rebuildd
deb-src http://packages.inodes.org hardy main
deb-src http://packages.inodes.org intrepid main
deb-src http://packages.inodes.org jaunty main
Next we want to set up our dput.cf to make the magic happen and get the source packages into the archive. Edit ~/.dput.cf
[DEFAULT]
default_host_main = notspecified

[local]
fqdn = localhost
method = local
incoming = /srv/reprepro/incoming
allow_unsigned_uploads = 0
run_dinstall = 0
post_upload_command = reprepro -V -b /srv/reprepro processincoming incoming
So now we can do the following
apt-get source squid3
cd squid3*
dch -i # increment version number
dpkg-buildpackage -sa -S
cd ..
dput local *changes
aptitude update
apt-get source squid3
So when you run dput, it first copies the source package files to /srv/reprepro/incoming and then gets reprepro to process its incoming queue. This means that the source package is now sitting in the repository.
So the second apt-get source should have downloaded the source package from our local repository, which is exactly what rebuildd will do before it tries to build it.
The next step is to set up rebuildd so that it builds the binary packages and installs them into the repository.
aptitude install rebuildd
Set it up so it runs out of init.d and knows which releases we care about. Edit /etc/default/rebuildd
START_REBUILDD=1
START_REBUILDD_HTTPD=1
DISTS="hardy intrepid jaunty"
Now when a source package is uploaded into the repository we want to kick off rebuildd to build the package. We can do this through the reprepro log hooks. You’ll notice in the conf/distributions above the following lines.
Log: logfile
 --changes /srv/reprepro/bin/build_sources
This script will be run any time a .changes file is added to the repository. Create /srv/reprepro/bin/build_sources
#!/bin/bash

action=$1
release=$2
package=$3
version=$4
changes_file=$5

# Only care about packages being added
if [ "$action" != "accepted" ]
then
    exit 0
fi

# Only care about source packages
echo $changes_file | grep -q _source.changes
if [ $? = 1 ]
then
    exit 0
fi

# Kick off the job
echo "$package $version 1 $release" | sudo rebuildd-job add
This script checks that the right type of package is being added, then calls rebuildd-job to ask for that specific package and version to be built for that Ubuntu release.
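The filtering logic can be sketched in isolation (the function name below is mine, for illustration only): only an accepted action on a _source.changes file should queue a build, while binary uploads and other log events are ignored.

```shell
# Hypothetical helper mirroring the build_sources filter: succeed only for
# an "accepted" action on a source changes file.
should_queue_build() {
    action=$1
    changes_file=$2
    # Only care about packages being added
    [ "$action" = "accepted" ] || return 1
    # Only care about source packages
    echo "$changes_file" | grep -q '_source\.changes$'
}

should_queue_build accepted squid3_3.0-1_source.changes && echo "queue build"
should_queue_build accepted squid3_3.0-1_amd64.changes || echo "skip binary upload"
```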
Now, the first thing that rebuildd does is download the source for the package. However, we need to update the sources first, since our server doesn’t know there are new files in the repository yet. So edit /etc/rebuildd/rebuilddrc and change
source_cmd = apt-get -q --download-only -t ${d} source ${p}=${v}
to
source_cmd = /srv/reprepro/bin/get_sources ${d} ${p} ${v}
and create /srv/reprepro/bin/get_sources with
#!/bin/bash

d=$1
p=$2
v=$3

sudo aptitude update >/dev/null
apt-get -q --download-only -t ${d} source ${p}=${v}
By this stage we have rebuildd building packages, but we need to make sure they get re-injected back into the repository. We can do this with a post-build script. Edit /etc/rebuildd/rebuilddrc
post_build_cmd = /srv/reprepro/bin/upload_binaries ${d} ${p} ${v} ${a}
and create /srv/reprepro/bin/upload_binaries
#!/bin/bash

d=$1
p=$2
v=$3
a=$4

su -l -c "reprepro -V -b /srv/reprepro include ${d} /var/cache/pbuilder/result/${p}_${v}_${a}.changes" johnf
Now, the su is in there because rebuildd needs to be able to access the GPG passphrase with which to sign the repository. So rather than have a passphrase-less key, we make sure that gpg-agent is running by adding the following to your .profile.
if test -f $HOME/.gpg-agent-info && kill -0 `cut -d: -f 2 $HOME/.gpg-agent-info` 2>/dev/null; then
    GPG_AGENT_INFO=`cat $HOME/.gpg-agent-info`
    export GPG_AGENT_INFO
else
    eval `gpg-agent --daemon`
    echo $GPG_AGENT_INFO >$HOME/.gpg-agent-info
fi

GPG_TTY=`tty`
export GPG_TTY
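The kill -0 in that snippet is the liveness check: signal 0 is never delivered, the exit status just reports whether the PID recorded in .gpg-agent-info still exists, which is how a stale agent file gets detected. A minimal sketch of the idea:

```shell
# kill -0 delivers no signal; its exit status only reports whether the PID exists.
sleep 5 &
pid=$!

kill -0 "$pid" 2>/dev/null && echo "process exists, reuse it"

kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true   # reap it so the PID really is gone

kill -0 "$pid" 2>/dev/null || echo "process gone, would start a new agent"
```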
So that’s it: you now have your own personal PPA. Just in case you had fallen asleep, here is a little script I wrote so you can auto-build the source packages for each release you care about in one go.
#!/bin/bash

set -e

RELEASES="hardy intrepid jaunty"

if [ ! -f debian/changelog ]
then
    echo "This isn't a debian repo"
    exit 1
fi

# Check for changes
if [ `bzr st | wc -l` != "0" ]
then
    echo "You have uncommitted changes!"
    exit 1
fi

if [ -d ../tmpbuild ]
then
    echo "The tmpbuild dir exists"
    exit 1
fi

bzr export ../tmpbuild
cp debian/changelog ../tmpbuild.changelog
cd ../tmpbuild

PACKAGE=`head -1 debian/changelog | awk '{print $1}'`
VERSION=`head -1 debian/changelog | awk '{print $2}' | sed -r -e 's/^\(//;s/\)$//'`

for release in $RELEASES
do
    sed -r -e "1s/\) [^;]+; /~${release}) ${release}; /" ../tmpbuild.changelog > debian/changelog
    head -1 debian/changelog
    dpkg-buildpackage -S -sa
    dput local ../${PACKAGE}_${VERSION}~${release}_source.changes
done

cd ..
rm -rf tmpbuild
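The only clever bit in that script is the sed that rewrites the changelog for each release: it appends ~release to the version and retargets the upload at that release. It can be tried in isolation (the package name and helper name below are made up for the example):

```shell
# Rewrite the first changelog line for a given release: append ~<release>
# to the version and target that release instead of the original suite.
mangle_for_release() {
    release=$1
    sed -r -e "1s/\) [^;]+; /~${release}) ${release}; /"
}

echo 'mypkg (1.0-1) unstable; urgency=low' | mangle_for_release hardy
# -> mypkg (1.0-1~hardy) hardy; urgency=low
```

Because ~ sorts before the empty string in Debian version comparison, 1.0-1~hardy is considered older than a real 1.0-1, which is exactly what you want for backports.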
The above documentation is a bit of a brain dump of what I’ve been working on for the past two days, and I’m sure I’ve left some bits out, so please give me any feedback you have in the comments.