Lyte's Blog

Bad code, bad humour and bad hair.

Git Perms

Have you ever noticed that when git mentions a file it often shows an octal permission mode that doesn’t match the permissions on the file?

No? Try this, from an empty directory:

# Set umask to 0002 (i.e. don't mask out user/group bits)
$ umask 0002
# create a new git repo in the current dir
$ git init .
# create a new file
$ touch foo
# get the octal perms on it
$ stat -c "0%a %n" foo
0664 foo
# add and commit
$ git add foo
$ git commit -m "bar"
[master (root-commit) 51dc083] bar
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 foo

So I just added a file to git with octal perms of 0664, but git reported 100644. The 100 prefix just marks it as a regular file, so that’s safe to ignore – but why has the second-last digit changed?

Here’s something else to try:

# Create a file with every (practical) octal mode
$ for file in 0{4..7}{0..7}{0..7}; do touch $file; chmod $file $file; done
# Stage those files
$ git add 0???
# Look at what perms git has staged:
$ git diff --staged | grep '^new file mode ' | sort | uniq -c
    128 new file mode 100644
    128 new file mode 100755

So git took all 256 different combinations of modes we gave it and stored 100644 half the time and 100755 the other half.

This is because git only looks at the u+x bit (user execute, octal 0100) – it doesn’t care about the rest of the perms.
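A quick way to see this for yourself (assuming git is installed and you’re happy to burn a throwaway directory) is to stage two files that differ only in the u+x bit and inspect the index with git ls-files --stage:

```shell
# stage two files that differ only in the user-execute bit
dir=$(mktemp -d) && cd "$dir"
git init -q .
touch plain exec
chmod 0642 plain   # u+x clear: git should record 100644
chmod 0511 exec    # u+x set:   git should record 100755
git add plain exec
git ls-files --stage
```

Whatever odd group/other bits you pick, the staged modes come back as 100644 and 100755.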

X-Cache and X-Cache-Lookup Headers

I watched a discussion about X-Cache and X-Cache-Lookup headers unfold recently, and it turns out a lot of people who I would have thought knew what these headers indicate were a little muddled up. Furthermore, if you go looking for a good explanation, everyone seems to just link to this rather old blog post – which, despite being well meaning, is unfortunately slightly confused too.

So maybe I can give a better explanation.

First question – where is the spec for X-Cache? Well, there isn’t one – that little X- prefix indicates the header is not part of the spec, so its meaning may vary between proxy implementations (in fact with Varnish it’s common to add it yourself in the config file, so it could mean anything at all).

Why am I seeing these headers?

So with that out of the way I’ll focus on where I think you’re most likely to see them, Squid. Squid is a fairly common caching proxy, both as a forward (outbound) caching proxy and as a reverse (inbound/application accelerator/aggregating) proxy. Squid’s doco also fails to clearly define what the headers do, so if you’ve got here because you’re trying to figure out what they mean, there’s a good chance you’ve got a Squid proxy in the mix (even if you didn’t realise it).

What they mean

So, with Squid what these headers mean is:

  • X-Cache: Did this proxy serve the result from cache (HIT for yes, MISS for no).
  • X-Cache-Lookup: Did the proxy have a cacheable response to the request (HIT for yes, MISS for no).

This means if:

  • both are HITs, you made a cacheable request and the proxy had a cacheable response that matched and it handed it back to you.
  • X-Cache is a MISS and X-Cache-Lookup is a HIT, you made a request that had a cacheable response but you’ve forced a cache bypass – usually achieved by a hard refresh, commonly activated with Ctrl+F5 or by sending the headers Pragma: no-cache (HTTP/1.0) or Cache-Control: no-cache (HTTP/1.1).
  • both are MISSes, you made a request (it doesn’t matter if it was cacheable) and there was no corresponding response object in the proxy cache.

This is completely irrelevant to browser cache – if you’re viewing something from browser cache and inspecting the headers in Chrome or Firebug, you’ll see whatever the status of the proxy was at the time the proxy returned it to your browser. Sorry if this is obvious, but a surprising number of people seemed to think that the browser cares and bothers to modify these headers. It doesn’t. Really, it doesn’t. I promise.

How you can test this for yourself

First, if you’re trying to use a browser to inspect headers, understand what it’s really saying – e.g. in Google Chrome, look for the “(from cache)” next to the Status Code section in the headers. That means nothing went over the network and the browser pulled the object out of its own cache.

All other browsers are left as an exercise for the reader, as frankly a browser is not the right tool for the job (I just figured I had to cover one, since everyone seems to do it this way anyway).

Fire up a bash terminal and get used to curl.

This is what I usually run:

$ curl -sv http://example.com/ 2>&1 > /dev/null | egrep '< (X-Cache|Age)'
< Age: 1
< X-Cache: HIT from example.com
< X-Cache-Lookup: HIT from example.com:80

There’s a lot going on here, so let’s break it down:

  • curl: this is the main thing we’re running,
  • -sv: this enables both --silent and --verbose, which (counterintuitively, I know) is the simplest way to get curl into a mode where it dumps both the body and headers out in a useful format (yes, I know you can use --head, but that changes the request format and invalidates enough testing to make it useless).
  • http://example.com/: our URL of choice.
  • 2>&1: Take STDERR (headers) and redirect it to STDOUT (so we can pipe it to egrep).
  • > /dev/null: Ignore the original STDOUT (body).
  • | egrep: pipe the headers through egrep.
  • '< (X-Cache|Age)': grab just headers starting with X-Cache or Age.

You may note I snuck in that Age header you’ve been ignoring – you can thank me later.

So in this first example we’ve made a cacheable request (absent of Pragma: no-cache or similar headers) and the proxy has had a response from its upstream 1 second ago (Age: 1) that was cacheable.

Now that we have a basic command, let’s fiddle with it to see what happens.

If we request the same thing, we should see the Age header count up by roughly the number of seconds since the start of either request:

$ curl -sv http://example.com/ 2>&1 > /dev/null | egrep '< (X-Cache|Age)'
< Age: 5
< X-Cache: HIT from example.com
< X-Cache-Lookup: HIT from example.com:80

If we request the same thing but with a hard refresh header (note: Squid and probably every other caching proxy ever can be configured to ignore these headers on the request side, so if you get different results here, 9 times out of 10 that’s why), we can see that the proxy had the object in cache, but didn’t use it:

$ curl -sv --header "Pragma: no-cache" http://example.com/ 2>&1 > /dev/null | egrep '< (X-Cache|Age)'
< X-Cache: MISS from example.com
< X-Cache-Lookup: HIT from example.com:80

If we request something we don’t expect the proxy to have seen before, the proxy neither has it in cache nor serves it from cache:

$ curl -sv http://example.com/?"$(date -Ins)" 2>&1 > /dev/null | egrep '< (X-Cache|Age)'
< X-Cache: MISS from example.com
< X-Cache-Lookup: MISS from example.com:80

What the Hell PHP?

Sometimes I feel like I should have an entire section on my blog dedicated solely to PHP’s maniacal insanity, just so I’d have a place to record whatever crazy new thing I learn about PHP on any day I work with it. Any day I work with it. Ever.

That said, this one particularly annoyed me.

PHP caches stat information. Why? I dunno really – the VFS caches stat information too, and you’d think it’d do a better job, given that people other than PHP core developers designed it (so maybe some of them weren’t drunk at the time). Plus it’s actually the right layer: it can see not only file system modifications made by the PHP you’re currently writing, but also all that other PHP you’ve written, the stuff Mr Jones wrote, and even that strange bit of code that’s not PHP. So the VFS could actually cache the right things – instead the PHP core devs thought it made sense to cache it inside a PHP process. But what the hell, let’s leave OS problems to application developers and just move on.

So, back on topic: PHP caches stat information, or to put it another way, PHP has a stat cache (do you see what I did there?). Stat information is all those boring statistics you get about a file when you run stat <file>, but really it’s so much more. It’s most of the data about a file that isn’t actually the file itself.

Great, so PHP caches stat information. That must make things faster or something? Possibly, but it also makes it a massive hot squishy steaming pile of lies. Because any interaction with a file’s stat info is cached, if a file you care about is modified in some way outside of PHP, you now have to constantly call clearstatcache() so that when you check anything about the file you can be confident PHP isn’t lying to you.

Did I mention PHP has a stat cache? A-packed-full-of-lies-but-at-least-it’s-faster-than-telling-you-the-truth-except-now-you-have-to-slow-things-down-by-constantly-clearing-it-stat-cache.

Somewhere about here you learn to deal with it, you think to yourself “it’s ok, I’ll just call clearstatcache() everywhere, it’s fine, it’s just a stupid cache that I can’t turn off, so I may as well deal with it, maybe I’ll write a Vim macro to insert clearstatcache() calls in front of any file system operations, that’ll fix it, it’ll be almost like PHP is a real language”.

Then, because you haven’t written that Vim macro (it’s a stupid idea any way) and you haven’t managed to turn off the broken stat cache (because you can’t) you run in to what must surely be a bug – even PHP core devs couldn’t consider this a feature, surely? You change ownership of a file using the built in chown() function, it’s a glorious thing, at first it’s owned by one user, then it’s owned by another user, cheers go up, standing ovations, it’s possible something built in to PHP actually works as designed. Cool. You go to check this using a PHP function (because like all PHP devs you totally do TDD and as such need to check such things) and because you updated the ownership information using a PHP function obviously the cache that PHP manages should be marked dirty and you’ll get back an accurate result right?

No.

What?

No, you don’t.

I’m going to say it again… What?

Ok it must be a bug.

Oh, no, you see, actually, it’s the first fscking example on the manual page.

Hmmmm….

Did I mention, PHP has a stat cache?

Spying With PHPUnit

Trying to spy on invocations with PHPUnit seems to normally involve either writing your own spy class:

class IAmASpy {
    public $invocations = array();
    public function foo() {
        $this->invocations[] = 'foo';
    }
}

or trying to use execution checks on mock objects to determine that things were called with the right arguments:

$mock = $this->getMock('Foo');
$mock->expects($this->once())
    ->method('bar')
    ->with($this->identicalTo('baz'));

What A Pain!

What if you want to check the arguments going into the last call? Well, you can use at():

$mock->expects($this->at(7)) // ...

… better hope we never add any other calls!

What if we don’t know the exact parameter it’s being called with and want to check it with something more complex? Well, if you dig really hard in the manual you’ll find there’s a whole bunch of assertions that let you feed in crazier stuff like:

// ...
->with($this->matchesRegularExpression(
    '/Oh how I love (regex|Regular Expressions)/'
));

So that’s pretty cool, if you happen to like really obscure features that are impossible to remember.

Surely there’s a better way? Think of the children!

What if you could just ask for all the invocations and test that they were right in that language you’re already using for all your production logic? Wouldn’t that be just dandy!

Turns out you can, but it’s hiding – and I don’t mean it’s hiding in a “you will find this if you read the manual” kind of way, I mean it’s hiding in the source code, where everyone totally looks first for easy examples right?

All you have to do is store the result of $this->any() and you can use it as a spy:

$mock->expects($spy = $this->any())
    ->method('foo');

(I’ve got to wonder if documenting those extra 7 characters might be the proverbial straw that breaks the PHPUnit manual’s back.)

Now that you have a spy, you can just do normal stuff that calls it, then use normal PHP logic (I had to laugh when I wrote “normal PHP logic”) to confirm it’s right:

// get the last invocation (end() needs a real variable, not a function result)
$invocations = $spy->getInvocations();
$invocation = end($invocations);
$this->assertEquals('foo', $invocation->parameters[0]);

An Example You Say?

As a concrete example, let’s ensure the NSA is spying on its citizens just the right amount.

<?php
// What we're testing today
class AverageCitizen {
    public function spyOn() {}
}

// Our tests (yes, normally these would be in some other file)
class TestAverageCitizens extends PHPUnit_Framework_TestCase {
    public function testSpyingLikeTheNSAShould() {
        $citizen = $this->getMock('AverageCitizen');
        $citizen->expects($spy = $this->any())
            ->method('spyOn');

        $citizen->spyOn("foo");

        $invocations = $spy->getInvocations();

        $this->assertEquals(1, count($invocations));

        // we can easily check specific arguments too
        $last = end($invocations);
        $this->assertEquals("foo", $last->parameters[0]);
    }

    public function testSpyingLikeTheNSADoes() {
        $citizen = $this->getMock('AverageCitizen');
        $citizen->expects($spy = $this->any())
            ->method('spyOn');

        $citizen->spyOn("foo");
        $citizen->spyOn("bar");

        $invocations = $spy->getInvocations();

        $this->assertEquals(1, count($invocations));
    }
}
?>

and when we run the tests we can see that even PHPUnit knows the NSA has crossed the line:

$ phpunit --debug test.php 
PHPUnit 3.6.10 by Sebastian Bergmann.


Starting test 'TestAverageCitizens::testSpyingLikeTheNSAShould'.
.
Starting test 'TestAverageCitizens::testSpyingLikeTheNSADoes'.
F

Time: 0 seconds, Memory: 3.25Mb

There was 1 failure:

1) TestAverageCitizens::testSpyingLikeTheNSADoes
Failed asserting that 2 matches expected 1.

/i/be/a/coder/test.php:35

FAILURES!
Tests: 2, Assertions: 4, Failures: 1

Keep Cron Simple Stupid

Someone in our Sys Admin chat at work yesterday brought up that crontab and % are insane; the summary was:

If you want the % character in a command, as part of a cronjob:
1. You escape the %, so it becomes \%
2. echo and pipe the command you want to run (with the escaped %) into sed
3. Have sed unescape the %
4. Pipe it into the original program

Which I responded to with roughly “if you want a ‘%’ in your cron line you actually want a shell script instead”… this turned out to be a great debate, as a lot of very good Sys Admins (who were online at the time) completely disagreed with me until I’d spelled out my argument in more detail.

Problems with cron

  • The syntax can be quite insane if you’re expecting it to behave like shell (hint: cron != shell)
  • There’s no widely used crontab linter (I was going to leave it at “there’s no crontab linter”, but found chkcrontab while writing this, which looks like a good start but isn’t packaged for any distro I’ve checked yet)
  • Badly breaking syntax in a crontab file will cause all jobs in that file to stop running (usually with no error recorded anywhere)
  • Unless you’re double entering your scheduling information, you’re not going to be able to pick up the absence of the job in your monitoring solution when it fails to run
  • I’m stupid (yes this is a problem with cron)

All of these have led me to break cron lots of times and even more times I’ve had to try to figure out why a scheduled job isn’t running after someone else has broken it for me. Happy days.

KISS

Whenever I’m breaking something fairly critical too often for comfort, it’s time to Keep It Simple, Stupid – and the way I’ve tried to do that with cron is to never, ever put anything complicated on a cron line.

Let’s take a simple example:

* * * * * echo % some % percents % for % you %

Intuitively I’d just expect that to do what it does in a shell (echo the string back, which from cron would normally make it back to someone in email form). Instead, the first % will start STDIN for the command and the remaining %s will be changed to newlines, so you end up with an echo statement that just echoes a single newline back out to cron, as echo isn’t interested in the STDIN fed to it.
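To make the mangling concrete without waiting on cron, here’s a rough simulation of the documented rule in plain bash (this imitates cron, it is not cron itself): everything before the first unescaped % is the command, everything after it becomes STDIN with the remaining %s turned into newlines:

```shell
line='echo % some % percents % for % you %'
cmd=${line%%\%*}            # command: everything before the first %
stdin=${line#*\%}           # stdin: everything after it
stdin=${stdin//\%/$'\n'}    # remaining %s become newlines
printf '%s' "$stdin" | eval "$cmd"   # echo ignores stdin: one blank line out
```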

This creates a testing problem, because now to test the behaviour of the cron line I need to wait for cron to run it (there’s no way to immediately confirm the validity of the line).

If we instead place the behaviour we want in a script:

#!/bin/bash -e
echo % some % percents % for % you %

and call that from cron:

* * * * * /path/to/script

You can be reasonably confident that it’ll do exactly the same thing when cron runs it as when you test it on the terminal.

But % is ok when it’s simple

Some people tried to make the argument that a % is really ok when it’s actually really simple, e.g.:

* * * * * date +\%Y\%m\%d_\%H\%M\%S > /tmp/test

happens to work the same if you copy it to a terminal, because the %s are escaped in the cron line and the escaping happens to drop off in the shell as well – but what if you want a quoted %? You’re stuffed.

Back to KISS again.

Other reasons to keep cron simple

If you’re editing cron via crontab -e it’s far too easy to wipe out your crontab file.

While this is mostly an argument for backups, if you keep your cron files simple it may not matter as much when they get nuked accidentally as now you’ve only lost scheduling information and not critical syntax :)

Summary

If I’m not 100% certain I can copy a line out of cron and run it on the terminal, then I don’t think it belongs in cron.

Better XML Support in PHP

XML support in PHP is actually pretty good these days, but as with anything in PHP (why is that?) it has a few little quirks and corner cases that provide for continual facepalm moments.

Rather than just sit around and complain, or try to get stuff into core (where there’s no way I’d be able to use it in real world projects until RHEL catches up, i.e. 3-4 years from now), I thought I’d see what I could do purely in PHP.

Turns out it’s quite a lot, so it’s up on Github: https://github.com/neerolyte/php-lyte-xml#readme

So if you have to deal with XML in PHP fairly often, consider taking it for a spin.

Git Stash That Won’t Make You Hate Yourself in the Morning

Git has a feature called stash that lets you drop whatever you’re working on and put the working directory back to a clean state, without having to commit or lose whatever deltas you had.

This is a great idea, but it’s sorely missing one core feature for anyone who works on more than one machine: the ability to synchronise stashes between machines. So if you’re like me (I work on the same code on up to about 4 individual machines in a week) you probably want some way to move stashes around.

So I’ve started git-rstash. As usual it’s written in terrible bash, in the hope that someone will take enough offence at it to take the whole problem off my hands; in the meantime maybe you’ll find it useful too.

For the moment synchronising them is purely up to the user, but they are conveniently placed where the user can drop them in whatever cloud-syncy-like thing they’re already using (Unison, Ubuntu One, Dropbox, etc).
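If you’re curious what the underlying plumbing looks like: a stash is just a commit, so each one can be exported as a plain patch file and dropped into whatever directory your sync tool watches (~/stashes below is a made-up path, and this is a rough sketch of the idea, not how git-rstash actually does it):

```shell
# export each stash as a patch into a (hypothetical) synced directory
mkdir -p ~/stashes
count=$(git stash list | wc -l)
for ((i = 0; i < count; i++)); do
  git stash show -p "stash@{$i}" > ~/stashes/stash-$i.patch
done
# on another machine, replay one with:
#   git apply ~/stashes/stash-0.patch
```

Note that git stash show -p only captures the tracked-file diff against the stash’s parent, so this loses untracked files and index state – fine for a sketch, not a full replacement.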

Too Many Ways to Base64 in Bash

I find myself writing little functions to paste into terminals to provide stream handlers quite often, like:

base64_decode() {
  php -r 'echo base64_decode(stream_get_contents(STDIN));'
}
base64_encode() {
  php -r 'echo base64_encode(stream_get_contents(STDIN));'
}

Which can be used to encode or decode base64 strings in a stream, e.g.:

$ echo foo | base64_encode
Zm9vCg==
$ echo Zm9vCg== | base64_decode
foo

which is fun, but I wanted to make it a little more portable, so let’s try a few more languages…

# ruby
base64_encode() { ruby -e "require 'base64'; puts Base64.encode64(ARGF.read)"; }
base64_decode() { ruby -e "require 'base64'; puts Base64.decode64(ARGF.read)"; }
# python
base64_encode() { python -c 'import base64, sys; sys.stdout.write(base64.b64encode(sys.stdin.read()))'; }
base64_decode() { python -c 'import base64, sys; sys.stdout.write(base64.b64decode(sys.stdin.read()))'; }
# perl
base64_encode() { perl -e 'use MIME::Base64; print encode_base64(<STDIN>);'; }
base64_decode() { perl -e 'use MIME::Base64; print decode_base64(<STDIN>);'; }
# openssl
base64_encode() { openssl enc -base64; }
base64_decode() { openssl enc -d -base64; }

and now to wrap them all under something that picks whichever seems to be available:

base64_php() { php -r "echo base64_$1(stream_get_contents(STDIN));"; }
base64_ruby() { ruby -e "require 'base64'; puts Base64.${1}64(ARGF.read)"; }
base64_perl() { perl -e "use MIME::Base64; print ${1}_base64(<STDIN>);"; }
base64_python() { python -c "import base64, sys; sys.stdout.write(base64.b64$1(sys.stdin.read()))"; }
base64_openssl() { openssl enc $([[ $1 == encode ]] || echo -d) -base64; }
base64_choose() {
  for lang in openssl perl python ruby php; do
    if [[ $(type -t "$lang") == 'file' ]]; then
      "base64_$lang" "$1"
      return
    fi
  done
  echo 'ERROR: No suitable language found' >&2
  return 1
}
base64_encode() { base64_choose encode; }
base64_decode() { base64_choose decode; }

Great, now I can quickly grab some base64 commands on any box I’m likely to be working on in the foreseeable future.
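One more backend worth having in the pile: the base64 binary from coreutils is already on most Linux boxes (flag spelling varies off Linux – e.g. older macOS wants -D rather than -d to decode), so it could be slotted in as the simplest option of the lot:

```shell
# coreutils backend: encodes by default, -d to decode
base64_coreutils() { base64 $([[ $1 == encode ]] || echo -d); }

echo foo | base64_coreutils encode     # Zm9vCg==
echo Zm9vCg== | base64_coreutils decode   # foo
```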

Applying Settings to All Vagrant VMs

After upgrading from Ubuntu 12.04 (“Precise Pangolin”) to 12.10 (“Quantal Quetzal”) I lost the DNS resolver that VirtualBox normally provides on 10.0.2.3, meaning I couldn’t boot my Vagrant VMs.

Finding an answer that worked around it for individual VMs, I wanted something that worked for all VMs on the laptop (at least until I can fix the actually-broken DNS resolver) – and I’ve found one.

As per http://docs-v1.vagrantup.com/v1/docs/vagrantfile.html, it’s possible to specify additional parameters to Vagrant in ~/.vagrant.d/Vagrantfile – but it’s not exactly clear how. Turns out you can just place them in a normal config block like so:

Vagrant::Config.run do |config|
    config.vm.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end

Before applying the fix I was getting hangs at “Waiting for VM to boot” like so:

$ vagrant up
[default] VM already created. Booting if it's not already running...
[default] Clearing any previously set forwarded ports...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.

after applying the fix, it continues on:

...
[default] Waiting for VM to boot. This can take a few minutes.
[default] VM booted and ready for use!
[default] Configuring and enabling network interfaces...
[default] Setting host name...
[default] Mounting shared folders...
[default] -- v-root: /vagrant

Edit: I just thought I’d add that the config has changed slightly with Vagrant 1.1+, so you now need a “v2 config block” (see: http://docs.vagrantup.com/v2/virtualbox/configuration.html):

Vagrant.configure('2') do |config|
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  end
end

Deleting an Apt-snapshot Btrfs Subvolume Is Hard

This took me a while to figure out, so I thought I’d write it up.

If for whatever reason a do-release-upgrade of Ubuntu fails part way through (say, because you’ve used 75% of metadata and btrfs doesn’t seem to fail gracefully) and you’ve got a working apt-snapshot, then after you get everything back in order it will have left behind a snapshot that you no longer want:

root@foo:/# btrfs subvol list /
ID 256 top level 5 path @
ID 257 top level 5 path @home
ID 259 top level 5 path @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28

So you think to yourself, “ok I can just delete it now”:

root@foo:/# btrfs subvol delete @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
ERROR: error accessing '@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'
root@foo:/# btrfs subvol delete /@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
ERROR: error accessing '/@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'

… hmm the obvious things don’t work.

Turns out that when a subvolume (in this case “@”) is mounted, the snapshot subvolumes aren’t mounted anywhere, and you actually have to give the delete command a path where the snapshot is visible as a subvolume (not a direct mount of it).

Because that sentence made no sense, here’s an example. This doesn’t work:

root@foo:~# mkdir /@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
root@foo:~# mount -t btrfs -o subvol=@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28 /dev/mapper/foo-root /@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
root@foo:~# btrfs subvol delete @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
ERROR: error accessing '@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'

because I’ve mounted the subvolume I want to delete, and I’m handing the top of a filesystem to the subvolume delete command.

Instead, here’s what does work (even with the FS already mounted on /). First, create somewhere to mount it:

root@foo:/# mkdir /mnt/tmp

Mount it:

root@foo:/# mount /dev/mapper/foo-root /mnt/tmp

Show that the subvolumes are all available as directories under the mount point:

root@foo:/# ls -l /mnt/tmp/
total 0
drwxr-xr-x 1 root root 292 Mar  8 10:26 @
drwxr-xr-x 1 root root 240 Feb 21 08:31 @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
drwxr-xr-x 1 root root  14 Mar  8 10:24 @home

Delete it:

root@foo:/# btrfs subvol delete /mnt/tmp/@apt-snapshot-release-upgrade-quantal-2013-03-07_21\:34\:28
Delete subvolume '/mnt/tmp/@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'

Hooray! It didn’t error this time, and it actually worked:

root@foo:/# btrfs subvol list /
ID 256 top level 5 path @
ID 257 top level 5 path @home

Clean up:

root@foo:/# umount /mnt/tmp