Lyte's Blog

Bad code, bad humour and bad hair.

Deleting an Apt-snapshot Btrfs Subvolume Is Hard

This took me a while to figure out, so I thought I’d write it up.

If for whatever reason a do-release-upgrade of Ubuntu fails part way through (say because you’ve used 75% of your metadata and btrfs doesn’t seem to fail gracefully) and you’ve got a working apt-snapshot, then after you get everything back in order it will have left behind a snapshot that you no longer want:

root@foo:/# btrfs subvol list /
ID 256 top level 5 path @
ID 257 top level 5 path @home
ID 259 top level 5 path @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28

So you think to yourself, “ok I can just delete it now”:

root@foo:/# btrfs subvol delete @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
ERROR: error accessing '@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'
root@foo:/# btrfs subvol delete /@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
ERROR: error accessing '/@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'

… hmm the obvious things don’t work.

It turns out that when a subvolume (in this case “@”) is mounted, the snapshot subvolumes aren’t mounted anywhere, and the delete command actually needs a path where the snapshot is visible as a subvolume inside a mounted filesystem (not a path where it’s directly mounted).

Because that sentence probably made no sense, here’s an example. This doesn’t work:

root@foo:~# mkdir /@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
root@foo:~# mount -t btrfs -o subvol=@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28 /dev/mapper/foo-root /@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
root@foo:~# btrfs subvol delete @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
ERROR: error accessing '@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'

because I’ve mounted the subvolume I want to delete directly, so I’m handing the top of a mounted filesystem to the subvolume delete command rather than a subvolume within one.

Instead, here’s what does work (even with the FS already mounted on /). First, create somewhere to mount it:

root@foo:/# mkdir /mnt/tmp

Mount it:

root@foo:/# mount /dev/mapper/foo-root /mnt/tmp

Show that the subvolumes are all available as directories under the mount point:

root@foo:/# ls -l /mnt/tmp/
total 0
drwxr-xr-x 1 root root 292 Mar  8 10:26 @
drwxr-xr-x 1 root root 240 Feb 21 08:31 @apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28
drwxr-xr-x 1 root root  14 Mar  8 10:24 @home

Delete it:

root@foo:/# btrfs subvol delete /mnt/tmp/@apt-snapshot-release-upgrade-quantal-2013-03-07_21\:34\:28
Delete subvolume '/mnt/tmp/@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'

Hooray! It didn’t error this time, and it actually worked:

root@foo:/# btrfs subvol list /
ID 256 top level 5 path @
ID 257 top level 5 path @home

Clean up:

root@foo:/# umount /mnt/tmp
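If mounting the device with no options happens to land you inside @ rather than at the top level (for example because the filesystem’s default subvolume has been changed), you can ask for the top-level tree explicitly. A minimal sketch, assuming the same device and snapshot name as above:

root@foo:/# mount -t btrfs -o subvolid=5 /dev/mapper/foo-root /mnt/tmp
root@foo:/# btrfs subvol delete '/mnt/tmp/@apt-snapshot-release-upgrade-quantal-2013-03-07_21:34:28'
root@foo:/# umount /mnt/tmp

Subvolume id 5 is always the root of the btrfs subvolume hierarchy, so this mount shows every subvolume as a directory regardless of what the default subvolume is set to.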

New and Simplified License From Samsung Kies

I’m sure something’s wrong here, but can’t quite put my finger on what…

Blank license agreement from Kies

… must be something to do with having to fire up a Windows VM to talk to a Linux based phone.

VM Build Automation With Vagrant and VeeWee

So I’ve finally done my second talk at a LUG (Melbourne Linux Users Group to be precise). I mainly put my hand up to do a talk because we were short this month and I wanted some practice. I decided to go the route of trying to generate a lot of discussion and demoing some of the stuff, rather than preparing and running through a linear set of slides. All in all, it seemed to go fairly well.

I decided to talk about Vagrant and VeeWee. I don’t claim to be any kind of expert, but I have been tinkering and thought maybe I could stir a little more local interest in these types of automation tools.

Audio

Talk audio is available (50MB OGG).

For the most part the videos below were what was demoed during the talk, but there are some questions discussed during the talk that may be helpful (and some that simply will not make sense if you weren’t there :p).

Videos

First up I go through a standard Vagrant init process. Vagrant is already installed, but otherwise it just demos the minimum number of steps to get a VM up and running, then running something dubious from the internets and returning the system to a clean state.
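For reference, that minimal workflow boils down to something like the sketch below (the box name and URL are just the stock Vagrant examples of the time, not necessarily what’s in the video):

$ vagrant init precise64 http://files.vagrantup.com/precise64.box
$ vagrant up
$ vagrant ssh
# ... run something dubious inside the VM ...
$ vagrant destroy -f    # throw the VM away and return to a clean state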

apt-cacher-ng demo – shows what should now be the standard steps for trialling something with Vagrant (git clone, vagrant up). In this case apt-cacher-ng, which can be used to speed up system building by caching repositories of various types when you are rebuilding similar machines quite often (https://github.com/neerolyte/mlug-vm-demos):
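In sketch form, that “git clone, vagrant up” workflow is roughly this (the subdirectory name inside the repo is an assumption; cd to wherever the apt-cacher-ng Vagrantfile actually lives):

$ git clone https://github.com/neerolyte/mlug-vm-demos
$ cd mlug-vm-demos/apt-cacher-ng
$ vagrant up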

Spree Install – a nice complex rails app that grabs things off all sorts of places on the internet, sped up slightly by using the apt-cacher-ng proxy above, but of course with Vagrant we still just “vagrant up” (https://github.com/neerolyte/mlug-vm-demos):

VeeWee build of a Vagrant basebox with a wrapper script (https://github.com/neerolyte/lyte-vagrant-boxes):

Resources

Dropping Repoproxy Development for Apt-cacher-ng

Initially I started writing repoproxy because I thought there were a bunch of features I needed that simply couldn’t be configured with apt-cacher-ng (ACNG); it turns out I was wrong.

The reasons I had (broadly) for starting repoproxy:

- shared caches - multiple caches on a LAN should share with each other somehow

- CentOS - CentOS uses yum, how could it possibly be supported by ACNG?

- roaming clients - e.g. laptops that can't always sit behind the same static instance of ACNG

- NodeJS - I wanted to learn NodeJS; I've learnt a bit now and don't feel a huge desire to complete this project given I've figured out how to get most of the other features working neatly

Below I cover how to achieve all of the above with ACNG, as it really wasn’t obvious to me, so I suspect it might be useful to others too.

Shared Caches

I run a lot of VMs on my laptop, but also have other clients in the home and work networks that could benefit from having a shared repository cache. For me a good balance is having static ACNG servers at both home and work, but also having ACNG deployed to my laptop so that the VMs can be updated, reimaged or get new packages without chewing through my mobile bandwidth.

This is actually natively supported by ACNG; it’s just a matter of putting a proxy line in /etc/apt-cacher-ng/acng.conf like so:

Proxy: http://mirror.lyte:3142/

Then it’s just a matter of telling apt to use a localhost proxy anywhere that ACNG is installed:

echo 'Acquire::http { Proxy "http://localhost:3142/"; };' > /etc/apt/apt.conf.d/01proxy

This allows VMs on my laptop to have a portable repository cache when I’m not on a normal network, but also allows me to benefit from cache others generate and vice versa.

I’d like to at some point have trusted roaming clients (i.e. only my laptop) publish captured ACNG cache back to static ACNG cache servers. I’m pretty sure I can achieve this using some if-up.d trickery combined with rsync, but I haven’t tried yet.
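As a rough, untested sketch of that idea (the hostname is from the proxy example above and /var/cache/apt-cacher-ng is the default cache directory on Debian/Ubuntu, but treat both as assumptions for your environment):

#!/bin/bash
# /etc/network/if-up.d/acng-cache-push (untested sketch)
# If the static cache server is reachable, push anything the laptop has
# cached that the server doesn't already have.
if ping -c 1 mirror.lyte > /dev/null 2>&1; then
        rsync -a --ignore-existing /var/cache/apt-cacher-ng/ \
                backup@mirror.lyte:/var/cache/apt-cacher-ng/
fi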

I had considered trying to add something more generic to repoproxy that did cache discovery and then possibly ICP (Internet Cache Protocol) to share cache objects between nodes on the same LAN, but there are some generalised security issues I can’t come up with a good solution for. For example, if my laptop is connected to a VPN and another node discovers it, how do I sensibly ensure that node can’t get anything via the VPN without making the configuration overly obtuse?

It seems like trying to implement self discovery would either involve a lot of excess configuration on participating nodes, or leave gaping security holes, so for the moment I’m keeping it simple.

CentOS

I use CentOS and Scientific Linux sometimes and I’d like their repos to be cached too.

I had originally falsely assumed this would simply be impossible, but I read somewhere that there was at least a little support.

In my testing some things didn’t work out of the box, but could be worked around.

Essentially it seems like ACNG treats most files as one of volatile, persistent or force-cached, and it’s just a matter of tweaking URL-based regexes so it understands any repositories you want it to work with.
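For example, something along these lines in /etc/apt-cacher-ng/acng.conf is the sort of tweak involved; the mirror list file and the regex are illustrative assumptions rather than a tested config:

# Map CentOS mirror URLs into one cache namespace (centos_mirrors is an
# assumed file listing one mirror URL per line):
Remap-centos: file:centos_mirrors /centos
# Treat yum's repository metadata as volatile so it gets revalidated
# instead of being served stale from the cache:
VfilePatternEx: .*/repodata/repomd\.xml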

Roaming Clients

When I move my laptop between home and work or on to a public wifi point I want apt to “just work”; I don’t want to have to remember to alter some config each time.

I found two methods described on help.ubuntu.com that were sort of on the right track, but running a cron job every minute that generates network traffic when my network is mostly stable seems like a bad idea, as does having to reboot to pick up the new settings (especially as Network Manager won’t bring wireless up until after I log in, so it wouldn’t even work for my use case).

NB: I’ve already gone back and added my event driven method to the help.ubuntu.com article.

I suspect it would be possible to utilise ACNG’s hook functionality:

8.9 How to execute commands before and after going online? It is possible to configure custom commands which are executed before the internet connection attempt and after a certain period after closing the connection. The commands are bound to a remapping configuration and the config file is named after the name of that remapping config, like debrep.hooks for Remap-debrep. See section 4.3.2, conf/.hooks and /usr/share/doc/apt-cacher-ng/examples/.hooks files for details.

I couldn’t immediately bend this to my will, so I decided to go down a route I already understood.

I decided to use if-up.d to reset ACNG’s config every time there was a new interface brought online, this allows for an event-driven update of the upstream proxy rather than relying on polling intermittently or a reboot of the laptop.

Create a new file /etc/network/if-up.d/apt-cacher-ng-reset-proxy and put the following script in it:

#!/bin/bash

# list of hosts that the proxy might be running on
hosts=(
        acng.on.my.home.network
        acng.on.my.work.network
        acng.at.my.friends.place
)

set_host() {
        host=$1
        line="Proxy: http://$host:3142/"
        if [[ -z $host ]]; then
                line='# Proxy: disabled because none are contactable'
        fi

        # adjust ACNG configuration to use the supplied proxy
        sed -i -r "s%^\s*(#|)\s*Proxy: .*$%$line%g" \
                /etc/apt-cacher-ng/acng.conf

        # if apt-cacher-ng is running
        if service apt-cacher-ng status > /dev/null 2>&1; then
                # restart it to take hold of the new config
                service apt-cacher-ng restart
        fi
        exit 0
}

try_host() {
        host=$1
        # if we can get to the supplied host
        if ping -c 1 "$host" > /dev/null 2>&1; then
                # tell ACNG to use it
                set_host "$host"
        fi
}

# Run through all possible ACNG hosts, trying them one at a time
for host in "${hosts[@]}"; do
        try_host "$host"
done

# no proxies found, unset upstream proxy (i.e. we connect straight to the internet)
set_host

Make sure to adjust the script for your environment and make it executable with:

chmod +x /etc/network/if-up.d/apt-cacher-ng-reset-proxy
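To check it behaves before waiting for a real interface event, you can just run the hook by hand and see which upstream (if any) it picked:

/etc/network/if-up.d/apt-cacher-ng-reset-proxy
grep 'Proxy:' /etc/apt-cacher-ng/acng.conf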

Moving My Butter (Btrfs)

I recently needed to migrate a btrfs mount to a new physical device and I wasn’t immediately finding the answer with Google (shock horror!), so when I figured out the commands I thought I’d share them here.

So, um, before doing anything dangerous to a drive you’ve taken a backup, right?

Moving on…

I chose to use a process that let me do a live migration underneath the file system while there was still data being written to the file system… you know, for fun.

Adding in the new device is as simple as:

$ btrfs device add $new_device $existing_mount_point

Which is quick! Suddenly you have a whole bunch more space available.

Removing the old device takes a lot longer but the command is just as simple:

$ btrfs device delete $old_device $existing_mount_point

To get a vague idea of progress, watch:

$ btrfs-show
Label: none  uuid: 05d0397c-2dfc-4c2b-a508-22bf23099776
        Total devices 2 FS bytes used 682.24GB
        devid    1 size 931.48GB used 688.29GB path /dev/sdb1
        devid    2 size 1.82TB used 13.00GB path /dev/sdd

Btrfs Btrfs v0.19

In this case /dev/sdb1 is the old drive and /dev/sdd is the new drive. The usage of the new drive will gradually grow until it catches up with the old one. In the capture above it’s about 2% (13.00GB / 688.29GB) complete.
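If you’d rather not keep re-running it by hand, wrapping it in watch works fine. On current btrfs-progs the equivalent of btrfs-show is “btrfs filesystem show”:

$ watch -n 30 btrfs filesystem show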

Eventually when it completes /dev/sdb1 no longer shows in the device list:

$ btrfs-show
Label: none  uuid: 05d0397c-2dfc-4c2b-a508-22bf23099776
        Total devices 1 FS bytes used 682.04GB
        devid    2 size 1.82TB used 686.52GB path /dev/sdd

Btrfs Btrfs v0.19

SSH Agent Forwarding Is a Bug

At the very best Agent Forwarding is a feature that I want to make sure no one ever uses again… why?

It’s dangerous – you’re placing a socket that will happily answer cryptographic challenges that it shouldn’t be in the hands of unknown parties.

It’s only marginally easier than the alternative.

So what is it?

Well as usual the man page is fairly helpful here:

 -A      Enables forwarding of the authentication agent connection.  This can also
         be specified on a per-host basis in a configuration file.

         Agent forwarding should be enabled with caution.  Users with the ability
         to bypass file permissions on the remote host (for the agent's
         UNIX-domain socket) can access the local agent through the forwarded con‐
         nection.  An attacker cannot obtain key material from the agent, however
         they can perform operations on the keys that enable them to authenticate
         using the identities loaded into the agent.

The normal reason why you would want to forward your agent is because you’re connecting to some server via some other server (usually because a direct connection isn’t available).

The diagram below shows a fairly standard scenario where a DB server can’t be accessed via the public internet, but can be indirectly accessed via the web server:

What a lot of people do at this point is forward their key with something like:

yourlaptop$ ssh -A you@webserver
webserver$ ssh you@dbserver
dbserver$

Which is relatively simple (and can be simplified further by turning on authentication agent forwarding in ssh_config), but it’s terrible unless you completely trust the security of your web server (please don’t do that).

Let’s look at why it’s so terrible:

  1. Anyone else with sufficient access to the web server can use your forwarded agent to authenticate to anything you have access to.
  2. Most web servers have known vulnerabilities that you probably haven’t patched for yet.
  3. 1 + 2 = terrible

To be clear, I’m not saying that we’re forwarding actual bits of key material to an untrustworthy host, just that we’re forwarding a connection to an agent that is able to answer authentication challenges for other hosts you may have access to. In practice, one way attackers might use this is: if they can see the environment variables your SSH connection came in with (because they have gained access to your account or to root), they can read the SSH_AUTH_SOCK environment variable, add it to their own environment and use your authentication agent to initiate authenticated connections to other hosts while you are still connected to the untrustworthy host.
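To make that concrete, here’s roughly what it looks like from the attacker’s side once they have root on the web server (the socket path is illustrative; forwarded agent sockets live under /tmp/ssh-*):

attacker@webserver$ ls /tmp/ssh-*/agent.*
attacker@webserver$ export SSH_AUTH_SOCK=/tmp/ssh-AbCd1234/agent.12345
attacker@webserver$ ssh you@dbserver
dbserver$

No key material ever leaves your laptop, but while your session is up the attacker can authenticate as you anyway.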

So what’s the alternative?

ProxyCommand + nc.

ProxyCommand is a (seemingly little-known) directive you can give to SSH to say “hey, before you connect to where I’ve told you to, fire up a socket to something arbitrary first”. “nc” (or netcat) is a tool that gives you an arbitrary connection.

Here’s an example (continuing on from above):

yourlaptop$ ssh -o 'ProxyCommand ssh you@webserver nc %h %p' you@dbserver
dbserver$

What this does is connect to webserver, authenticate using your key as you, and then run “nc”, substituting in the hostname and port of the dbserver as it goes. This returns a socket (running over the encrypted SSH connection) that connects directly to the dbserver. SSH will then connect to that socket as you and authenticate with your key, but note that the encryption between you and the dbserver is end-to-end, so you don’t need to make your authentication agent available in any potentially untrustworthy environments.

Ok that’s great and all but it’s too much to type

Put it in your ~/.ssh/config file, for the example above something like this should work:

Host dbserver
ProxyCommand ssh you@webserver nc %h %p

Once that’s in place you can just run:

yourlaptop$ ssh you@dbserver
dbserver$

What about a range of hosts?

The only real thing to watch out for here is that if you match a range of hosts and the host you’re routing via is in that matched range, SSH will quite happily fork bomb your machine:

# Be very careful to disable ProxyCommand on the host you are routing via or you will fork bomb yourself.
Host gateway.difficult.to.get.to
ProxyCommand none
Host *.difficult.to.get.to
ProxyCommand ssh you@gateway.difficult.to.get.to nc %h %p

What if I have to jump via multiple hosts?

Well firstly, redesign your network.

If for some reason that’s not a realistic option, you can just stack hops together:

Host hop2
ProxyCommand ssh you@hop1 nc %h %p
Host hop3
ProxyCommand ssh you@hop2 nc %h %p
Host hop4
ProxyCommand ssh you@hop3 nc %h %p
...

But it’s so slow…

In some circumstances doing this can be slower (e.g. when you spend all your time connected to an intermediate server connecting to other servers). This is because you’re now firing up multiple SSH connections each time you make a new connection instead of just one.

One pretty common workaround for this is:

ControlMaster auto
ControlPath /home/you/.ssh/master-%r@%h:%p
ControlPersist yes

Run “man ssh_config” and search for “ControlMaster” to read up in more detail about this.

The TL;DR of it is – ControlMaster creates multiplexing magic to convert multiple connections down to one (meaning much less build-up / tear-down lag).
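Once a master connection is up you can also poke at it directly; “-O” sends control commands to the running master:

yourlaptop$ ssh -O check you@webserver    # is a master connection running?
yourlaptop$ ssh -O exit you@webserver     # tear the shared connection down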

Similarity to onion routing

Using ssh like this is very similar to onion routing in that you have end-to-end encryption (and authentication!) while only trusting each mid-point to run up a socket (courtesy of netcat) for you. However, please don’t assume that because the encryption going on here is similar to Tor’s you can use it to address the privacy concerns that Tor does.

Update: clarified that the agent, and not the key, is being forwarded, and also explained in a little more depth what that means.

My Falco’s Suicidal Side Stand

Once upon a time I bought a Falco

It took me to see a waterfall

in Toora

I parked it against a tree

I spotted windmills

and it took me there too

I parked it against a dumpster

I went to see a friend

I parked it against a pole

I went home and parked it against that too

I don’t understand why Aprilia would make such a great bike and then almost ruin it by putting far too much of it on the wrong side of the stand… but they did

fortunately I made mine a hybrid and now it stands on its own

Yeh, that’s my story.

P.S. for those who actually want technical information…

I swapped out the stock stand for one off a Honda CBR 1100 Blackbird. If you attempt this be prepared to completely and utterly disable the side stand kill switch.

A Brief Look into Exploiting Binaries on Modern Linux

Recently I’ve decided to start poking and prodding some security related CTFs (Capture The Flags).

Mainly I worked on the Stripe CTF (now closed) and I’m playing with Smash the Stack’s IO.

I’ve already learnt a few things along the way that I thought would have been useful to me in a cheat sheet like form, so here it is…

GCC

When compiling little snippets of C locally on my laptop I’ve found that quite often I’ve had to turn off a fair bit of protection to get the snippets to compile in just the right way.

To get 32-bit binaries (the CTFs so far have been 32-bit):

$ gcc -m32 ...

Note: you may need to install gcc-multilib on Ubuntu.

To turn off stack protection (are these canary values?):

$ gcc -fno-stack-protector ...

To enable debugging symbols for use with GDB:

$ gcc -ggdb ...

Normally GCC will produce binaries with stacks that are not executable; to enable an executable stack:

$ execstack -s %binary_created_with_gcc%
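Pulling those together, a typical incantation for building a deliberately weak 32-bit practice binary looks something like this (vuln.c is a placeholder name):

$ gcc -m32 -fno-stack-protector -ggdb -o vuln vuln.c
$ execstack -s vuln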

Shellcode

Once you’ve figured out what shellcode is and why you need it, you’ll still need some way to store it in a string (probably a program argument), here’s the two main methods I’ve been using.

Using perl (seems to be a favourite amongst the community):

$ /some/binary "$(perl -e 'printf "\x90" x 100')"

Straight from bash:

$ /some/binary $'\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90'

objdump

Figuring out a binary’s format:

$ objdump -t /foo/bar | grep 'file format'
/foo/bar:     file format elf32-i386

Getting the address of “main” from the symbol table:

$ objdump -t /foo/bar | grep main
080483b4 g     F .text  00000067              main

Ruxcon 2011

My head’s just now starting to recover from 4 days of Ruxcon. 2 days training, 2 days conference, plus a lot of partying networking.

It’s the first properly security-related thing I’ve ever been to, but I was amazed just how quickly I was able to catch on to many of the modern techniques being used to pen test systems out in the wild right now. I don’t think it’s because I’m super smart, I think it’s because most of the flaws that are exploitable are super obvious and you really only need one piece of broken logic to get into somewhere you don’t belong. It was also quite impressive just how creatively a lot of the people there were able to look at 2 or 3 relatively minor bugs and escalate them into a full system-access hack.

Pen testing also isn’t all the low-level buffer overflow stuff I was expecting either. Sure, there are guys there that can do that and much fancier stuff to take a buffer overflow, smash your stack and completely own your box, but there’s still the good ol’ “hmm, that ACL doesn’t look quite right”.

I managed to catch both talks by Adam and James from Insomnia; what impressed me most was that I understood almost everything these guys were doing. Furthermore, a lot of what James was talking about for escalating privileges in a Windows environment was stuff that I was tinkering with (sometimes a little too successfully) back in high school :/

Another pretty interesting stream is just how prolific SQL injection is. I thought (before Thursday) that parameterised SQL queries had solved all the injection problems and the world of IT security had moved on to harder targets. Turns out I was wrong. This seems to be mainly because half of the programmers out there still haven’t actually heard about SQL injection :( It’s compounded by the fact that those that have probably don’t realise just how easy tools like sqlmap make it. With sqlmap you can take a blind SQLi (SQL injection) attack and have it fire up a SQL prompt for you that lets you write arbitrary select queries, or dump the DB to CSV… wow. Better yet, even parameterised SQL normally can’t parameterise the fields in the “ORDER BY” clause, so a lot of programmers will be utilising parameterised SQL thinking it makes them completely safe, when in fact they’ve just made attacks fractionally harder.

There was way too much other cool stuff to talk about; I’m definitely going to try to go again next year.

Subversion My Home Directory

Every now and then I see someone asking about an alternative to using Dropbox to manage their dot files in their home directory.

A fair while ago (at least 2 years) I started using Subversion to do this. At some point I wrote a script that can be put in my user’s cron file to automate updates and commits.

I figured I would finally publish it so that others can use or improve (or simply abuse) it.

I also decided to finally set up a github account, so it can be found there.

To use it I put lines like this in “crontab -e”:

# pidgin - 1 day of lag ok
49 */2 * * * . /home/foo/.ssh/agent; /home/foo/path/to/subversion_auto_sync --max-lag=$((24*60*60)) /home/foo/.purple

Which every 2 hours runs subversion_auto_sync on my Pidgin directory (~/.purple) with my ssh-agent environment loaded to allow automated (but secure) key-based authentication. This example will skip running an update for up to 24 hours after the last one, but it will always commit if local modifications have occurred. This provides gradual synchronisation of my Pidgin logs between all the desktops/laptops I use.

I’ve got the vast bulk of my important home directory files either in Subversion using something similar to what’s above, or being synchronised with Unison.