# gsandie online notebook

## Locales... (debian/ubuntu)

Remember to use locale-gen with a list of locales to generate if you are getting locale warnings.

 # locale-gen en_US en_US.UTF-8

Then dpkg-reconfigure locales.
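A quick way to see what the warnings are complaining about is to dump the current locale settings; the actual fix needs root, so those commands are shown commented out:

```shell
# Show the current locale environment; unset or unsupported values here are
# what trigger the "Setting locale failed" style warnings.
locale
# The durable fix on Debian/Ubuntu (needs root):
#   locale-gen en_US en_US.UTF-8
#   dpkg-reconfigure locales
```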

## fuser

Remember that fuser can be used to show which processes have a file open, and a whole lot more (it can also send signals to those processes).

There is of course lsof.
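A minimal self-contained demo (assumes fuser from the psmisc package is installed): hold a file open from the current shell, then ask fuser who has it.

```shell
# Hold a temp file open on fd 3, then ask fuser which process has it.
tmp=$(mktemp)
exec 3<"$tmp"          # this shell now holds the file open
fuser "$tmp"           # prints the PID(s) that have the file open
exec 3<&-              # release the descriptor
rm -f "$tmp"
```

Running `lsof "$tmp"` at the same point shows the same thing from the process side.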

## Get UUID of disks

You can use blkid; there are plenty of good options in the man page.

Also /dev/disk/by-uuid (the other /dev/disk/by-* directories are quite interesting too).
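For example (on a box with real disks; inside a container both may show nothing, hence the defensive `|| true`):

```shell
# Print each block device with its UUID and filesystem type.
blkid || true
# The by-uuid directory maps UUIDs back to device nodes, which is what
# UUID= entries in /etc/fstab refer to.
[ -d /dev/disk/by-uuid ] && ls -l /dev/disk/by-uuid/ || true
```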

## Remote reboot windows

From a remote desktop session you can run “shutdown -i”, which brings up a dialog for rebooting or shutting down remote machines.

## Vmware server windows

Notes on VMware Server on Windows 7 Ultimate x64:

## Install

• If you select all the defaults during install, then the UI will end up on http://127.0.0.1:8308/ui

• To log in to the UI use a Windows username and password, in my case my standard user account

• To get the console you’ll need to install a plugin; in my case it didn’t seem to work in Chrome or Firefox, but was fine in IE

## Cloning vms

While not supported in the UI, you can clone VMs manually.

See this post

Note: cloning a machine with snapshots doesn’t seem to work; you can only clone the base image. With VirtualBox you could get around this by merging all the snapshots into the main disk before cloning, but I’m not sure if this is possible with VMware.

## Openssl checking certs

You can check connections with:

openssl s_client -connect HOST:PORT

If you need to provide a CA certificate, pass it in with the absolute path:

openssl s_client -connect HOST:PORT -CAfile /path/to/ca

Get information about an x509 cert:

openssl x509 -in /path/to/cert -text

Remember man openssl
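A self-contained example using a throwaway self-signed certificate (since s_client needs a live HOST:PORT to talk to); the file names are just for the demo:

```shell
# Make a throwaway self-signed certificate to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.test" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# Pull out just the interesting fields rather than the full -text dump.
openssl x509 -in /tmp/demo.crt -noout -subject -issuer -dates
```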

Recently I’ve found myself playing with Fog[1] quite a lot. If you don’t know it, it’s a really nice library for working with different cloud providers. If you’re coding in Ruby and using multiple cloud computing vendors, you should check it out.

One of the nice things about fog is that it supports Amazon’s S3 multipart uploads[2]. This is a feature of S3 that Amazon recommends you use if the files you want to upload are greater than 100 MB. It just so happened I had a bunch of files that fit the bill.

Multipart uploads are neat because an upload never expires on its own: it stays open until you either complete it or abort it. This would let you schedule part uploads during times when your network traffic is quiet. You are also able to recover from a single part failing without it affecting the whole file upload.

## How do multipart uploads work?

The basic steps are:

• get a file

• split the file into chunks; each part except the last must be at least 5 MB in size

• get the Base64 encoded MD5 sum of the part

• upload each part, identifying it with a part number and the upload ID, and save the ETag returned for each part

• if you are happy, use the ETags and the upload ID to complete the upload

• if you are NOT happy, you have to abort the upload

After some hacking about I had a basic script that would take a file, split it, get the Base64-encoded MD5 of the parts and upload them (the hacky results are in https://gist.github.com/907430). This worked well; however, I really wanted to upload multiple parts at once to increase the speed, so I investigated threading in Ruby.
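The split-and-digest steps from the list above can be sketched in plain shell (the uploading itself is what fog handles; the file names here are just for the demo):

```shell
# Make an 11 MB demo file, split it into 5 MB parts, and compute the
# Base64-encoded MD5 that S3 wants as the Content-MD5 header for each part.
dd if=/dev/urandom of=/tmp/bigfile bs=1M count=11 2>/dev/null
split -b 5M /tmp/bigfile /tmp/part_    # every part except the last is 5 MB
for p in /tmp/part_*; do
  echo "$p $(openssl dgst -md5 -binary "$p" | base64)"
done
```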

## Results

The results are presented below as a proof of concept script. The main thing that had me scratching my head was completing the upload. Originally I had been pushing the ETag from each part onto an array; however, as the threads can run in different orders and finish at different times, there was no guarantee for the order of the tags in the array. Once I realised this I explicitly set each array element (indexed by part number) to its corresponding tag, and the uploads completed.

The above is far from perfect but it is working for me and I hope it gets the general idea across. I now plan on taking this base and turning it into a system that can perform a single upload on small files, and a multipart upload on large files.

[1] - http://fog.io

— Gavin

## Notes on reconnoiter graphs math

I plan on doing some further work on this once I’ve got internet at home again, but for now:

The source and display math uses RPN. There is a list of the functions on this mailing list post:

http://lists.omniti.com/pipermail/reconnoiter-users/2010-July/000467.html

Or you can check:

https://labs.omniti.com/labs/reconnoiter/browser/trunk/ui/web/lib/Reconnoiter_RPN.php

So far I’m working on the ones I can see in Theo’s OSCON presentation video:

| Function | Description |
| --- | --- |
| auto | Auto-scales the numeric values, e.g. 1000 becomes 1k |
| round | Rounds to the decimal place; takes a numeric argument, e.g. 2,round |

As I continue to play with the graphs in reconnoiter I will add more examples.

## Execute user data in AWS

Using the Ubuntu EC2 images from Canonical you can have user data execute as a script. Simply start the data with a #!. If you want to log what happens, you can log to a file at the start of your script.

E.g.

#!/bin/bash
# copy all stdout/stderr to /var/log/user-data.log, to syslog (tag "user-data") and to the console
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
apt-get update
apt-get -y install build-essential


Or the output can go straight to syslog with the appropriate logger command.

References:

http://alestic.com/2009/06/ec2-user-data-scripts

http://cloud.ubuntu.com/2010/12/logging-user-data-script-output-on-ec2-instances/

## debian package seeding notes

Remember about debconf-get-selections (part of debconf-utils). You can use it in conjunction with debconf-set-selections to set up configuration seeds for packages.

Very useful when installing the Sun Java JRE.

$ cat sun_java.seed
sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true
sun-java6-jre shared/accepted-sun-dlj-v1-1 boolean true
sun-java6-jdk shared/accepted-sun-dlj-v1-1 boolean true
...
$ sudo debconf-set-selections sun_java.seed
$ sudo apt-get install sun-java6-jre

This sort of stuff can be used with Chef in the package resource, using the response_file attribute.