Sitecreator

I've just spent a few days using up spare holiday, which means I've been making things for work that work doesn't want but I do. This time it's sitecreator, a tool for configuring websites and all their dependencies (Unix users, databases, ssh keys, DNS records etc.) on servers.

Since there are so many possible things for a site to rely upon, and I'm not *that* fond of reinventing the wheel, all it really does is generate passwords and call scripts. A configuration file tells it how many passwords to generate, how to work out what the username should be, and whether to generate a couple of other things (like database names) if needed. Another part of the config then explains which scripts to call and with which arguments (including the newly-generated passwords and usernames), and at the end it tells you what it thinks it did. I've already written a few scripts for it (mirroring what I want to do with it).
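The password generation itself is nothing exotic. For flavour, here's a hypothetical one-liner in the same spirit (not sitecreator's actual code):

```shell
# Hypothetical sketch of password generation, not sitecreator's real code:
# keep printable non-space bytes from /dev/urandom until we have fourteen.
tr -dc '[:graph:]' < /dev/urandom | head -c 14
echo
```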

For example, here's the output from a run against a relatively simple config file:

avi@amazing:~$ sitecreator example.com
MySQL:
        username: example
        password: gN?@c6$Y7}Y{yg
        database: example

SSH:
        username: example
        password: r;x6kEgO!

MySQL dev:
        username: example_dev
        password: vA!)9WIMo&by}'
        database: example

avi@amazing:~$

And there's at least one more example config file in etc/config/. Anyway, hopefully this'll be useful to somebody else who isn't quite into automation enough to have done this already (or to have started using Puppet or similar), but does have enough users or systems to configure that some automation would be good.

Oh, it's not very well tested yet, and I've still not come up with a sane thing to do with the output from the scripts :)

Converting from Apache1-style to (Debian-style) Apache2-style vhosts

Yeah, some of us are still doing that migration.

Anyway, historically Apache vhosts all live in one file at /etc/apache/httpd.conf or, if you're really lucky, somewhere like /etc/apache/vhosts.conf.

Apache2 in Debian uses two directories - /etc/apache2/sites-available and /etc/apache2/sites-enabled. sites-available contains one file for each vhost, and to enable a vhost you symlink its file into sites-enabled. This is all fairly nice and elegant and human-friendly, but tedious to migrate to from Apache1.

Since this one coincided with a feeling that I should know more awk, here's how I just did it:

cp /etc/apache/vhosts.conf /etc/apache2/sites-available
cd /etc/apache2/sites-available
awk '/<VirtualHost/{n++} {print > ("vhost" n)}' vhosts.conf
for i in vhost*; do name=$(grep -i '^ServerName' "$i" | awk '{print $2}'); mv "$i" "$name"; done
rm /etc/apache2/sites-available/vhosts.conf

Yeah, extracting the name should be doable in the initial awk, but by that point I sort-of just needed to get it done.
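In case the awk is opaque: every time it sees a `<VirtualHost` line it bumps a counter, so each vhost's lines land in their own file. A throwaway demonstration in a temp directory:

```shell
# Build a toy two-vhost config and split it the same way.
cd "$(mktemp -d)"
cat > vhosts.conf <<'EOF'
<VirtualHost *:80>
ServerName one.example.com
</VirtualHost>
<VirtualHost *:80>
ServerName two.example.com
</VirtualHost>
EOF
# Each <VirtualHost line starts a new output file: vhost1, vhost2, ...
awk '/<VirtualHost/{n++} {print > ("vhost" n)}' vhosts.conf
# Rename each fragment after its ServerName
for i in vhost*; do mv "$i" "$(grep -i '^ServerName' "$i" | awk '{print $2}')"; done
ls
```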

Unattended Virtualmin installs

A while ago I was asked to concoct a fire-and-forget script to install Virtualmin without prompting.

It's really easy:

#!/bin/bash
if [ -z "$1" ]; then
        echo "Usage"; echo "  $0 [hostname]"; echo ""; exit 1
fi
wget http://software.virtualmin.com/gpl/scripts/install.sh -O install.sh
export VIRTUALMIN_NONINTERACTIVE="1"
chmod +x install.sh
./install.sh -f -host "$1"
rm install.sh

And then you call it like so:

./virtualmin.sh virtualmin.vm.avi.co

Fail2Ban and date formats

Fail2Ban is utterly daft in at least one respect. Here's me testing a regex against a date format it doesn't recognise:

# fail2ban-regex '2010-12-14 15:12:31 - 80.87.131.48' ' - <HOST>$'
Found a match but no valid date/time found for 2010-12-14 15:12:31 - 80.87.131.48. Please contact the author in order to get support for this format
Sorry, no match

And on one that it does:

fail2ban-regex '2010/12/14 15:12:31 - 80.87.131.48' ' - <HOST>$'

Success, the following data were found:
Date: Tue Dec 14 15:12:31 2010
IP  : 80.87.131.48

Date template hits:
0 hit: Month Day Hour:Minute:Second
0 hit: Weekday Month Day Hour:Minute:Second Year
1 hit: Year/Month/Day Hour:Minute:Second
0 hit: Day/Month/Year:Hour:Minute:Second
0 hit: TAI64N
0 hit: Epoch

Benchmark. Executing 1000...
Performance
Avg: 0.10257935523986816 ms
Max: 0.125885009765625 ms (Run 8)
Min: 0.10085105895996094 ms (Run 780)

Ignoring for the moment the fact that it doesn't recognise 2010-12-14 15:12:31 (seriously?)1, the only way to get that list of date formats is to happen to pick a correct one. As soon as you no longer need a list of the date formats you may use, it presents you with one.

What?

So, as an attempted fix for this situation, see above for a list of compatible date formats.
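If you control the log format of whatever you're watching, the pragmatic dodge is to emit timestamps in one of the formats in that list. The one that matched for me, in strftime terms, is:

```shell
# Print the Year/Month/Day Hour:Minute:Second format that
# fail2ban-regex recognised above.
date '+%Y/%m/%d %H:%M:%S'
```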

  1. It's worth noting, too, that the author is of the opinion that specifying your own date format is too much like hard work, so if you want support for any date format other than those already supported, you have to patch it yourself. Which is obviously way easier than just having a date regex in the config file []

Massive dumps with MySQL

hurr. *insert FLUSH TABLES joke here*

I have a 2.5GB SQL dump to import to my MySQL server. MySQL doesn't like me giving it work to do, and the box it's running on only has 3GB of memory. So, I stumbled across bigdump, which is brilliant. It's a PHP script that splits massive SQL dumps into smaller statements, and runs them one at a time against the server. Always the way: 10 lines into duct-taping together something to do the job for you, you find that someone else has done it rather elegantly.1
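The core trick is nothing magical: cut the dump at statement boundaries and feed the server digestible pieces. A toy illustration of the splitting idea (not bigdump's actual code, and naively assuming no semicolons inside string literals):

```shell
# Toy version of the splitting idea: one chunk file per statement.
# Real bigdump batches a few hundred statements per web request.
cd "$(mktemp -d)"
printf '%s\n' \
    'INSERT INTO t VALUES (1);' \
    'INSERT INTO t VALUES (2);' \
    'INSERT INTO t VALUES (3);' > dump.sql
# Start a new chunk file after every line ending in a semicolon.
awk 'BEGIN{c=1} {print > ("chunk" c ".sql")} /;$/{c++}' dump.sql
ls chunk*.sql
```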

In short, we extract the archive to a publicly HTTP-accessible location, stick the SQL dump there, and tell it to go.

In long, installation is approximately as follows:

avi@jup-linux2:~$ cd www
avi@jup-linux2:~/www$ mkdir bigdump
avi@jup-linux2:~/www$ chmod 777 bigdump
avi@jup-linux2:~/www$ cd bigdump/
avi@jup-linux2:~/www/bigdump$ wget -q http://www.ozerov.de/bigdump.zip
avi@jup-linux2:~/www/bigdump$ unzip bigdump.zip
avi@jup-linux2:~/www/bigdump$ ls
bigdump.php bigdump.zip

Where ~/www is my Apache UserDir (i.e. when I visit http://localhost/~avi, I see the contents of ~/www). We need permission to execute PHP scripts in this dir, too (which I already have). We also need to give everyone permission to do everything - don't do this on the internet!2

Configuration involves editing bigdump.php with the hostname of our MySQL server, the name of the DB we want to manipulate, and our credentials. The following is lines 40-45 of mine:

// Database configuration

$db_server = 'localhost';
$db_name = 'KBDB';
$db_username = 'kbox';
$db_password = 'imnottellingyou';

Finally, we need to give it a dump to process. For dumps smaller than 2MB3 we can upload through the web browser; otherwise we need to upload or link our SQL dump into the same directory as bigdump.php:

avi@jup-linux2:~/www/bigdump$ ln -s /home/avi/kbox/kbox_dbdata ./dump.sql

Now, we visit the php page through a web browser, and get a pretty interface:

BigDump lists all the files in its working directory, and for any that are SQL dumps provides a 'Start Import' link. To import one of them, click the link and wait.

  1. Yes, you Perl people, it's in PHP. But it's not written by me. So on balance turns out more elegant. []
  2. Those permissions aside - anyone can execute whatever SQL they like with your credentials through this page. Seriously, not on the internet! []
  3. Or whatever's smaller out of upload_max_filesize and post_max_size in your php.ini []

Generating Fluxbox menus for VNC (Vinagre) connections

One of the lovely things about Fluxbox is the text-driven menu. One of the nice things about Vinagre (Gnome's VNC client) is the XML-based bookmarks file. Here's a handy script to create a Fluxbox submenu out of your Vinagre bookmarks:

#! /usr/bin/perl

use strict;
use warnings;
use XML::Simple;

my $HOME = $ENV{HOME};

my $bookmarks_file = "$HOME/.local/share/vinagre/vinagre-bookmarks.xml";
my $menu_file = "$HOME/.fluxbox/vnc_menu";

my $xml = new XML::Simple (KeyAttr=>[]);
my $data = $xml->XMLin($bookmarks_file);

open(MENU, ">", $menu_file) || die "Error opening $menu_file: $!";

print MENU "[begin]\n";
foreach my $b (@{$data->{"item"}}){
        print MENU "[exec] ($b->{name}) {vinagre $b->{host}:$b->{port}}\n";
}
print MENU "[end]\n";
close MENU;

PHP error on fresh install of PHPWiki:
Non-static method _PearDbPassUser::_PearDbPassUser() cannot be called statically

I've just installed PHPWiki 1.3.14-4 from the Debian repositories, and out of the box I got the following message on trying to log in to it:

Fatal error: Non-static method _PearDbPassUser::_PearDbPassUser() cannot be called statically, assuming $this from incompatible context in /usr/share/phpwiki.bak.d/lib/WikiUserNew.php on line 1118

The problem appears to be the old PHP 4 constructor idiom - a method with the same name as its class, here being called statically - which PHP 5.something no longer tolerates. Apparently it's been a failure in E_STRICT mode for a while.

Anyway, the solution is to rename _PearDbPassUser() to something else, and then replace all calls to it with this new name.

I've done this and, so far, everything appears to work.

The function is defined in /usr/share/phpwiki/lib/WikiUser/PearDb.php:

jup-linux2:/usr/share$ diff phpwiki.bak.d/lib/WikiUser/PearDb.php phpwiki/lib/WikiUser/PearDb.php
18c18
< function _PearDbPassUser($UserName='',$prefs=false) {
---
> function _PearDbPassUserFoo($UserName='',$prefs=false) {
and is called in /usr/share/phpwiki/lib/WikiUserNew.php:

jup-linux2:/usr/share$ diff phpwiki.bak.d/lib/WikiUserNew.php phpwiki/lib/WikiUserNew.php
1118c1118
< _PearDbPassUser::_PearDbPassUser($this->_userid, $this->_prefs);
---
> _PearDbPassUser::_PearDbPassUserFoo($this->_userid, $this->_prefs);
1157c1157
< _PearDbPassUser::_PearDbPassUser($this->_userid, $prefs);
---
> _PearDbPassUser::_PearDbPassUserFoo($this->_userid, $prefs);
2120c2120
< function PearDbUserPreferences ($saved_prefs = false) {
---
> function PearDbUserPreferencesFoo ($saved_prefs = false) {

Munin plugins are really easy to write

Munin plugins basically need to output variable names and values, and a little bit of config. They're tremendously easy to write.

My plugin is mostly useless - it graphs a value read from /dev/urandom, and the random constants from Debian and Dilbert. The current graph is here and the code is as follows:

#! /bin/bash

case $1 in
config)
        cat << EOF
graph_category amusement
graph_title Random numbers
graph_vlabel value
debian.label debian
dilbert.label dilbert
urandom.label /dev/urandom
graph_scale no
EOF
        exit 0
        ;;
esac

urandom=$(tr -dc '0-9' < /dev/urandom | head -c2)

echo "urandom.value $urandom"
echo "debian.value 4"
echo "dilbert.value 9"

Munin's plugins live in /etc/munin/plugins/, most of which are symlinks to scripts in /usr/share/munin/plugins/. On restart, munin-node rechecks the plugins directory and loads any new plugins.
For a plugin called foo, munin-node will run foo config first to get the configuration of the graph (which is passed to munin-graph), and then foo on its own to get the values. For information on graph configuration, see here.
It takes about 15 minutes of collection before it starts drawing a graph, and you'll get more data every 5 minutes thereafter.

The script itself is mostly self-explanatory, except for:

- The values and the labels are linked by what occurs before the dot. If you define foo.label in the config output, that is what will be used to label the number that comes after foo.value in the 'normal' output. The munin tutorial sort-of hints at this, but only uses one variable.

- Munin doesn't care what order the variables come out in; it uses the labels to determine which is which. Similarly, it doesn't seem particularly fussed as to which flavour of horizontal whitespace is used.
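To see the protocol in isolation, here's a throwaway plugin (hypothetical name and values) run both ways by hand:

```shell
# Throwaway plugin showing the two invocations munin-node makes.
cat > /tmp/demo_plugin <<'EOF'
#!/bin/bash
if [ "$1" = "config" ]; then
        echo "graph_title Demo graph"
        echo "demo.label demo"
        exit 0
fi
echo "demo.value 42"
EOF
chmod +x /tmp/demo_plugin
/tmp/demo_plugin config   # the graph definition munin-graph uses
/tmp/demo_plugin          # the value that gets plotted
```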

Some notes from configuring rTorrent

In anticipation of the new *buntus tomorrow, I'm configuring one of my servers as a torrent node for them, and for reasons unknown I've settled on rtorrent, which is in the repos. The documentation is a little lacking (though it does tell you how to do things like download torrents. Read it), and the most popular HowTo is quite verbose.

~/.rtorrent.rc is the config file for rtorrent. The Debian package puts an example in /usr/share/doc/rtorrent/examples/rtorrent.rc, and apparently so does *buntu's. You probably want to edit this. Path variables can be absolute or relative, and are generally relative to ./ by default. I've two directories, ~/torrents/ for the torrents and ~/.rtorrent/ for rtorrent's working directories.
Some variables I found to be handy:

scgi_local="~/.rtorrent/socket/rpc.socket"

Defines the socket file for scgi communication, which you only really need if you want external stats from it. I did, but haven't yet got round to using them.

session = ~/.rtorrent/sessio

The session directory, which lets multiple instances of rtorrent remember where they left off. Cannot be shared between instances.

Scheduling
rtorrent supports command scheduling, the syntax for which is only occasionally documented. It's approximately:

schedule = a,b,c,d

Where:

a: what you want to call this scheduled command1.
b: how many seconds after rtorrent starts the command should first be executed.
c: the interval, in seconds, between subsequent executions.
d: the command to execute.

This comes in most handy for automatically starting any torrents dropped into a directory:

schedule = watch_directory,5,5,load_start=~/torrents/*.torrent

Exactly what that does is left as an exercise for the reader (the example .rtorrent.rc explains it, too).

  1. I don't know where this comes in handy later on []