Spoilt Vote != Wasted Vote

A spoilt vote does not automatically translate to a wasted vote. Nor is a spoilt vote equivalent to not voting at all.

Another important note to make early on is that I’m not writing this to encourage anyone to spoil the vote – but I would hope you would come out of reading this feeling encouraged to vote, in whatever form.

This morning sees the opening of UK Elections for the European Parliament. As has been increasingly predictable, much fuss has been made of the “threat” parties such as UKIP pose to the established mainstream parties (namely Conservatives, Labour and the Liberal Democrats).
The resulting social and conventional media storm has been somewhat hectic and infuriating: understandable concern that people may feel dejected with mainstream politics and pushed into voting for one of the more extreme options; frustration at the apparently disproportionate airtime Farage and UKIP have been given; and a simultaneous over-reaching to shout “racist” and dismiss anyone who would vote UKIP as dated, past it, out-of-touch, wrong – while still being scared of them winning a large portion of the votes.

I didn’t start writing this to debate the merits (or lack thereof) of UKIP’s campaign, nor the fact they seem to lack any depth or breadth of policies (and they’re not alone in that, by the way), but I needed to mention it because one thing that crops up regularly is a comment along the lines of “the worst thing you can do is not vote, or spoil your ballot” or “there’s always something to vote against”. Presumably, you can already tell I disagree with this sentiment, at least the latter part of it.
I would be the first to agree that not voting is a terrible option, and doesn’t actually bring anything to the table, but why then do I believe a spoilt vote is any more valid an option?

The reasoning is actually pretty simple, aided by the way UK voting works:

  • If you don’t vote, the elected individuals can, with some justification, attempt to disregard your opinion in society as invalid – it’s nearly impossible to differentiate between apathy and indifference, or disillusion with the options available / protest no-votes. Simply put, if you don’t vote you’re not someone they can “win” a vote from, so they have no reason to bother trying.
  • In the UK, spoilt votes have to be counted and, often, announced. Just think about that for a second. That means that, along with every vote for a certain party, every spoilt ballot is also recorded. A spoilt ballot is as close to a “true” protest vote as we get.
  • Where spoilt ballots are in the tens or low hundreds, it can be easily put down to voter error. Where the numbers are higher it demonstrates genuine dissatisfaction with what the parties available offer.
  • In the previous Euro elections, UK turnout was just under 35% (15.1 million) of the eligible population. In the last general election it was just over 65% (29.7 million). It would be lazy of me to assume everyone who doesn’t vote cares enough to actually spoil their vote, but it’s fair to assume that a reasonable percentage of those who choose not to vote do so through disillusion, frustration, and a feeling that their vote is irrelevant as it won’t change anything. To those people, I would urge you to consider spoiling your vote (as opposed to not voting at all).
  • Let’s take the last Euro Election:
    • Roughly speaking, 28.04 million eligible voters did not vote at all in the last UK Euro elections. 15.99 million didn’t in the General Election a year later. If we assume that everyone who didn’t vote in the last General Election won’t vote in any election, that leaves us with 12.05 million no-votes to work with.
    • Presumably, of those 12.05 million, at least some chose not to vote because they “didn’t see the point” or “didn’t think it would make a difference”. I’m no statistician, so I’m pulling numbers from the air here when I suggest that, maybe, 5% of those non-voters wanted to vote but didn’t think there was anyone to vote for. (Note: this is different to those who chose not to vote as a misguided EU protest / frustration with their “usual” party / etc.) That’s over 600,000 potential spoilt ballots – more votes than the SNP, who won 2 seats.
    • Even at that low (in my opinion) estimate of potential spoilt ballots, that is a huge figure, and registers that disillusion / frustration / anger officially.
  • It may not change the election outcome, but it does send a clear message that there is a considerable body of people out there in the electorate who are disillusioned with the political options they’re presented with – and who care enough to actually go out on election day and register that frustration.

Ultimately, I don’t care how someone chooses to vote come the day, come the hour. But I would urge you all to vote if you’re eligible. Without it, there is virtually no reason for your government to care about your opinion.

And, please, don’t be so quick to disregard those who choose to spoil their vote.

Home NAS & Server (5): Tweaks, Quick Tests, and Data Transfers

This is Part 5 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


Once things are set up and running, the first step is to make all those tweaks and run the short tests needed to ensure things are behaving as we want, and then to migrate our existing data over to the new system.

Whilst prepping this particular post, I went back to try and find my references – particularly for the tweaks. As Murphy may well have predicted, I couldn’t find them. I’ll do my best on each point to either find them after the fact, or at least state where I saw them and explain them as best I remember, but I apologise for any I’ve missed or miscredited.

Limiting ARC

There’s a bunch of decent references for this on Solaris Internals ([1] and [2]) and some good threads covering how to do it on the zfs-discuss list.

There are a few reasons you might want to do this, but it’s worth noting that the defaults in ZFS on Linux were changed from those Solaris used – the ARC now defaults to a maximum of 50% of system memory – https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups=#!searchin/zfs-discuss/arc/zfs-discuss/4iNT8aXouzg/UL6xf69HkjMJ

Additionally, this thread explains how you might make such changes, and introduces a couple of other options you may consider, particularly if you’re planning on running zvols (for block devices) and so on.

At this point I’m not making any of these ARC changes on either build until I’ve monitored how things pan out performance-wise, but they’re good to know about.
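For reference, here’s a hedged sketch of how such a cap might be applied with ZFS on Linux. The module parameter is zfs_arc_max, and the 4 GiB value below is purely illustrative – pick something sensible for your own RAM:

```shell
# Cap the ARC at 4 GiB (value in bytes; adjust to taste).
# This takes effect the next time the zfs module is loaded:
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf

# Or apply it immediately on a running system via the module parameter:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```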

Disabling / Limiting Swap

Reference: https://groups.google.com/a/zfsonlinux.org/d/msg/zfs-discuss/PzBAABrZCb4/5Z-tscgBBaQJ amongst other places.

Anyway, the basic premise here is that “normal” swap is your enemy – and your weak link in the chain – in terms of data protection and checksumming in a ZFS on Linux setup.
Why? Well, the basis of it is down to the difficulties in easily running ZFS root under Linux. I haven’t tried to do this yet, but the difficulty comes in being much more careful about how you upgrade things where the root is concerned – especially if you want to keep the experience pain-free. Because of this, a lot of people will install root on “normal” drives, including swap. Once that’s done, if you ever use that swap space (to help avoid out-of-memory conditions) you’re sacrificing all the hard work and additional money you put into ECC RAM – your swap has become a single point of failure.

You can, of course, create a small ZFS volume to use as swap. Or have a large amount of RAM that (as in my position) you may not touch for a long time. But an easier method may be to just turn “swappiness” down to 0. I’ll let you reference the various pages about this yourselves but, effectively, this will all but disable swap unless the system really, really needs to use it (to avoid out-of-memory). The easiest way I found to do this (and make it permanent) was to add a file (swappiness.conf) in /etc/sysctl.d/:

~$ cat /etc/sysctl.d/swappiness.conf
# Only swap to avoid out-of-memory
vm.swappiness = 0

This will get enacted on the next reboot, or you can run sysctl -p /etc/sysctl.d/swappiness.conf to force a read of the options in the config file instantly. Similarly you can echo the “0” straight into /proc. The important thing is to make it persistent across reboots.

Copying Data from the QNAP

Going into this, I barely thought about the data migration. It seemed simple – rsync, right?

It made sense. The QNAP can act as a rsync server with ease (Administration Panel > Applications > Backup Server) and, once done, there was no problem installing rsync on the new NAS and connecting to it with:

# rsync -avz admin@nas:/share/Public/ISO/ /srv/multimedia/ISOs

A quick prompt for my QNAP admin password and it starts syncing, compressing the stream and maintaining attributes.

There was one major issue though – it took bloody ages. Seriously, a long time. Almost a day. And that was just for the music directory, which is only around 110GB in size (Films alone occupy more than 1TB). The average stats rsync reported at the end of the transfer happily told me it had averaged a mighty 3MB/second. Put simply, that was not the performance I was looking for.

I freaked out a bit. Raidz2 couldn’t really hurt performance that badly, could it? None of the research I’d read suggested it would. I ran a dd to one of the newly created filesystems, just to check the disks were behaving properly. Performance was good. I didn’t write down the figure, but it was around 800MB/s – so the disk layout wasn’t the issue.
It shouldn’t be compression (lz4 is enabled by default across the pool) – all the supporting documentation suggests the overheads are minimal and, with modern hardware, there’s little reason not to enable it pool-wide, even if the gains are minimal for some pools.
Is it the network? I’ve got a fairly respectable Zyxel GS-2200 Gigabit switch that, if anything, is massively under-utilized. Besides, nothing else but my desktop was using the network at the time. It seemed unlikely that was the issue either.
I tested without the “-z” (compression) – no joy. I tested a transfer of some extra ISOs from my Windows desktop over SMB – I got a respectable 100MB/s out of it, even with the SMB overheads. Strange.

I researched a little bit and came across a couple of posts that pointed to SSH and the encryption overheads as being likely to slow things down. I doubted they were doing much to tax the new system but it seemed reasonable that they would potentially kill the TS-219P, which – as we covered earlier – is not exactly the meatiest of machines.

As my TV Shows transfer was still running (over the default method, SSH) I figured I’d try the points raised in those posts and mount the NAS locally over NFS and then rsync “locally” from the mountpoint to the ZFS volumes. Comparatively speaking, it flew out the door. Here’s a couple of quick rsync summaries:

rsync TV-Shows folder (SSH):
sent 55932 bytes received 674416624286 bytes 5175538.67 bytes/sec
total size is 674334091593 speedup is 1.00

rsync Live-Shows folder (NFS):
sent 18278740591 bytes received 660 bytes 54644966.37 bytes/sec
total size is 18276506379 speedup is 1.00

It may not be the most scientific of tests, but it’s around 10 times faster. Off the back of that I’m now confident enough to try and move the Films all in one go, with “-z” added back into the mix. Hopefully, it won’t still be running this time next year.
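For the curious, the NFS approach boils down to something like the following. The export path, hostname and mountpoint here are illustrative – adjust for your own QNAP share names, and note that NFS needs to be enabled on the QNAP first:

```shell
# Mount the QNAP share over NFS:
mkdir -p /mnt/qnap
mount -t nfs nas:/Public /mnt/qnap

# Then rsync "locally" from the mountpoint into the ZFS filesystem,
# sidestepping the SSH encryption overhead on the QNAP's weak CPU:
rsync -av /mnt/qnap/Live-Shows/ /srv/multimedia/Video/Live-Shows

umount /mnt/qnap
```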

In transferring everything with rsync, I noticed another little issue I hadn’t been aware of before – a massive number of “.@__thumb” directories. A quick search verified their source – the TwonkyMedia server running on the QNAP.

I don’t intend to use Twonky for DLNA on the new build, nor do I have any desire to keep a bunch of useless hidden folders lying around my filesystem. I knew I could get rid of them using find, but wasn’t entirely confident on my syntax. This page helped, and in the end I was able to do it comfortably with:

find /srv/multimedia/ -name ".@__thumb" -exec rm -r {} \;

It had soon whipped through even the more verbose directories (like the Music folder) and removed all the junk. As far as I can tell, no genuine files were harmed in the running of that command – nor can I find any reason why they should have been.
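If you’re similarly unsure of your find syntax, a dry run first is cheap insurance. A sketch, using the same path as above:

```shell
# Dry run: just print what would be removed
find /srv/multimedia/ -name ".@__thumb" -print

# -depth processes directory contents before the directory itself, and "+"
# batches arguments to rm; together they avoid the "No such file or
# directory" noise you can get when find tries to descend into a
# directory that -exec rm \; has just deleted:
find /srv/multimedia/ -depth -name ".@__thumb" -exec rm -r {} +
```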

With the exception of the films currently transferring, the data migration has been painless and straightforward. The slow speeds were obviously a concern, but at least the cause turned out to be application- / protocol-based rather than something more fundamental. Right now, the basic data is in place and sitting pretty:

:~$ sudo zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool1                     946G  4.26T   232K  /pools/pool1
pool1/backup             28.8M  4.26T   198K  /pools/pool1/backup
pool1/backup/backuppc    28.6M  4.26T  28.6M  /var/lib/backuppc
pool1/home                239M  4.26T   198K  /pools/pool1/home
pool1/home/dave           239M  4.26T   239M  /srv/home/dave
pool1/multimedia          946G  4.26T   198K  /pools/pool1/multimedia
pool1/multimedia/Audio    107G  4.26T   107G  /srv/multimedia/Audio
pool1/multimedia/Books    267K  4.26T   267K  /srv/multimedia/Books
pool1/multimedia/ISOs    2.54G  4.26T  2.54G  /srv/multimedia/ISOs
pool1/multimedia/Photos  2.59G  4.26T  2.59G  /srv/multimedia/Photos
pool1/multimedia/Video    834G  4.26T   834G  /srv/multimedia/Video

Home NAS & Server (4): Setting up ZFS

This is Part 4 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


EDIT – April 2013: Since originally writing this segment, ZFS on Linux has reached its first non-RC version – 0.6.1. With that release, Debian Wheezy repositories are available that handle the install process. If upgrading from these instructions, the previous packages must be purged fully before installing from the repo (dpkg --purge zfs zfs-devel zfs-dracut zfs-modules zfs-modules-devel zfs-test spl spl-modules spl-modules-devel). Up-to-date Debian-specific instructions are at http://zfsonlinux.org/debian.html

Fortunately, the good folks involved with ZFS on Linux have got both useful instructions on compiling your own packages along with a lovely Ubuntu PPA repository for both stable and daily releases. Conveniently the Lucid PPA works perfectly with Debian Squeeze, although I’ve settled on Wheezy for the HP ProLiant Microserver, so for that I’m building from source.

Compiling and installing ZFS

In testing (within a virtual environment) I followed the deb-building advice for my Squeeze machine, and found the only thing I needed to do was run update-rc.d zfs defaults after install to ensure that the pool is automounted on boot. The steps were just as painless for Wheezy, and needed no trickery to automount the zpool on boot.
Using the PPA on Ubuntu I had no such concerns.
The PPA pages detail all that is really needed to add the repository to Ubuntu, and installation is as simple as aptitude install ubuntu-zfs, but there were a couple of different steps needed for the Debian Squeeze system. As Wheezy shouldn’t be too far off becoming the new stable, I won’t spend any time on those steps – Google is your friend.

Once it’s finished installing and the kernel modules are compiled you should be able to run zfs and zpool commands.

Preparing zpools

I won’t be using hot swap / external loading drives on either of these builds, at least not to begin with, so having a relatively painless way to identify which disk is which is relatively important to me. To that end, I decided to use the vdev_id.conf file to allow me to assign human-readable names to all the installed disks. As detailed on the project page, this technique is really more useful in much larger environments to ensure you can quickly and easily identify separate controllers for greater redundancy in your pools. In my case it’s more so I can quickly and easily identify which disk is broken or misbehaving when that time comes. A quick cross-reference of the disk serial numbers before I inserted them into the bays with the device assignments once booted helped me confirm the correct PCI addresses. The naming I decided on was:

~$ cat /etc/zfs/vdev_id.conf
#
# Custom by-path mapping for large JBOD configurations
# Bay numbering is from left-to-right (internal bays)
#
#<ID> <by-path name>
alias Bay-0 pci-0000:00:11.0-scsi-0:0:0:0
alias Bay-1 pci-0000:00:11.0-scsi-1:0:0:0
alias Bay-2 pci-0000:00:11.0-scsi-2:0:0:0
alias Bay-3 pci-0000:00:11.0-scsi-3:0:0:0

After doing the above, running udevadm trigger will cause the file to be read and the symlinks to the correct devices will appear in /dev/disk/by-vdev/. The benefits should become clearer once the pool has been created.
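As a sketch of what to expect – the device names on the right will of course vary with your hardware:

```shell
# Re-run the udev rules so vdev_id.conf is picked up:
udevadm trigger

# The aliases now appear as symlinks to the underlying devices:
ls -l /dev/disk/by-vdev/
# Bay-0 -> ../../sda
# Bay-1 -> ../../sdb
# Bay-2 -> ../../sdc
# Bay-3 -> ../../sdd
```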

For reference, Bay-0 is my root drive – I’ve not opted for ZFS on root, nor any real resilience due to space constraints (I may work around this in the future). For the rackmount build – as mentioned – I’ll be looking at small 2.5″ drives mirrored using mdadm for the root files.

At this point, all that’s left to do is create the pool. As mentioned, Bay-0 is my root disk, so the disks I’ll be using are Bay-1, Bay-2, and Bay-3 – configured as raidz. To confirm things, I ran zpool create with the -n flag to verify what would be done. Once happy, the -n can be removed and the command run:

zpool create -f -o ashift=12 -O atime=off -m /pools/tank tank raidz Bay-1 Bay-2 Bay-3

A couple of notes on this:

  • I used -f because I received the following error without it:
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/disk/zpool/Bay-1 does not contain an EFI label but it may contain partition
    information in the MBR.

    I’ve seen this error in my testing as well, and found a few threads on it, but nothing convincing. I confirmed with fdisk that the disks do not, in fact, contain any partition tables, and opted for -f to force past it.
  • ashift=12 is per recommendation for drives with 4k blocksizes (see http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives)
  • atime=off as a default setting for the pool filesystems as I can’t see any real reason to enable it across them all.
  • -m allows me to specify a mountpoint – whilst I’ll only have one pool, I figured specifying a specific pools directory would be a good habit to form if I ever want to import other pools in the future. The directory must exist before this command is run.

Once created, things can be confirmed with zpool status:

~$ zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            Bay-1   ONLINE       0     0     0
            Bay-2   ONLINE       0     0     0
            Bay-3   ONLINE       0     0     0

At this point I’m ready to do pretty much what I want and can verify my pool is online and mounted with df.
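From here, carving the pool up into individual filesystems follows the same pattern. A hedged example – the dataset names and mountpoints below are purely illustrative:

```shell
# Child filesystems inherit pool-level properties (e.g. atime=off)
# unless overridden; -o mountpoint places them wherever suits:
zfs create -o mountpoint=/srv/multimedia/ISOs tank/ISOs
zfs create -o mountpoint=/srv/home/dave tank/home

# Confirm it's mounted:
df -h /srv/multimedia/ISOs
```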

Home NAS & Server (3): Hardware Choices

This is Part 3 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


Once the software choices had been settled on, it was easier to get a clear picture of what I want from the hardware, along with what spec I need.

Case

Having initially wanted to make the new case fit into a 3U max space (which is what I happen to have free in my cabinet at the moment) I was getting a bit nervous when trying to find options. What I began to find was that the vast majority of the available options will easily fit in 2U or 3U, but most end up requiring 26″ depth, which I don’t have. The smaller cases were largely ruled out as they required boards that couldn’t take the amount of RAM I’d want, or didn’t have the requisite disk / expansion slots to cater for the storage side of things.

For a long time I was looking at the Antec rackmount cases (2U/3U EPS systems) but found them hard to source, and I was concerned about getting tied in to specific (expensive to replace) PSUs. As time’s gone on it seems they’re hard to find because they’re being wound back, if my recent visit to the Antec website is anything to go on. I later shifted to looking at the Chenbro RM-22300 because it’s seriously cheap and meets the form-factor requirements. As an additional bonus it also fits standard PSUs, although it looks a bit of a cludge in how it’s cabled. Its big drawback is that whilst it would fit the bill today, it lacks room for any real expansion.

Looking back at things more recently, I’ve figured I can safely sacrifice my 2U shelf in order to make room for a 4U case. Whilst I don’t need anything that size right now, it pretty much allows for a full-sized desktop case which means normal PSU, lots of drive bays, and plenty of easy-to-facilitate expansion. Provided I get some fixed rails to support the weight, there’s no real reason not to look at it.
As I liked the look (and reputation) of them already, I went back to the Antec 4U22EPS650 option which comes with a PSU and has plenty of room and options. One nice point is it can potentially be expanded to have 3×5.25″ slots available on both sides of the rack, allowing for hot-swap bays to be added fairly painlessly. It’s easier to find than the other options and will easily address any future expansion concerns.

Hot Swap Drive Bays

Whilst not exactly a key requirement for the new build from the get-go, easily accessible drive bays are something I’d definitely like to have in place.

I’ve been looking at a few possibilities, but the one that still leads at the minute is one of the 4 x 3.5″ in 3 x 5.25″ options from IcyDock. Not being in a position to get one yet, I haven’t quite decided whether I want to go fully tool-less or not, but price will probably be the main deciding factor. Otherwise, they all do SATA3, which is my only real concern.

Processor

As I’ve already covered, one thing I am keen for this box to do is cater for other projects beyond the scope of a “standard” NAS, including acting as a hypervisor and a PVR backend. So it could do with a bit of oomph in the CPU department, whilst at the same time being conscious of the fact it will be on most (if not all) hours of the day. Given that I’ll be using ZFS for my storage, it also needs to be (ideally) able to work with ECC memory. The more I looked at available desktop options, the more I came to realize that using one of the newer i3 / i5 / i7 Intel chips would mean sacrificing ECC or paying a lot more for enterprise boards – which I can’t justify at the minute. ASUS, on the other hand, offer consumer boards that support ECC merrily in their AMD line.
So I got looking at AMD CPUs. Looking at the stats, the energy-efficient ones offer considerable savings, but they were hard to find in the UK. Not impossible though, and I managed to pick up a 615e on import. I just needed a heatsink / fan for it, which eBuyer helped with.

Motherboard

Having already come down on the side of an AMD CPU and ECC RAM, I was looking fairly firmly at Asus to help fulfil the motherboard requirement, and I don’t think they’ve disappointed. The M5A99X EVO ticks all the right boxes and caters for a good number of hard drives (6 using SATA 6Gb/s ports and 2 using 3Gb/s) along with ECC RAM support.
As with the CPU decision, power usage here won’t be as low as I might have wanted from a “pure NAS” box, but it should have more than enough grunt for the additional tasks I want it to perform.

Memory

The MB can take up to 32GB ECC RAM, but I found 8GB ECC Unbuffered DIMMs hard to come by. All the main manufacturers’ sites have them, but finding them at suppliers was more of a challenge.

In the meantime then, I’ve opted to settle on 4 x 4GB DDR3 PC3-10600 DIMMs from Kingston, KVR1333D3E9S/4G. The price point is pretty good for my plans, and 16GB should give me plenty to work with.

Longer term, 32GB is the plan, using 4 x 8GB DIMMs, but I can expand to that when needed.

Hard Drives

At the same time as I first started thinking about doing this build, Western Digital released their Red drives, aimed specifically at the SOHO NAS market. I’ll be honest, I don’t know a vast amount about the intricacies of hard drive technology – and I do appreciate that “Enterprise” / “Server” drives will always be more substantially tested than SOHO / Desktop equivalents – but in the reading I did around these, the implication seems to be that they’re worth a go. Firmware improvements, longer-than-normal warranty and being pitched (and tested) as always-on seems to make them a reasonable choice. I guess time will tell.

When I first set about this, I sort of default-assumed I’d be looking at the 2TB drives as a compromise between price and resilience, either in a mirror or raidz (RAID-5, effectively) configuration – so tolerant of one disk failure. However, recent discussions have encouraged me to look towards using the 3TB disks, but upping to raidz2 (double parity).

For the boot disk, I’ve opted to go with a single 2.5″ SATA2 drive from WD. I might eventually up this to a software mirrored RAID option, but as all the important details will be backed up to the ZPool anyway, downtime for a single disk failure isn’t the end of the world (at this point).


Side Note: The Other Build

As noted in passing earlier, I am also building a similar (but slightly lower spec) machine for my parents. The intention is that it will serve as a central store and backup machine for them, along with serving as an off-site backup for myself using snapshot send / receive.
This machine wouldn’t need to do the extra stuff I want the rackmount box to do. It doesn’t need to PVR and it doesn’t really need to host VMs. It needs to hold data and run some relatively light services for the rest of the network.

Given the £100 cashback offer that HP are still running, I opted for an HP ProLiant N40L Microserver, replacing the 2GB RAM with 2 x 4GB ECC RAM modules and adding a StarTech PCI-E dual-port network card to provide the additional ports the box will ultimately need. For disks, I used three Western Digital Reds, 2TB each, and kept the 250GB drive that came installed in the system as a root drive.

Home NAS & Server (2): Software Choices

This is Part 2 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


Given the scope of what I want the new build to do, it feels wise to decide what software I want to use to provide those capabilities, as it’s more than likely that some of those choices will affect the hardware specifics they need to run upon.

Looking back over the previous list I feel there’s a few key points to cover, namely Operating System, Filesystem / RAID, Virtualization, and DVR. There’s probably more besides that I’ll think of later.

Operating System

It’ll be no surprise that the OS is going to be a flavour of GNU/Linux here (it’s sort of the point of the project), but I should probably justify both why, and which flavour I’ll be looking at using.
Using Linux is the obvious choice to my mind based on a number of factors, not least of which is familiarity (as a server platform) combined with my general preference for Open Source software. It’s cheaper too, of course, although to be honest that’s less of a major factor given that Home Server versions of Windows aren’t massively expensive these days. The thing is, I’ve never really liked Windows Server offerings; even now that I’m in a position to access them through labs at work, they just feel clunky. Solaris / OpenSolaris / Illumos I disregarded too, as I’m not as familiar with them and, frankly, they don’t feel as flexible as Linux has in my experience.
I considered BSD-based solutions which, whilst viable, left me less sure I could get things like the PVR-backend functionality (and TV tuners) working without considerable (additional) messing around. I’m going to try and get more familiar with BSD, but I can’t justify this build as being the time to do it – given the Swiss Army Knife approach.

In terms of the distribution, I originally came down on the side of Ubuntu 12.04 LTS Server Edition, with a view to moving to Debian 7.0 (“Wheezy”) when it’s released. However, with Wheezy’s release being so close, I figured it’s reasonable to start with it straight out of the blocks.
Debian’s what I’ve done most of my playing around with and I’m a big fan of APT for package management. However, I’m less confident of getting TV Tuners working with Squeeze, unless I start to draw more heavily on Backports – if I’m going to do that, I’d rather use Ubuntu LTS in the interim, and jump to Wheezy when it’s “new”.

Filesystem / RAID

When I set off down this path of building my own NAS I genuinely hadn’t given much thought to how I was going to handle this aspect. In my head, it was obvious – mdadm software RAID (I can’t really force any justification for using hardware RAID), most likely with LVM on top of it and most probably ext4 as the filesystem.

The more I got to thinking about it though, the more I got comparing some of my desired features with the kit I get to play with at work (NetApp filers of varying size and shape). Whilst I had previous exposure to snapshots through ZFS on Solaris, I never fully appreciated quite how powerful and useful they are. I like the idea of being able to do relatively easy, incremental backups to a remote location; I like the scalability of it (even if the scale is way beyond anything I’d need for a home NAS); and I like the extra steps it takes in checksumming and data integrity. Given the way ZFS handles disks, it effectively takes care of the software RAID side of things at the same time.

The downside is that ZFS isn’t really native on Linux (I’ll let Google help you, but fundamentally it’s a licensing issue). There’s a ZFS-FUSE project but as you might expect, all the reports suggest performance is sluggish to say the least. Additionally, I’ve already ruled out using Solaris / BSD as a base. There is, however, a native port maintained as ZFS on Linux which has a pretty good implementation working (albeit not quite with all the features of the very latest ZFS). I’ve been following the project for a while and there’s some healthy activity and, generally, I’ve been impressed.

To keep things native, I also looked at BTRFS which seems to be showing a massive amount of promise but, as of yet, doesn’t quite tick all the boxes to me on older / non-bleeding edge kernels. It’s something I definitely want to keep an eye on and test further though, as it seems to have the potential to surpass ZFS as time goes by, especially with the continued uncertainty after Oracle’s takeover of Sun and whatnot.

So, whilst it flies a little in the face of some recommendations from friends, I’m deciding to trust a collection of things I read on the internet and going with ZFS On Linux at this stage.
The key point for me was send / receive snapshots (on a relatively stable kernel version) – as I’m planning to build a similar device at my parent’s place, it painlessly addresses my off-site backup desire.
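As a taste of why send / receive won me over, the off-site backup loop can be as simple as the sketch below. The hostnames, dataset names and snapshot names are all made up for illustration:

```shell
# Take a snapshot locally:
zfs snapshot pool1/home@2013-04-01

# First run: ship the full stream to the remote pool over SSH:
zfs send pool1/home@2013-04-01 | ssh parents-nas zfs receive backup/home

# Subsequent runs: send only the delta between two snapshots,
# which is what makes the incremental off-site backup cheap:
zfs snapshot pool1/home@2013-05-01
zfs send -i pool1/home@2013-04-01 pool1/home@2013-05-01 | \
    ssh parents-nas zfs receive backup/home
```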

Virtualization

Given the way I seem to be wired, when I first considered having this build server as a platform for a few VMs (both for “production” use and for testing environments) I got way ahead of myself in considering whether running the host as dom0 for a Xen hypervisor, or using KVM. In both cases, I was heading down a road of working out whether running the hypervisor kernel would have knock-on effects on the systems desired output (NAS / PVR) and how to mitigate that.

Eventually, I realized I was being ridiculous. At most, I’m going to be running between 3 and 5 VMs near-constantly, with maybe a couple of extras fired up for lab purposes. None of them are going to be particularly resource-intensive, nor should they be under high load. As much as anything, they exist to provide some sandboxing and allow me to mess with config management and other tricks and tips that can ultimately be useful and applicable in a work environment.
Before anyone chooses to state the obvious – I can appreciate that Xen / KVM would be pretty good skills to have, but shoe-horning them into my domestic NAS environment strikes me as overkill (and if it *strikes* me as overkill, it *definitely* is overkill).

In the end I think I’ve settled on using VirtualBox, at least in the interim, for a few reasons: I’ve got some experience with it already in headless mode (albeit not enough); it provides enough of the passthrough features to make it versatile enough for my needs (I run my current labs in VirtualBox on my desktop); decent management tools exist; it can do some fun stuff to aid integration with the neater features of ZFS; and, from what I can gather, the key limitations really only manifest themselves at a business level, rather than in home use.
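To give a flavour of what headless mode amounts to, creating and starting a VM from the command line goes something like the sketch below – the VM name, OS type and NIC are hypothetical placeholders, and this assumes VirtualBox is installed on the host:

```shell
# Create and register a new VM, then give it some RAM and a bridged NIC
VBoxManage createvm --name labvm1 --ostype Debian_64 --register
VBoxManage modifyvm labvm1 --memory 1024 --nic1 bridged --bridgeadapter1 eth0

# Start it without any GUI attached
VBoxManage startvm labvm1 --type headless

# Check what's running, again without needing a display
VBoxManage list runningvms
```

Everything else (attaching disks, RDP console access, and so on) is driven through the same VBoxManage tool, which is what makes it workable on a GUI-less server.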

DVR

As has been hinted at previously, I really have no firm alternatives to MythTV for this purpose.
I first saw MythTV in action at a LUGRadio meetup many moons ago and I was wowed. Whilst I don’t doubt that there are other alternatives available (most, to my knowledge, need Windaz) – and certainly others that are quicker to configure – the sheer scope of what MythTV has achieved appeals to me massively. And I want to be – in some way – involved.

But, I’ll be honest, DVR had taken a bit of a backseat for me in short-term plans due to the difficult integration with XBMC but then these two posts ([1] and [2]) got me excited about it again. The only difficult part will be choosing a decent, supported tuner.

Home NAS & Server (1): Need vs. Want

This is Part 1 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index.


As may be apparent from the index, one key feature of this new build is that it needs to be a bit of a Swiss Army Knife. Whilst I’m well aware I could quite easily build a lower-power, (probably) cheaper and quieter machine that would fulfil the primary requirement of providing more storage space, it wouldn’t be as capable of some of the other tasks I’d ultimately like this unit to do. For what it’s worth, if it was just about storage and I was going to go for a pre-bought option, I’d probably seriously consider another QNAP – I really have liked my TS-219P.

Let’s start by covering what I currently use the QNAP for and what benefits it provides, so the bare minimum that this new build has to offer:

  1. Video storage – My own DVD rips and the like, centrally stored for access by XBMC installs
  2. Photo storage – Finally, I shifted them off my aging USB hard drive. Hopefully, the number will begin to increase again before too long
  3. Music storage – Accessed by various means, including a Subsonic server
  4. Miscellaneous files – a pretty lax (if I’m honest) and ad hoc approach towards backing up what I consider to be my “important” stuff
  5. Interoperability with *nix and Windows clients (feasibly this should stretch to Mac, although I have no immediate need for that. Pedants, hush)
  6. Decent (local) backup and resilience – alright, it’s just mirrored disks in a little box, but it’s been sufficient so far
  7. A stable platform – It’s been up almost a full year now and shows no sign of causing any stability issues. It gives me my files when I want them, and writes what I want to write when I want to write it (until recently, see “space consumption”)
So those are the minimal expectations. As you can see from the list, the only thing I actually need in order to address the shortcomings of my current solution is more storage space, which I could easily get with either a larger (4-disk) QNAP device, or a mini- / micro-ATX board, 1U housing and a few disks slapped into a software array with mdadm.
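For reference, the mdadm route really would be about that simple – a sketch with hypothetical device names (four disks in RAID 5), needing root on the target box:

```shell
# Build a 4-disk RAID 5 array from whole disks
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on the resulting array device
mkfs.ext4 /dev/md0

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Which is precisely why, for storage alone, it would be hard to justify anything fancier.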
But… as I’ve hinted at plenty of times already, I’m looking to kill multiple birds with one stone. I’ve already accepted that this is going to mean the system won’t be quite as lean on power consumption as I might have hoped, but with the various advances in “green” solutions in more recent components I’m fairly confident this can be somewhat mitigated. Worst comes to worst, a lot of what I want the system to do doesn’t require it to be powered on all the time, so making allowances for that is an option.

But enough of that, what do I want this build to do and what do I need it to do?

Need

  • Fault-tolerant storage
  • Multiprotocol (Windows, Linux, Mac) file sharing
  • Usable capacity of 3TB or more (minimum 50% increase on current capacity)
  • Command-line access
  • UPS integration
  • Power-saving features (schedules, WOL)
  • Able to perform as a central backup platform for other systems
  • Stability

Want

  • Rackmountable (max 3U, max depth 22″)
  • (relatively) Quiet
  • Expandable
  • Configurable with (moderate) ease
  • Hot-swap data disks
  • Capable of acting as an iSCSI target
  • Network interface bonding
  • PVR / DVR backend
  • Able to host a few virtual machines simultaneously
  • Easy remote (off-site) backup options – ideally incremental

And that is just about that.

Linux-Based Home NAS & Server

Shamelessly inspired by this brilliant series of blog posts – http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ and http://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/ – both of which contain a great many more details than I will likely cover.

Before moving up to Newcastle, I picked myself up a QNAP TS-219P with a couple of 2TB drives mirrored. At the time, I ummed and ahhed about paying a bit more for a four disk version from an expansion perspective, but couldn’t justify it for a few reasons:

  • Extra cost – I didn’t have any present *need* for the extra capacity, and there was a reasonable step-up in price for a single-function box I didn’t need the full capabilities of.
  • Format – I knew if I was going to get a 4-disk-or-bigger version I would – longer term – want something rack-mountable, but at the time I had no rack to put it in. Without that, a rack-mountable NAS would just be silly.

Now, two years on, a few things have happened. Firstly, I’ve moved into my own place, complete with Cat6 wiring to as many places as I felt it practical and a rack to boot. Secondly – and perhaps more importantly in the context of these posts – I’ve reached the limits of my QNAP 2-disk solution. With an increasing multimedia collection, a refuelled desire to start taking more pictures again, and additional space for making a concerted effort to centralize my other data, more space is going to be needed.

However, as the next few posts should reflect, there’s a few other things I want the system to do, so the considerations (and cost) expand a bit beyond the remit of “NAS”.

Contents

Setting up the Sheevaplug as a Weather Station

UPDATE 25/5/2011:

$ uptime
07:42:35 up 94 days, 14:00,  1 user,  load average: 0.74, 1.32, 0.68

Pretty happy with that so far!


A few months back I got hold of a Marvell SheevaPlug from NewIT with the express intention of replacing the aged, noisy desktop PC with a more efficient, lower-power and quieter solution. Its sole purpose was to run the open source weather station solution Wview – a tidy little collection of daemons that records the weather statistics provided by a variety of different weather station hardware and then allows you to do a bunch of things with the data (generate your own HTML templates, send it to Wunderground, Weather for You, and so on).
With that in mind, my key spec and concerns for the device were as follows:

  • Run Debian Squeeze
  • Fetch data from an Oregon Scientific WMR968 via a USB-to-Serial adapter
  • Record the data consistently using WView
  • Given the risk of the SD Card failing, regularly dump a backup of the data to a remote store
  • Take steps to minimise the effect of regular writes to the SD card and so prolong its life
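Those last two points mostly come down to a cron job and some mount options. A sketch of the sort of thing involved – device names, paths and the backup host are hypothetical, and note that logs kept in tmpfs are lost on reboot:

```shell
# /etc/fstab additions (sketch) – mount root with noatime to cut
# metadata writes, and keep the chattiest directories in RAM:
#
#   /dev/mmcblk0p1  /         ext3   noatime,errors=remount-ro  0  1
#   tmpfs           /tmp      tmpfs  defaults,noatime           0  0
#   tmpfs           /var/log  tmpfs  defaults,noatime           0  0

# Hypothetical crontab entry for the remote dump – a nightly rsync of
# the wview data directory to another machine:
# 0 3 * * * rsync -az /var/lib/wview/ backuphost:sheevaplug-backup/
```

Tools like Flashybrid (mentioned below) automate much of the RAM-directory side of this, which is exactly why I was keen to use it.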

When I initially got the SheevaPlug from NewIT, it was already running Debian Lenny, and I started taking steps to put things like Flashybrid on there. However, I was hasty, and ended up breaking the install.
Given that I was going to have to do it all again, I thought I’d best document it, especially as I could only find the information I used spread out over 4 different websites (see References, bottom). As such, these details are pretty specific to the exact steps I took, rather than the more general (and detailed) information that can be found at those references.

PLEASE NOTE: My installation environment was a standard SheevaPlug (no eSATA port), installing from a TFTP server onto the SD card, and making that SD card bootable. If your planned install environment differs from that, use Martin’s site to see what you need to do differently.
For all code and commands, I have used
#~ to indicate that commands are being input on a new line.

Continue reading

Pastures New

Guess it’s time for a quick catch-up of all that’s gone on over the last few weeks. The “technical” bits of this post will be dealt with in more detail in their own posts, but I’ll reference them here regardless!

End of Work

The last few months were pretty tough in terms of approaching the known end of employment at LUNS. Lancashire County Council were taking the CLEO contract on themselves under the auspices of their new “Strategic Partnership” with BT and I was fairly adamant I didn’t want to work for them so, with LUNS scaling back to an affordable staffing level while pursuing new contracts, I decided to exercise my laughable “right” to opt-out of TUPE and treat the situation as an opportunity to set out and find something new. In this instance, that meant moving to Newcastle, picking up the gauntlet of unemployment and subjecting myself to the tedious process of job- and house-hunting. Which is where I find myself now.

The process of TUPE is an interesting one. Apparently, it’s envisaged as a “right” on my part. Something I should be proud of, thankful for. But in my case it’s anything but that. What it actually does is force me to resign, rather than be made redundant (which is perhaps a more appropriate term for my actual position) based purely on the fact that I don’t agree to a change of employer that’s out of my control. As with so many laws and rules, it strikes me as something that was implemented with lofty goals and dreams that is, in reality, a heavy-handed tool that does nothing to appreciate or attempt to differentiate along the grey lines of real world situations.
As you might be able to tell, I don’t like it.

So that was that. The last few days drifted by and the new job hunt picked up. It had been a good two and a half years at LUNS, and I genuinely hope and reckon they’ll be sorted in no time. The LCC Strategic Partnership on the other hand… let’s just say I have no love for it.

New Location

As mentioned above, along with finishing work I decided to set out and move to Newcastle to find new work. The reasoning was two-fold:

  1. I’ve spent quite a bit of time up this way over the last year and a half that I’ve been seeing Rach, so I know I like the place and know enough people to not feel completely out of my depth.
  2. I had my mind made up for a good while that when work at LUNS finally finished for me, I wouldn’t be looking for any more work in the Lancaster area. Not because I don’t like it round there – I think it’s great – but because I wanted to try something different. In my head I had it down to a choice between Newcastle or Manchester. For obvious reasons (see 1), I chose Newcastle.

So far, it’s not been going too bad. I don’t have a job yet, but I have got a number of applications in and have plenty to keep me occupied outside of it (including the “Tech Tasks” below).
Managing to get some cycling and climbing in, and hoping to get my head stuck into PHP and SQL a bit more once I can find a decent resource to help with it. The job hunting is tedious, as it ever is, and the house hunting isn’t much better but, feck it, it’s got to be done.

Generally, I’ll be much happier when I’ve got some work to pay the bills with but things are good and I reckon I could get used to it up here without too much hardship.

Tech Tasks

In moving up here, it meant I – obviously – wasn’t going to be around at home as much to keep putting off doing work I’d planned to do, such as moving the Weather Station software away from the old, noisy, knackered re-housed box that was humming away under the stairs to running on one of them there new-fandangled SheevaPlugs that I’ve had in my possession for far too many months after managing to break the first version I had.
It’s pretty neat, and currently sits there running silently off a single SD Card under the newly released Debian Squeeze. Full details of how I got it working will follow in another post, as it included a bunch of different bits of advice dotted around the place.

On top of that, the fresh announcement that Greenpois0n RC6 now worked with the AppleTV meant that much of this weekend was spent with finally jailbreaking my AppleTV 2G so I could stick XBMC on there. Again, I’ll probably do another post on that to combine in all the bits that I’ve done to get it working how I want (for my own easy reference as much as anything else) but so far I’m pretty impressed, and had it happily streaming a plethora of films and TV Shows from my shiny new QNAP NAS, and HD iPlayer over t’interwebs without issue.

So I guess overall I can say things are good, could-be-better, but plenty to keep me occupied and keep me busy. Hopefully I’ll also get back into the swing of reading more now I’m away from other distractions.

Time will tell.

Electoral Blues

http://www.bobdylan.com/#/songs/political-world

So, Brown’s resigned his Labour Leader post; rumours abound about a Con-Lib deal and a PM has formally resigned; tempers are flared; fuses are short; everyone in the country (it seems) is blaming anyone they can; and I’m a good number of days late in writing anything about a British Election that – frankly – bored me silly until the closing weeks, but now has me pretty intrigued. All at the time of writing, of course.

I may as well be honest and state from the outset that party and statewide politics bore and irritate me. It’s not something I support and in my own little idealist way can’t until the time comes that we can cast off those shackles towards real freedom. But, whatever protestations I may have, it’s the system I find myself under, and as such find I need to be realistic about it. Which probably explains my interest.

Anyway, another thing I should get out of the way is which way I voted which, in honesty, is that I voted for this. A hung parliament. And, much to my own surprise, one that would result in a Con-Lib coalition. Why? Because I honestly believe that of all the myriad (ok, four) possibilities being mooted about the place, that is the only combination that could actually introduce any real (and in my view, necessary) changes to the country that most people I think would agree needs to happen, current economic climate or no.
On the topic of Manifestos, I can admit that the only one I read in full was the Liberal Democrat one, and I spent some time scanning the Tory manifesto for points I agreed / disagreed with, and similarly scanned the key Labour points (as I saw them).
So, whether it’s agreed with or not, I feel I’ve done my bit to make the decision I wanted to make, and made it based on choosing the party I disagreed with least at a Manifesto and personality level. And so it’s made. So it remains to be seen what the parties finally decide.

A bunch of things throughout the campaign, and after have annoyed me from all sides, so I thought I’d choose a few as I remember them:

ConLib / LabLib / Rainbow Squabblings

The whole episode of the last few days seems to have been dominated by the extremes on both sides (and may yet still be decided from them). Hardline Liberal Democrats appalled to be dealing with the Conservatives, and clinging desperately to their failed notion that New Labour are really as “Lefty” a party as they like to think they are; Tories disgusted to be seen to be “lowering” themselves to dealing with 3rd place when “they got more votes anyway” (irony, anybody?); and Labour supporters uncertain over what they want more – to see Gordon Brown away from the leadership or to keep themselves in power. All of which, frankly, are ridiculous and missing the point completely.
Lab-Lib would have been a mistake, whichever way you cut it. At a time when there’s a genuine opportunity for change, having two parties align that have ‘historic’ ties amongst back-benchers and party members would have inevitably been frail and wouldn’t have lasted (by my reckoning) more than 2 years. Similarly with a “Rainbow” coalition, although my money’s on that having lasted even less time. As it is, I’d be fairly confident that a ConLib agreement could be mutually beneficial, and potentially provide the right checks and balances between two quite different parties and policies. I just hope the Lib Dems can approve the deal by more than 75% to allow Clegg to pursue it, then let everyone see what the details are and what we think can happen. As many seats as they do have, I don’t think the Tories can last if they’re made to go it alone.

Clegg’s Pimping Himself Out and LibDems Not Doing as Well as People Thought

Two myths, right there, if you ask me. Firstly, to the actual election results, which near enough tied in with my expectations (alright, I didn’t expect the LibDems to actually lose seats, but I didn’t expect them to gain any). Yet there was such a public sense of defeatism from those people who foolishly thought their LibDem votes were going to result in a landslide. It was never going to happen. And the reason it would never happen is for exactly the reasons the party had espoused beforehand – electoral reform. So why be surprised?
And secondly, the recent frustrated notion that Clegg has been pimping himself out to any party in an eager attempt to get power. Two issues with this: 1) What’s the problem? He’s a politician. He can’t, really, ever achieve anything noteworthy if he stays in opposition (which, realistically, is what the LibDems risk facing if they don’t take the opportunity now while it presents itself); and 2) Why is it “pimping yourself out” to go and hear both possible deals that are on the table, and make your decision accordingly? I’ve said it already earlier today, but I’d be many times more disappointed in anyone who clung desperately to the first offer they received rather than taking their time and weighing up the choices. I even heard one person on the radio today saying they were disgusted with Clegg and that they only voted LibDem to get Labour out… well… that comes at a risk, doesn’t it? And that risk is that you’ll, most likely, get Tory, in some incarnation. Deal with it. And stop being silly.

Unlock Democracy and Proportional Representation

A massive sticking point for the LibDems (and it remains to be seen if it will get past them, and in what form), but the news all week has been obsessed with this question of PR, and what it means. Unlock Democracy, for their part, deserve applause for the speed with which they put together their protests. And, I agree, First Past the Post is a ridiculous system and one that does need to be changed. However, as with so many protesters, I can’t agree that their expected timescale is correct, let alone feasible. Expecting a referendum, and for that to then be enacted within a year – in a coalition government (a coalition, incidentally, that whichever way it was formed, would have resulted in disagreements over exactly how far to go with PR) – is optimistic, to say the least. Push for it once the coalition’s in place. And support the coalition with the most contrasting views – it’s the better way to ultimately get the more dramatic changes you want.
I remain quietly optimistic that PR will happen, but it won’t happen if we don’t have a strong coalition government. I never thought I’d say it, but I think Con-Lib is the strongest coalition we can get at this time, and also the one most likely to enact change.

Clegg’s “Two Horse Race” Moment…

… surprised me. Whilst I’m all in favour of being optimistic, this moment struck me as a public show of unmerited over-confidence that, frankly, wasn’t needed. I’m not saying I think they would have done better had he not come out and said this, but I’m sure it didn’t help sway those who were on the edge of deciding which way to go.

LibDem’s “Losing”

A big deal seemed to be made of the LibDems losing seats, both on the public’s part, and their own. I still can’t help feeling this is misplaced. They gained votes. They gained quite a few votes. Yet they lost seats. Is that not, in essence, what PR aims to resolve? If anything it adds fuel to the fire that electoral reform is needed and confirms one of their strongest policies and arguments right the way through.

Tories “Winning”

Not a right lot to say on this except it goes back to arguments about first past the post. This was sadly inevitable, so deal with it. In honesty, I expected them to win outright by a very slim margin, which would have been a much worse situation than we now find ourselves in. Equally, even with the Coalition, it’s no surprise that Cameron took the PM spot (it seems justified). Until late last night I remained optimistic that balance could be restored and Osborne would be replaced as Chancellor by Cable, but it seems that hasn’t happened, and won’t happen this time round. Those were really the only two posts that interest me significantly at this time. Clegg as Deputy could be interesting; guess we’ll see what happens.

Brown as “Unelected PM” Possibility

This kept coming up again and again, especially when the LibDems first said they were going back to talk to Labour. Unfounded fear-mongering at its best. We don’t elect Prime Ministers. Parties elect their leaders, and we elect Parties. Parties decide policies (naturally, a strong party leader can influence those policies). It really is that simple. Stop whining.

So there’s a few, I’m sure there are more, and I’ll probably add them later. But for now, I’m going to go back to waiting to see what the final results are, and what the LibDems side on.

EDIT 1: So, Cameron’s Prime Minister. And the LibDems – to my surprise – have accepted the terms offered in a Coalition. It will be interesting to see what the terms are. Can’t say I’m overwhelmingly pleased with Osborne as Chancellor, but I guess it was to be expected.