Home NAS & Server (5): Tweaks, Quick Tests, and Data Transfers

This is Part 5 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


Once things are set up and running, the first step really is to make all those tweaks and run the short tests we want, to ensure things are behaving as intended, and then to migrate our existing data over to the new system.

Whilst prepping this particular post, I went back to try and find my references – particularly for the tweaks. As Murphy may well have predicted, I couldn’t find them. I’ll do my best on each point to either find them after the fact, or at least state where I saw them and explain them as best I remember, but I apologise for any I’ve missed or miscredited.

Limiting ARC

There’s a bunch of decent references for this on Solaris Internals ([1] and [2]) and some good threads covering how to do it on the zfs-discuss list.

There are a few reasons you might want to do this, but it’s worth noting that the defaults in ZFS on Linux were changed from those Solaris used, so that the ARC defaults to 50% of system memory – https://groups.google.com/a/zfsonlinux.org/forum/?fromgroups=#!searchin/zfs-discuss/arc/zfs-discuss/4iNT8aXouzg/UL6xf69HkjMJ

Additionally, this thread explains how you might make such changes, and introduces a couple of other options you may consider, particularly if you’re planning on running zvols (for block devices) and so on.
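For reference, on ZFS on Linux such a cap is normally applied as a module option. A minimal sketch, assuming a 4GB limit (the value in bytes is purely an example, not a recommendation):

~$ cat /etc/modprobe.d/zfs.conf
# Example only: cap the ARC at 4GB (4 * 1024^3 bytes)
options zfs zfs_arc_max=4294967296

It only takes effect when the zfs module is next loaded, so a reboot (or unloading and reloading the module) is needed.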

At this point I’m not making any of these ARC changes on either build until I’ve monitored how things pan out performance-wise, but they’re good to know about.

Disabling / Limiting Swap

Reference: https://groups.google.com/a/zfsonlinux.org/d/msg/zfs-discuss/PzBAABrZCb4/5Z-tscgBBaQJ amongst other places.

Anyway, the basic premise here is that “normal” swap is your enemy – and the weak link in the chain – in terms of data protection and checksumming in a ZFS on Linux setup.
Why? Well, it comes down to the difficulties in easily running ZFS root under Linux. I haven’t tried to do this yet, but the difficulty lies in being much more careful about how you upgrade things where the root is concerned – especially if you want to keep the experience pain-free. Because of this, a lot of people will install root on “normal” drives, including swap. Once that’s done, if you ever use that swap space (to help avoid out-of-memory conditions) you’re sacrificing all the hard work and additional money you put into ECC RAM – your swap has become a single point of failure.

You can, of course, create a small ZFS volume (zvol) to use as swap. Or have a large amount of RAM that (as in my case) you may not touch for a long time. An easier method, though, may be to just turn “swappiness” down to 0. I’ll let you reference the various pages about this yourselves but, effectively, this all but disables swap unless the system really, really needs to use it (to avoid out-of-memory). The easiest way I found to do this (and make it permanent) was to add a file (swappiness.conf) in /etc/sysctl.d/:

~$ cat /etc/sysctl.d/swappiness.conf
# Only swap to avoid out-of-memory
vm.swappiness = 0

This will get enacted on the next reboot, or you can run sysctl -p /etc/sysctl.d/swappiness.conf to force a read of the options in the config file instantly. Similarly you can echo the “0” straight into /proc. The important thing is to make it persistent across reboots.
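For completeness, if you do want swap to live on the pool itself, the zvol route is roughly as follows – a sketch only (pool name and size are examples, and swap-on-zvol has its own caveats under heavy memory pressure):

~$ sudo zfs create -V 4G -b $(getconf PAGESIZE) pool1/swap
~$ sudo mkswap /dev/zvol/pool1/swap
~$ sudo swapon /dev/zvol/pool1/swap

You’d also want a matching entry in /etc/fstab to make it persistent.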

Copying Data from the QNAP

Going into this, I barely thought about the data migration. It seemed simple – rsync, right?

It made sense. The QNAP can act as a rsync server with ease (Administration Panel > Applications > Backup Server) and, once done, there was no problem installing rsync on the new NAS and connecting to it with:

# rsync -avz admin@nas:/share/Public/ISO/ /srv/multimedia/ISOs

A quick prompt for my QNAP admin password and it starts syncing, compressing the stream and maintaining attributes.

There was one major issue though – it took bloody ages. Seriously, a long time. Almost a day. And that was just for the music directory, which is only around 110GB in size (Films alone occupy more than 1TB). The stats rsync reported at the end of the transfer happily told me it had averaged a mighty 3MB/second. Put simply, that was not the performance I was looking for.

I freaked out a bit. Raidz2 couldn’t really hit performance that badly, could it? None of the research I’d read suggested it would. I ran a dd to one of the newly created filesystems, just to check the disks were behaving alright. Performance was good. I didn’t write down the figure, but it was around 800MB/s – the disk layout isn’t the issue.
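Something along these lines does the same sanity check (a sketch, not necessarily the exact command I ran – and bear in mind that with lz4 enabled a stream of zeroes compresses almost completely, so the figure is only a rough indicator):

~$ dd if=/dev/zero of=/srv/multimedia/ISOs/ddtest bs=1M count=4096
~$ rm /srv/multimedia/ISOs/ddtest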
It shouldn’t be compression (lz4 is enabled by default across the pool) – all the supporting documentation suggests the overheads are minimal and, with modern hardware, there’s little reason not to enable it pool-wide, even if the gains are minimal for some pools.
Is it the network? I’ve got a fairly respectable Zyxel GS-2200 Gigabit switch that, if anything, is massively under-utilized. Besides, nothing else but my desktop was using the network at the time. It seemed unlikely that was the issue either.
I tested without the “-z” (compression) – no joy. I tested a transfer of some extra ISOs from my Windows desktop over SMB – I got a respectable 100MB/s out of it, even with the SMB overheads. Strange.

I researched a little bit and came across a couple of posts that pointed to SSH and the encryption overheads as being likely to slow things down. I doubted they were doing much to tax the new system but it seemed reasonable that they would potentially kill the TS-219P, which – as we covered earlier – is not exactly the meatiest of machines.

As my TV Shows transfer was still running (over the default method, SSH) I figured I’d try the points raised in those posts and mount the NAS locally over NFS and then rsync “locally” from the mountpoint to the ZFS volumes. Comparatively speaking, it flew out the door. Here’s a couple of quick rsync summaries:

rsync TV-Shows folder (SSH):
sent 55932 bytes received 674416624286 bytes 5175538.67 bytes/sec
total size is 674334091593 speedup is 1.00

rsync Live-Shows folder (NFS):
sent 18278740591 bytes received 660 bytes 54644966.37 bytes/sec
total size is 18276506379 speedup is 1.00

It may not be the most scientific of tests, but it’s around 10 times faster. Off the back of that I’m now confident enough to try and move the Films all in one go, with “-z” added back into the mix. Hopefully, it won’t still be running this time next year.
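For reference, the NFS approach boils down to something like the following – the export path on the QNAP side is an example and will depend on which shares you’ve enabled for NFS:

# mount -t nfs nas:/Public /mnt/qnap
# rsync -av /mnt/qnap/Films/ /srv/multimedia/Video/Films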

In transferring everything with rsync, I noticed another little issue I hadn’t been aware of before – a massive number of “.@__thumb” directories. A quick search verified their source – the TwonkyMedia server running on the QNAP.

I don’t intend to use Twonky for DLNA on the new build, nor do I have any desire to keep a bunch of useless hidden folders lying around my filesystem. I knew I could get rid of them using find, but wasn’t entirely confident on my syntax. This page helped, and in the end I was able to do it comfortably with:

find /srv/multimedia/ -name ".@__thumb" -exec rm -r {} \;

It had soon whipped through even the more verbose directories (like the Music folder) and removed all the junk. As far as I can tell, no genuine files were harmed in the running of that command – nor can I find any reason why they should have been.
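If you’re similarly unsure of the syntax, a dry run along these lines first will simply print what would be removed (adding -type d also restricts the match to directories):

find /srv/multimedia/ -type d -name ".@__thumb" -print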

With the exception of the films currently transferring, the data migration has been painless and straightforward. The slow speeds were obviously a concern, but at least the cause turned out to be application- / protocol-level rather than something more fundamental. Right now though, the basic data is in place and sitting pretty:

:~$ sudo zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool1                     946G  4.26T   232K  /pools/pool1
pool1/backup             28.8M  4.26T   198K  /pools/pool1/backup
pool1/backup/backuppc    28.6M  4.26T  28.6M  /var/lib/backuppc
pool1/home                239M  4.26T   198K  /pools/pool1/home
pool1/home/dave           239M  4.26T   239M  /srv/home/dave
pool1/multimedia          946G  4.26T   198K  /pools/pool1/multimedia
pool1/multimedia/Audio    107G  4.26T   107G  /srv/multimedia/Audio
pool1/multimedia/Books    267K  4.26T   267K  /srv/multimedia/Books
pool1/multimedia/ISOs    2.54G  4.26T  2.54G  /srv/multimedia/ISOs
pool1/multimedia/Photos  2.59G  4.26T  2.59G  /srv/multimedia/Photos
pool1/multimedia/Video    834G  4.26T   834G  /srv/multimedia/Video

Home NAS & Server (4): Setting up ZFS

This is Part 4 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


EDIT – April 2013: Since originally writing this segment, ZFS on Linux has reached its first non-RC version – 0.6.1. With that release, Debian Wheezy repositories are available that handle the install process. If upgrading from these instructions, the previous packages must be purged fully before installing from the repo (dpkg --purge zfs zfs-devel zfs-dracut zfs-modules zfs-modules-devel zfs-test spl spl-modules spl-modules-devel). Up-to-date Debian-specific instructions are at http://zfsonlinux.org/debian.html

Fortunately, the good folks involved with ZFS on Linux have got both useful instructions on compiling your own packages along with a lovely Ubuntu PPA repository for both stable and daily releases. Conveniently the Lucid PPA works perfectly with Debian Squeeze, although I’ve settled on Wheezy for the HP ProLiant Microserver, so for that I’m building from source.

Compiling and installing ZFS

In testing (within a virtual environment) I followed the deb-building advice for my Squeeze machine, and found the only thing I needed to do was run update-rc.d zfs defaults after install to ensure that the pool is automounted on boot. The instructions were just as painless for Wheezy, and needed no trickery to automount the zpool on boot.
Using the PPA on Ubuntu I had no such concerns.
The PPA pages detail all that is really needed to add the repository to Ubuntu, and installation is as simple as aptitude install ubuntu-zfs, but there were a couple of different steps needed for the Debian Squeeze system. As Wheezy shouldn’t be too far off becoming the new stable, I won’t spend any time on those steps – Google is your friend.
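For the Ubuntu route, the whole process is roughly as follows (a sketch, assuming the PPA names haven’t changed since I set this up):

~$ sudo add-apt-repository ppa:zfs-native/stable
~$ sudo aptitude update
~$ sudo aptitude install ubuntu-zfs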

Once it’s finished installing and the kernel modules are compiled you should be able to run zfs and zpool commands.

Preparing zpools

I won’t be using hot swap / external loading drives on either of these builds, at least not to begin with, so having a relatively painless way to identify which disk is which is important to me. To that end, I decided to use the vdev_id.conf file to assign human-readable names to all the installed disks. As detailed on the project page, this technique is really more useful in much larger environments, to ensure you can quickly and easily identify separate controllers for greater redundancy in your pools. In my case it’s more so I can quickly and easily identify which disk is broken or misbehaving when that time comes. A quick cross-reference of the disk serial numbers (noted before I inserted them into the bays) with the device assignments once booted helped me confirm the correct PCI addresses. The naming I decided on was:

~ cat /etc/zfs/vdev_id.conf
#
# Custom by-path mapping for large JBOD configurations
# Bay numbering is from left-to-right (internal bays)
#
#<ID> <by-path name>
alias Bay-0 pci-0000:00:11.0-scsi-0:0:0:0
alias Bay-1 pci-0000:00:11.0-scsi-1:0:0:0
alias Bay-2 pci-0000:00:11.0-scsi-2:0:0:0
alias Bay-3 pci-0000:00:11.0-scsi-3:0:0:0

After doing the above, running udevadm trigger will cause the file to be read and the symlinks to the correct devices will appear in /dev/disk/by-vdev/. The benefits should become clearer once the pool has been created.
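A quick way to check the aliases have taken effect (output trimmed – which sdX device each bay maps to will obviously depend on your system):

~ udevadm trigger
~ ls -l /dev/disk/by-vdev/
Bay-0 -> ../../sda
Bay-1 -> ../../sdb
Bay-2 -> ../../sdc
Bay-3 -> ../../sdd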

For reference, Bay-0 is my root drive – I’ve not opted for ZFS on root, nor any real resilience, due to space constraints (I may work around this in the future). For the rackmount build – as mentioned – I’ll be looking at small 2.5″ drives mirrored using mdadm for the root filesystem.

At this point, all that’s left to do is create the pool. As mentioned, Bay-0 is my root disk, so the disks I’ll be using are Bay-1, Bay-2, and Bay-3 – configured as raidz. To confirm things, I ran zpool create with the -n flag to verify what would be done. Once happy, the -n can be removed and the command run:

zpool create -f -o ashift=12 -O atime=off -m /pools/tank tank raidz Bay-1 Bay-2 Bay-3

A couple of notes on this:

  • I used -f because I received the following error without it:
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/disk/zpool/Bay-1 does not contain an EFI label but it may contain partition
    information in the MBR.

    I’ve seen this error in my testing as well, and found a few threads on it, but nothing convincing. I confirmed with fdisk that the disks do not, in fact, contain any partition tables, and opted for -f to force past it.
  • ashift=12 is per the recommendation for drives with 4K block sizes (per http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives).
  • atime=off as a default setting for the pool filesystems as I can’t see any real reason to enable it across them all.
  • -m allows me to specify a mountpoint – whilst I’ll only have one pool, I figured specifying a specific pools directory would be a good habit to form if I ever want to import other pools in the future. The directory must exist before this command is run.

Once created, things can be confirmed with zpool status:

~ zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            Bay-1   ONLINE       0     0     0
            Bay-2   ONLINE       0     0     0
            Bay-3   ONLINE       0     0     0

At this point I’m ready to do pretty much what I want and can verify my pool is online and mounted with df.
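From here, carving the pool up into filesystems is just a matter of zfs create – a quick sketch using this pool as an example (the dataset names and mountpoints are illustrative, not prescriptive):

~ zfs create tank/multimedia
~ zfs create -o mountpoint=/srv/multimedia/ISOs tank/multimedia/ISOs
~ zfs list -r tank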

Home NAS & Server (3): Hardware Choices

This is Part 3 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


Once the software choices had been settled on, it’s easier to get a clearer picture on what I want to get from the hardware along with quite what spec I need.

Case

Having initially wanted to make the new case fit into a 3U max space (which is what I happen to have free in my cabinet at the moment) I was getting a bit nervous when trying to find options. What I began to find was that the vast majority of the available options will easily fit in 2U or 3U, but most end up requiring 26″ depth, which I don’t have. The smaller cases were largely ruled out as they required boards that couldn’t take the amount of RAM I’d want, or didn’t have the requisite disk / expansion slots to cater for the storage side of things.

For a long time I was looking at the Antec rackmount cases (2U / 3U EPS systems) but found them hard to source, and I was concerned about getting tied in to specific (expensive to replace) PSUs. As time’s gone on it seems they’re hard to find because they’re being wound back, if my recent visit to the Antec website is anything to go on. I later shifted to looking at the Chenbro RM-22300 because it’s seriously cheap and meets the form-factor requirements. As an additional bonus it also fits standard PSUs, although it looks a bit of a cludge in how it’s cabled. Its big drawback is that whilst it would fit the bill today, it lacks room for any real expansion.

Looking back at things more recently, I’ve figured I can safely sacrifice my 2U shelf in order to make room for a 4U case. Whilst I don’t need anything that size right now, it pretty much allows for a full-sized desktop case which means normal PSU, lots of drive bays, and plenty of easy-to-facilitate expansion. Provided I get some fixed rails to support the weight, there’s no real reason not to look at it.
As I liked the look (and reputation) of them already, I went back to the Antec 4U22EPS650 option which comes with a PSU and has plenty of room and options. One nice point is it can potentially be expanded to have 3×5.25″ slots available on both sides of the rack, allowing for hot-swap bays to be added fairly painlessly. It’s easier to find than the other options and will easily address any future expansion concerns.

Hot Swap Drive Bays

Whilst not exactly a key requirement to be in place on the new build from the get-go, easily accessible drive bays is something I’d definitely like to have in place.

I’ve been looking at a few possibilities, but the one that still leads at the minute is one of the 4-in-3 options from IcyDock (four 3.5″ drives in three 5.25″ bays). Not being in a position to get one yet, I haven’t quite decided whether I want to go fully tool-less or not, but price will probably be the main deciding factor. Otherwise, they’ll all do SATA3, which is my only real concern.

Processor

As I’ve already covered, one thing I am keen for this box to do is cater for other projects beyond the scope of a “standard” NAS, including acting as a hypervisor and a PVR backend. So it could do with a bit of oomph in the CPU department, whilst at the same time being conscious of the fact it will be on most (if not all) hours of the day. Given that I’ll be using ZFS for my storage, it also needs to be (ideally) able to work with ECC memory. The more I looked at available desktop options, the more I came to realize that using one of the newer i3 / i5 / i7 Intel chips would mean sacrificing ECC or paying a lot more for enterprise boards – which I can’t justify at the minute. ASUS, on the other hand, offer consumer boards that support ECC merrily in their AMD line.
So I got looking at AMD CPUs. Looking at the stats, the energy-efficient ones offer considerable savings, but they were hard to find in the UK. Not impossible though, and I managed to pick up a 615e on import. I just needed a heatsink / fan for it, which eBuyer helped with.

Motherboard

Having already come down on the side of an AMD CPU and ECC RAM, I was looking fairly firmly at Asus to help fulfil the motherboard requirement, and I don’t think they’ve disappointed. The M5A99X EVO ticks all the right boxes and caters for a good number of hard drives (6 using SATA 6Gb/s ports and 2 using 3Gb/s) along with ECC RAM support.
As with the CPU decision, power usage here won’t be as low as I might have wanted from a “pure NAS” box, but it should have more than enough grunt for the additional tasks I want it to perform.

Memory

The MB can take up to 32GB ECC RAM, but I found 8GB ECC Unbuffered DIMMs hard to come by. All the main manufacturers’ sites have them, but finding them at suppliers was more of a challenge.

In the meantime then, I’ve opted to settle on 4 x 4GB DDR3 PC3-10600 DIMMs from Kingston, KVR1333D3E9S/4G. The price point is pretty good for my plans, and 16GB should give me plenty to work with.

Longer term, 32GB is the plan, using 4 x 8GB DIMMs, but I can expand to that when needed.

Hard Drives

At the same time as I first started thinking about doing this build, Western Digital released their Red drives, aimed specifically at the SOHO NAS market. I’ll be honest, I don’t know a vast amount about the intricacies of hard drive technology – and I do appreciate that “Enterprise” / “Server” drives will always be more substantially tested than SOHO / Desktop equivalents – but in the reading I did around these, the implication seems to be that they’re worth a go. Firmware improvements, a longer-than-normal warranty, and being pitched (and tested) as always-on seem to make them a reasonable choice. I guess time will tell.

When I first set about this, I sort of assumed by default I’d be looking at the 2TB drives as a compromise between price and resilience, either in a mirror or raidz (RAID-5, effectively) configuration – so tolerant of one disk failure. However, recent discussions have encouraged me to look towards using the 3TB disks, but upping to raidz2 (double parity).

For the boot disk, I’ve opted to go with a single 2.5″ SATA2 drive from WD. I might eventually up this to a software mirrored RAID option, but as all the important details will be backed up to the ZPool anyway, downtime for a single disk failure isn’t the end of the world (at this point).


Side Note: The Other Build

As noted in passing earlier, I am also building a similar (but slightly lower spec / requirements) machine for my parents. The intention is this will serve both as a central store and backup machine along with serving as an off-site backup for myself using snapshot send / receive.
This machine wouldn’t need to do the extra stuff I want the rackmount box to do. It doesn’t need to PVR and it doesn’t really need to host VMs. It needs to hold data and run some relatively light services for the rest of the network.

Given the £100 cashback offer that HP are still running, I opted for an HP ProLiant N40L Microserver, replacing the 2GB RAM with 2 x 4GB ECC RAM modules and adding a StarTech PCI-E dual-port network card to provide the additional ports the box will ultimately need. For disks, I used three Western Digital Reds, 2TB each, and kept the 250GB drive that came installed in the system as a root drive.

Home NAS & Server (2): Software Choices

This is Part 2 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


Given the scope of what I want the new build to do, it feels wise to decide what software I want to use to provide those capabilities, as it’s more than likely that some of those choices will affect the hardware they need to run on.

Looking back over the previous list I feel there are a few key points to cover, namely Operating System, Filesystem / RAID, Virtualization, and DVR. There are probably more besides that I’ll think of later.

Operating System

It’ll be no surprise that the OS is going to be a flavour of GNU/Linux here (it’s sort of the point of the project), but I should probably justify both why, and which flavour I’ll be looking at using.
Using Linux is the obvious choice to my mind based on a number of factors, not least of which is familiarity (as a server platform) combined with my general preference for Open Source software. It’s cheaper too, of course, although to be honest that’s less of a major factor given that Home Server versions of Windows aren’t massively expensive these days. The thing is, I’ve never really liked Windows Server stuff; even now that I’m in a position to access it through labs at work, it just feels clunky. Solaris / OpenSolaris / Illumos I also disregarded, as I’m not as familiar with them and, frankly, they don’t feel as flexible as Linux in my experience.
I considered BSD-based solutions which, whilst viable, I’m less sure I can get things like the PVR-backend functionality (and TV Tuners) working on it without considerable (additional) messing around. I’m going to try and get more familiar with BSD, but I can’t justify this build as being the time to do it – given the Swiss Army Knife approach.

In terms of the distribution, I originally came down on the side of Ubuntu 12.04 LTS Server Edition, with a view to moving to Debian 7.0 (“Wheezy”) when it’s released. However, with Wheezy’s release being so close, I figured it’s reasonable to start with it straight out of the blocks.
Debian’s what I’ve done most of my playing around with and I’m a big fan of APT for package management. However, I’m less confident of getting TV Tuners working with Squeeze, unless I start to draw more heavily on Backports – if I’m going to do that, I’d rather use Ubuntu LTS in the interim, and jump to Wheezy when it’s “new”.

Filesystem / RAID

When I set off down this path of building my own NAS I genuinely hadn’t given much thought to how I was going to handle this aspect. In my head, it was obvious – mdadm software RAID (I can’t really force any justification for using hardware RAID), most likely with LVM on top of it and most probably ext4 as the filesystem.

The more I got to thinking about it though, the more I got comparing some of my desired features with the kit I get to play with at work (NetApp filers of varying size and shape). Whilst I had previous exposure to snapshots through ZFS on Solaris, I never fully appreciated quite how powerful and useful they are. I like the idea of being able to do relatively easy, incremental backups to a remote location; I like the scalability of it (even if the scale is way beyond anything I’d need for a home NAS); and I like the extra steps it takes in checksumming and data integrity. Given the way ZFS handles disks, it effectively takes care of the software RAID side of things at the same time.

The downside is that ZFS isn’t really native on Linux (I’ll let Google help you, but fundamentally it’s a licensing issue). There’s a ZFS-FUSE project but as you might expect, all the reports suggest performance is sluggish to say the least. Additionally, I’ve already ruled out using Solaris / BSD as a base. There is, however, a native port maintained as ZFS on Linux which has a pretty good implementation working (albeit not quite with all the features of the very latest ZFS). I’ve been following the project for a while and there’s some healthy activity and, generally, I’ve been impressed.

To keep things native, I also looked at BTRFS, which seems to be showing a massive amount of promise but, as of yet, doesn’t quite tick all the boxes for me on older / non-bleeding-edge kernels. It’s something I definitely want to keep an eye on and test further though, as it seems to have the potential to surpass ZFS as time goes by, especially with the continued uncertainty after Oracle’s takeover of Sun and whatnot.

So, whilst it flies a little in the face of some recommendations from friends, I’m deciding to trust a collection of things I read on the internet and going with ZFS On Linux at this stage.
The key point for me was send / receive snapshots (on a relatively stable kernel version) – as I’m planning to build a similar device at my parents’ place, it painlessly addresses my off-site backup desire.
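To illustrate why that sells it for me, the off-site backup essentially reduces to piping snapshots over SSH – a sketch only, with made-up dataset, snapshot and host names:

# initial full copy
zfs snapshot pool1/home@2013-04-01
zfs send pool1/home@2013-04-01 | ssh parents-nas zfs receive backup/home
# later runs only send what changed since the previous snapshot
zfs snapshot pool1/home@2013-04-08
zfs send -i pool1/home@2013-04-01 pool1/home@2013-04-08 | ssh parents-nas zfs receive backup/home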

Virtualization

Given the way I seem to be wired, when I first considered having this build serve as a platform for a few VMs (both for “production” use and for testing environments) I got way ahead of myself in considering whether to run the host as dom0 for a Xen hypervisor, or to use KVM. In both cases, I was heading down a road of working out whether running the hypervisor kernel would have knock-on effects on the system’s desired output (NAS / PVR) and how to mitigate that.

Eventually, I realized I was being ridiculous. At most, I’m going to be running between 3 and 5 VMs near-constantly, with maybe a couple of extras fired up for lab purposes. None of them are going to be particularly resource-intensive, nor should they be under high load. As much as anything, they exist to provide some sandboxing and allow me to mess with config management and other tricks and tips that can ultimately be useful and applicable in a work environment.
Before anyone chooses to state the obvious – I can appreciate that Xen / KVM would be pretty good skills to have, but shoe-horning them into my domestic NAS environment strikes me as overkill (and if it *strikes* me as overkill, it *definitely* is overkill).

In the end I think I’ve settled on using VirtualBox, at least in the interim, for a few reasons. I’ve got some experience with it already in headless mode (albeit not enough); it provides enough of the passthrough features to make it versatile enough for my needs (I run my current labs in VirtualBox on my desktop); decent management tools exist; it can do some fun stuff to aid integration with the neater features of ZFS; and, from what I can gather, the key limitations really only manifest themselves at a business level, rather than in home use.
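For what it’s worth, headless operation amounts to driving everything through VBoxManage – a rough sketch (the VM name here is made up):

~$ VBoxManage startvm "labserver01" --type headless
~$ VBoxManage list runningvms
~$ VBoxManage controlvm "labserver01" acpipowerbutton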

DVR

As has been hinted at previously, I really have no firm alternatives to MythTV for this purpose.
I first saw MythTV in action first-hand at a LUGRadio meetup many moons ago and I was wowed. Whilst I don’t doubt that there are other alternatives available (most, to my knowledge, need Windaz) – and certainly others that are quicker to configure – the sheer scope of what MythTV has achieved appeals to me massively. And I want to be – in some way – involved.

But, I’ll be honest, DVR had taken a bit of a backseat for me in short-term plans due to the difficult integration with XBMC but then these two posts ([1] and [2]) got me excited about it again. The only difficult part will be choosing a decent, supported tuner.

Home NAS & Server (1): Need vs. Want

This is Part 1 of a series of blog posts on building a Home NAS & Server using Linux. For the full index and other details, please see the Index


As may be apparent from the index, one key feature of this new build is that it needs to be a bit of a Swiss Army Knife. Whilst I’m well aware I could quite easily build a lower-power, (probably) cheaper and quieter machine that would fulfil the primary requirement of providing more storage space, it wouldn’t be as capable of some of the other tasks I’d ultimately like this unit to do. For what it’s worth, if it was just about storage and I was going to go for a pre-bought option, I’d probably seriously consider another QNAP – I really have liked my TS-219P.

Let’s start by covering what I currently use the QNAP for and what benefits it provides, so the bare minimum that this new build has to offer:

  1. Video storage – My own DVD rips and the like, centrally stored for access by XBMC installs
  2. Photo storage – Finally, I shifted them off my aging USB hard drive. Hopefully, the number will begin to increase again before too long
  3. Music storage – Accessed by various means, including a Subsonic server
  4. Miscellaneous files – a pretty lax (if I’m honest) and ad hoc approach towards backing up what I consider to be my “important” stuff
  5. Interoperability with *nix and Windows clients (feasibly this should stretch to Mac, although I have no immediate need for that. Pedants, hush)
  6. Decent (local) backup and resilience – alright, it’s just mirrored disks in a little box, but it’s been sufficient so far
  7. A stable platform – It’s been up almost a full year now and shows no sign of causing any stability issues. It gives me my files when I want them, and writes what I want to write when I want to write it (until recently, see “space consumption”)
So those are the minimal expectations. As you can see from the list, the only thing I actually need in order to address the shortcomings of my current solution is more storage space, which I could easily do with either a larger (4-disk) QNAP device, or a mini- / micro-ATX board, 1U housing and a few disks slapped into a software array with mdadm.
But… as I’ve hinted at plenty of times already, I’m looking to kill multiple birds with one stone. I’ve already accepted that this is going to mean the system won’t be quite as lean on power consumption as I might have hoped for, but with the various advances in “green” solutions in more recent components I’m fairly confident this can be somewhat mitigated against. Worst-comes-to-worst, a lot of what I want the system to do doesn’t require it to be powered on all the time, so making allowances for that is an option.

But enough of that, what do I want this build to do and what do I need it to do?

Need / Want
  • Fault-tolerant storage
  • Multiprotocol (Windows, Linux, Mac) file sharing
  • Usable capacity of 3TB or more (minimum 50% increase on current capacity)
  • Command-line access
  • UPS integration
  • Power-saving features (schedules, WOL)
  • Able to perform as a central backup platform for other systems
  • Stability
  • Rackmountable (max 3U, max depth 22″)
  • (relatively) Quiet
  • Expandable
  • Configurable with (moderate) ease
  • Hot-Swap data disks
  • Capable of acting as an iscsi target
  • Network Interface bonding
  • PVR / DVR backend
  • Able to host a few virtual machines simultaneously
  • Easy remote (off-site) backup options – ideally incremental

And that is just about that.

Linux-Based Home NAS & Server

Shamelessly inspired by this brilliant series of blog posts – http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ and http://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/ – both of which contain a great many more details than I will likely cover.

Before moving up to Newcastle, I picked myself up a QNAP TS-219P with a couple of 2TB drives mirrored. At the time, I ummed and ahhed about paying a bit more for a four disk version from an expansion perspective, but couldn’t justify it for a few reasons:

  • Extra cost – I didn’t have any present *need* for the extra capacity, and there was a reasonable step-up in price for a single-function box I didn’t need the full capabilities of.
  • Format – I knew if I was going to get a 4-disk-or-bigger version I would – longer term – want something rack-mountable, but at the time I had no rack to put it in. Without that, a rack-mountable NAS would just be silly.

Now, two years on, a few things have happened. Firstly, I’ve moved into my own place, complete with Cat6 wiring to as many places as I felt it practical and a rack to boot. Secondly – and perhaps more importantly in the context of these posts – I’ve reached the limits of my QNAP 2-disk solution. With an increasing multimedia collection, a refuelled desire to start taking more pictures again, and additional space for making a concerted effort to centralize my other data, more space is going to be needed.

However, as the next few posts should reflect, there’s a few other things I want the system to do, so the considerations (and cost) expand a bit beyond the remit of “NAS”.


Setting up the Sheevaplug as a Weather Station

UPDATE 25/5/2011:

$ uptime
07:42:35 up 94 days, 14:00,  1 user,  load average: 0.74, 1.32, 0.68

Pretty happy with that so far!


A few months back I got hold of a Marvell Sheevaplug from NewIT with the express intention of it replacing the aged, noisy desktop PC with a more efficient, lower-power and quieter solution. Its sole purpose was to run the open source weather station solution Wview – a tidy little collection of daemons that records the weather statistics provided by a variety of different weather station hardware and then allows you to do a bunch of things with them (generate your own HTML templates, send the data to Wunderground, Weather for You, and so on).
With that in mind, my key spec and concerns for the device were as follows:

  • Run Debian Squeeze
  • Fetch data from an Oregon Scientific WMR968 via a USB-to-Serial adapter
  • Record the data consistently using WView
  • Given the risk of the SD Card failing, regularly dump a backup of the data to a remote store
  • Take steps to minimise the effect of regular writes to the SD card and so prolong its life

When I initially got the SheevaPlug from NewIT, it already did run Debian Lenny, and I started taking steps to stick things like Flashybrid on there. However, I was hasty, and it ended up breaking the install.
Given that I was going to have to do it all again, I thought I’d best document it, especially as I could only find the information I used spread out over 4 different websites (see References, bottom). As such, these details are pretty specific to the exact steps I took, rather than the more general (and detailed) information that can be found at those references.

PLEASE NOTE: My installation environment was onto a standard SheevaPlug (no ESATA port), installing from a TFTP server onto the SD Card, and making that SD Card bootable. If your planned install environment differs from that, use Martin’s site to see what you need to do differently.
For all code and commands, I have used
#~ to indicate that the commands are being input on a new line.

Continue reading “Setting up the Sheevaplug as a Weather Station”

Old, but Good

I’ve just been reminded of this after a long time.

Remarkably true, and an answer if ever there was one as to why I enjoy EvE so much above other games I’ve played. Long may it remain so. 😉

Learning Curve of Various MMORPGs

LUGRadio Live 2008

I attended this event last year and, as such, the first thing I want to do now is admit defeat. I was pretty proud of last year’s writeup, and I had every intention of trying to repeat the experiment by writing up a thorough review. For the record, I have failed, even before I begin.

I took fewer pictures this year and was overall less attentive due to recurring hangovers, but I did have every bit as good a time. Anyway, let’s see where we get to…

Transport & Accommodation

After the disaster that was trying to get to Oxford by train a few weeks back, I opted to drive down to Wolves this time round, offering lifts via the LancsLUG list but forgetting to check the LUGRadio Forums – which was a mistake, as I ended up meeting a few others who’d similarly travelled down from Lancaster. We could have easily car-shared if we’d realized.

Another factor in driving down was that I’d only really decided I was definitely going to go down there the day before – it would have been the same cost to get on the train, with less guarantees.
As it was, the drive down was painless, although I did get there a little late because I went for beer the night before and slept through my alarm – n00b.

With making the decision the day before, I decided to go with what I knew and opted to stay at the Novotel in Wolverhampton – right next to the railway station, 5 minutes walk to the venue, and just off one of the main roads (maybe 15 minutes from the motorway). It was as to be expected – not the cheapest, but conveniently placed, clean and, to be quite honest, easy. Buffet breakfast was included, which I maxed on, and the snack I got there in the evening to wash down my Guinness was pretty good as well.

The Talks

This segment is the reason this is a week late, and it’s an utter failure. I spent most of the last week wanting to really devote time to this part like I did last year, to give the speakers the best possible summation I could. Then I realized I didn’t make enough notes to do that. Instead, I’ll just summarise what I saw, what was interesting, and key things of note.

To begin with, as mentioned, I arrived at the event too late to see the opening segment by the four large gents, and as such also missed the usual rush to get in and see the place fill up. It also meant that I missed a good portion of the first talk I went to see, which was Rufus Pollock talking about the Open Knowledge Foundation. I have to be honest, before LRL I think I had heard of the Foundation but knew nothing about it. It’s a really cool idea, and I wholeheartedly suggest visiting the website to find out more, along with the Comprehensive Knowledge Archive Network – super stuff. The things that particularly made me sit up and think were a couple of key phrases Rufus mentioned during his talk, the first centring on the many minds principle, which is a principle I have a lot of faith in, and his phrase of ‘the revolution will be decentralized’. It makes so much sense given the nature of the internet and its accessibility. These big changes that need to happen to make real progress revolve around the idea of open and shared knowledge, and the sharing of ideas struggles if it’s dictated from a central point. I really can’t do it justice by trying to paraphrase everything he said but, to find out more, check out the aforementioned links.
Another snippet he mentioned that caught my ear was, after explaining how ‘3G’ alone has over 7000 patents applicable to it, and 800 distinct patents (all of which, supposedly, ‘protect’ intellectual property), the question ‘Why should this process be open?’ His answer? ‘Because putting Humpty Dumpty back together is easier‘ when things are open. And it really is. When the shit hits the fan, and things need fixing, it’s so much easier if you can have all the blueprints and tools right there at your fingertips. You shouldn’t need to scrabble around for permission to access certain ‘IP Stuffs’ just to be able to start working on a fix. Another point mentioned that I wasn’t aware of was that, in most cases, works without a license default to being treated as proprietary, not open.

I stayed in the main stage for the next talk by Emma Jane Hogbin, regarding women on open source and various aspects surrounding that. I’m not going to go blow-by-blow with what was said, but it really was extremely interesting, well thought out, and energetic. Would definitely like to go see her speak again when I’m not as hungover / tired so I can pay considerably more attention and make notes.

Lunchtime intervened, allowing me to check into the hotel, grab a lot of coffee, and a sandwich to try and help me make it through the afternoon. It failed.

First up was Jeremy Allison of the SAMBA project, again on the main stage. A very good speaker, and I actually found the talk fascinating for an area I didn’t know much about, as it provided a good broad picture of where SAMBA came from, what it had done, and a glimpse at where it could be going. My only regret, as with most talks over the weekend, is that I was in no mental state to really get the most out of them, as I was constantly battling sleep. n00b.

The Gong-a-Thong Lightbulb Extravaganza was up next, and it all got much more surreal than last year. Funny though. I actually didn’t listen much to most of the Gong-a-Thong, as I decided to run over and make the most of the quiet spell on the Bytemark gaming rig. Impressive setup. I found myself coming back more and more in a vain attempt to stroke my epeen and get better at Team Fortress 2. I didn’t quite achieve this.

I also played on for much too long so that I missed at least the first half of Steve Lamb’s talk entitled ‘Green IT’. As such, I feel even less qualified than normal to comment on what it was about – some comments about security not being platform dependent rang true though. And well done again Steve, what I heard was really interesting stuff, but I did spend a long time cursing myself for not getting there to listen to it from the start.

The day concluded with LUGRadio Live and Unleashed. Always amusing. You can go listen and download it here (when it gets uploaded… 🙂 ).

I was surprisingly spritely at the start of Day 2, my only guess for that is that I was still drunk. I kicked it off by going to see Barbie’s talk, ‘Understanding Malware’.
I’d heard of Barbie many times before going to LRL and he didn’t disappoint. It was informative and interesting, even if it was a little above the level I would be comfortable with.

Next up was the Mass Debate – a popular event last year – this year featuring Jeremy Allison, Mrben, Matthew Garrett and … I forget the last panelist (I arrived after the introductions). Jono hosted it.
As with last year, it was hilarious, with some good points being made along with plenty of sly and not-so-sly piss-taking.

Lunch came, and with it came the afternoon fragging. Back to the Team Fortress 2 server I went, and the time flew by. I missed Matthew Garrett’s talk on ‘Power Management that Works’, but made it to Neuro’s talk on Second Life. I found Neuro’s brief rundown to Second Life last year pretty interesting, even if it is something I’m not massively interested in getting into and, listening to him this year, I came home and downloaded the client. I’ve yet to give it a thorough trial. It’s impressive technology-wise, not convinced it’ll prove to be for me though. We’ll see!

And that was it. Game over for another year. This time with no regular podcasts in between.

I’ll miss the podcasts, but I certainly won’t miss being at LRL next year. As Westwood may say, “It’s gonna be BIG” – and all that shite. It will be good.

Nutsacks

The fully edited picture will hopefully get linked here Soon ™, but the Nutsack I grabbed this year was, I think, not the one I was meant to get, and I apologise if someone left theirs on the side only to return and find I’d swagged it. Having arrived late, I just grabbed the first one that was on the table, saving myself from rummaging through it until I got to the hotel.

Mostly it was of a similar composition to last year (to be expected). The free T-Shirt was from UKUUG, there were one or two pens, some papers and advertising stuff for various upcoming events, the programme, a keyring bottle-opener from Yahoo! (which was considerably more useful than the strangely shaped pen of last year.. 😉 ), and a latest Ubuntu CD. Solid stuff.

In addition to the stuff in the bag, I was also able to grab a lovely free T-Shirt from the LinuxEmporium stand, as a return for entering their competition with the chance to win an EeePC. Alas, I didn’t win the EeePC, but getting a T-Shirt for the effort was a nice touch.

I’ll update this, and add info to the pictures, when I get chance (given that this is 7 days late, expect a sturdy delay on that..), but for now I can’t really think of much else to add!

EeePC Desires

EeePCs seemed to be everywhere at LRL this year. Which wasn’t a good thing for me as it dramatically increased my urge to go buy myself one. I did in fact come very close to just getting one right there and then, but with the knowledge that the new models are coming out very very shortly, I figured I should wait.

These things are bloody lovely for the sort of thing I want them for. The keyboard does indeed feel incredibly small at first, but I’m almost certain that, with a little practice, that will become a non-issue. The screen is plenty big enough for general browsing and document editing. All in a slick little package with wireless capabilities, massive battery life, and solid state disks so I can feel slightly more comfortable when it crashes to the floor.

The other thing that was on show there that looked really slick was the Ubuntu desktop running on one of them. It just looked really, really good. To be fair though, I only had the ‘Advanced Menu’ mode of the native Xandros install to compare it to which, whilst it’s definitely functional and familiar, was a bit of a disappointment – I was quite looking forward to seeing what the Bubbly and Cuddly default appearance looked and handled like. Even so though, I think I’ll end up getting mine pre-installed with Ubuntu, probably from EfficientPC – as the fellow running the stand for them was really quite interesting to talk to about it and seems more than happy to go to extra lengths to help customize it. Go check ’em out – they’re meant to be getting the new EeePCs in stock in the next few weeks – FUN!

Community, Community, Community

Jono may get a kicking at regular intervals for his overuse of this term but, feck it. The community aspect of LUGRadio may have been clear to see last year but, for whatever reason, this year came across as considerably more sociable right the way through.
Admittedly, as much as anything, that may have been less to do with an increase in sociable atmosphere and more to do with me coming out of my shell a bit more and feeling comfortable striking up conversations and generally imposing myself on other groups. If I ever did become a bind, then I obviously apologise to those people, but I don’t think I did (hopefully).

The addition of Karaoke on the Saturday night was a superb idea, and definitely brought out the best in people, if not always their voices, and generally aided the party atmosphere. Heading down into Wolves afterwards to visit the rock bar (can’t remember its name) was just as much a laugh, even if my memory does get considerably sketchier past that point. However, getting lost on my way back to the hotel certainly wasn’t as much fun!

As with last year, both days carried with them plenty of joviality and all round politeness and banter. In many ways similar to what I loved about Glastonbury, everyone at LRL is there because they share similar broad interests – in this case technology in general and FOSS. As such, the atmosphere is brilliant.

I had some great conversations with various folks this year, and learned a whole load, and I thank them for that. I also saw people I first met at last year’s event, and it really did hammer home this community idea, and made me realize how foolish it was to have not taken a bigger part in it during the year. I have no excuses for that, except my own leanings towards being socially inept and not really knowing how to best start to get to know an already established group as a complete outsider. Childish and foolish? Definitely. But it seems to be the way I’m wired.

Pictures and all that Jazz

I didn’t take half the pictures I’d wanted to take, and most of the ones I did take didn’t come out well.

If you want to look through some of them, then mine are bunched up in the midst of the Flickr collection, tagged lugradiolive. Some fun photos in there!

As you can hopefully tell from this writeup, I had another great weekend at LUGRadio Live this year, and I was extremely pleased to hear that the show will have at least one more outing same time next year. It’s going to be interesting to see how it turns out, seeing as how there won’t be a regular show to promote it, but if anything it provides an impetus to become more actively involved with the community, which can only be a good thing.

So, to the four blokes and their merry band of yellow-shirted helpers – thanks for another great year. For the cool people I met and talked to – thanks for the hospitality and friendliness. To those who provided such solid information – it’s all appreciated, and I apologise I couldn’t do you justice in my write-ups.

I’ll see you all next year, that’s for sure.

HOWTO: Automating Bridges and TUN/TAP

This isn’t ground-breaking stuff by any means, it’s more just a simple reminder for myself about how I did certain things in order to get a network bridge set up under Ubuntu 8.04, and to create a Tap connection that I could then use in VirtualBox to let routes and all that shiny stuff work. It doesn’t explain things fully (I don’t understand it), but it does cover what I did, hopefully step by step.
This only made sense thanks to the following pages:

1. https://help.ubuntu.com/community/VirtualBox#Create%20A%20Bridge
2. http://ubuntuforums.org/showthread.php?t=830777
3. http://ubuntuforums.org/showthread.php?t=752127

Anyway, let’s begin.

Does it Work…?

The first step along this Rocky Road to Near-Fail was to follow the useful advice in Link 1 above, and thus make sure that creating a bridge, activating it, and creating the TUN/TAP actually worked. It did. From that link, I did the following:

~$ sudo aptitude install bridge-utils uml-utilities

This installs the pre-requisite applications to do the fun stuff.
The second step depends on your viewpoint, but it’s probably worth backing up your current /etc/network/interfaces file in case you manage to break something:

~$ sudo cp /etc/network/interfaces /etc/network/interfaces.good

Obviously, what you call and where you place the backup is up to you. Just make sure it’s something you remember later.

Now for preparing the bridge itself. Fun:

~$ sudo tunctl -t tap1 -u USERNAME
~$ sudo chown root.vboxusers /dev/net/tun
~$ sudo chmod g+rw /dev/net/tun

Next up, we need to edit another file, apparently to help make permissions persist after reboots. The file we need to edit is /etc/udev/rules.d/20-names.rules
Again, we need to edit this as root, so from the terminal:

~$ sudo [$editor_du_jour] /etc/udev/rules.d/20-names.rules

And then at the end of that file, find the following line:
KERNEL=="tun", NAME="net/%k"
And add the following to make it look like this:
KERNEL=="tun", NAME="net/%k", GROUP="vboxusers", MODE="0660"

Take whichever process your editor takes for saving and closing that.

Now we can create the bridge itself:

~$ sudo brctl addbr br0

Now put the network interface into promiscuous mode, add it to the bridge, and set the Bridge to DHCP (if you are using DHCP, if not, ignore these and see the next statement):

~$ sudo ifconfig eth0 0.0.0.0 promisc
~$ sudo brctl addif br0 eth0
~$ sudo dhclient br0

If you are NOT using DHCP, and have a STATIC IP, follow this example:
~$ sudo ifconfig br0 192.168.1.105 netmask 255.255.0.0
~$ sudo route add default gw 192.168.1.1 br0

(Obviously, replace the IP, Netmask, and Gateway IPs with your own…)

Now, simply add the tap1 device to the bridge and bring up the interface:

~$ sudo brctl addif br0 tap1
~$ sudo ifconfig tap1 up

Last thing I did was just to run ifconfig to double check everything that should be there is there. You should have the Bridge (br0) with your IP Address, the physical interface (eth0) set promiscuously, and the TAP, tap1.
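If the ifconfig output isn’t clear enough, brctl gives a tidier view of what’s attached to the bridge – a sketch of roughly what you should see (the bridge id shown here is made up):

~$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001122334455       no              eth0
                                                        tap1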

Open up VirtualBox and change the appropriate network settings for your Virtual Machine to point to the new tap device (in my case, tap1). The first step in that is to change the ‘Attached To’ drop-down to point to ‘Host Interface’.

Screenshot of the Settings

Starting the VirtualMachine now should be effortless, and when it starts up (and you add them), the same routes you’ve been using should work just fine… so ping, ping away!

Making it Permanent

The initial instructions I was hoping to follow from Link 1 didn’t work out all too well for me, so I was back trying to work out exactly where I could fix it. Thankfully, SpaceTeddy on the Ubuntu forums was able to point me in the right direction of some useful hints he’d written.
In the end, I did the following.
The first step is to go back and edit /etc/network/interfaces with your preferred text editor. You need to be root to do this. In there, you are replacing your current physical interface settings with the bridge – in my case, swapping out eth0 for br0 (note the example below happens to show eth1; substitute whichever interface is yours). Then you are adding a rule to tell the bridge it is using your physical interface. Finally, you are adding the stuff that brings that interface up as promiscuous. It should look like this:

auto br0
iface br0 inet static
bridge_ports eth1
auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

It’s probably worthwhile noting that you SHOULD NOT remove the references to the Loopback Interface (lo), but do make sure any other references to your physical interface are commented out, or plain old deleted – you made a backup anyway, right?

After doing that, the only thing left to include is finding a way to bring the TAP interface up on startup. The other guides do mention ways to do it through /etc/network/interfaces but they didn’t work for me – I still don’t know why.

Instead, I just added the commands to /etc/rc.local, along with the routes I need to bring up every time I start up. This was as simple as opening up the file in my preferred text editor (again, sudo is needed) and adding the following:

tunctl -t tap1 -u MyUser
brctl addif br0 tap1
ifconfig tap1 up

exit 0

Make sure to keep the ‘exit 0’ at the end of that file – it seems to work.

And that massively over-lengthy block of text is all that you need to do. I will try and refine this at some point, but this works for me and seems easy enough to follow if I need to remind myself what I did again.