anaNAS — Requirement analysis and feature wishlist

Follow-up from: anaNAS — a guide for an enterprise-grade NAS

This article — part 2 of my multipart series about “anaNAS” — summarizes the decision process that led from requirement analysis to a concrete plan for a custom-built storage solution.

Analysis of my old storage solution

I own several network-attached devices: computers, tablets, set-top boxes and smartphones. My current storage needs exceed 8 TB, and the data can be categorized into a few main groups:

  • personal documents (scans, contracts, e-mail, …)
  • system backups (Apple Time Machine, Windows backups, …)
  • documents created in applications (text, spreadsheets, layouts, source code, …)
  • ebooks and digital editions of magazines
  • digital photos
  • music files (converted CDs, digital purchases, …)
  • video files (digital camera, recorded movies and TV shows, DRM-protected files, …)
  • misc. virtual machines

I used to store these files on different external hard drives (USB, FireWire), each bringing its own power supply. Most of the time all disks were attached to one computer, which was generally set up with network shares because the files were needed on other devices, too.

Important files were backed up to various (not permanently attached) disks, USB pen drives and DVD-RAM.

Additionally I own a Linux-based PVR which records digital TV transmissions losslessly. This box is also set up as an HTPC-like device, as it can run an XBMC-like client (in my case Plex). An Apple TV is attached to my home cinema; it serves as my music jukebox (fed from an iTunes share) and is used for VOD services.

Problems with my old setup

I switched from Windows to Mac back in 2005, so I had to migrate all my external hard disks from NTFS to HFS+ in order to make them writable with acceptable performance. Another migration would cause serious problems, as my digital footprint has grown exponentially since then. The capacity of each disk is limited, and as my data grew I had to re-organize it numerous times.

Neither action would be necessary again if I had a single system with an expandable disk configuration that can be grown without too much effort.

One of my devices needed to be powered on in order to access all my files. Many USB and FireWire ports were occupied on this machine, and many of the hard drives’ power supplies are inefficient and increase my electricity bill. If I want to listen to music, the machine running the iTunes share has to be powered up — the same applies to my Plex service.

It would be best if the new storage solution provided all these services without the need for additional hardware.

Due to the lack of USB and FireWire ports, some of the disks were not attached at all times, so I had trouble with my backup strategy. A quick fix like an additional Quickport — an external docking bay for SATA disks (shown in the first image in this essay) — would not solve any of my problems.

The new storage solution has to perform backups to hot-swappable hard disks in a fast and reliable manner without blocking normal operations, and should be able to sync to remote systems as well.

Some of my hard drives died in the past so restoring files was a painful process. Sometimes only the controller in the external enclosure was faulty; in those cases I was lucky and could simply copy the data onto a different external or internal disk.

Buying reliable new external hard drives also gets more and more difficult:

  • Many external hard disks do not feature an internal SATA connector, so the drive cannot be attached directly if the controller is fried.
  • Many vendors have begun to transparently encrypt the data with a key that is stored in the controller’s flash storage.
  • Larger disks are organized in 4 KB sectors internally but report 512-byte sectors via USB.

This makes it increasingly hard to access the files when the external enclosure dies even though the drive itself is still intact.

In order to solve this problem the storage solution must be protected against disk hardware failures.

Obvious Solution

Buying a pre-built NAS would solve most of my problems. Modern systems from QNAP, Synology and other vendors provide solid solutions using RAID (redundant array of independent disks), which can survive certain hard disk failures and are expandable in capacity by adding or upgrading hard drives. I even have good experience with QNAP devices, as I planned a storage solution for my parents, who didn’t have such extreme requirements. Additionally, the company provides decent software and support.

In the case of a hard disk failure, data integrity will not be compromised — within the limits of the chosen RAID level. If the enclosure fails, another box from the same vendor should be able to mount the volume(s). With some LVM experience one could also recover data from the RAID set using any modern Linux distribution — provided no encryption was enabled.

My requirements regarding a fully-featured Plex service could be met by some devices — but on-the-fly transcoding of video files can only be performed by a few NAS models (Marvell-based solutions fail completely; Intel Atom solutions can provide some transcoding). iTunes shares are provided by most NAS as well, but in general the files are only accessible to iTunes:

Often AirPlay or home-​sharing for Apple devices is not covered.

A NAS system is no replacement for backup, so I would still need a strategy for creating backup copies of important files. I would have to buy a second (smaller, and not permanently powered-on) NAS to back up these files. Offsite backup using services like CrashPlan could be a solution, but one would have to rely on a third party. Most of those services also guarantee neither the integrity nor the availability of all files under all circumstances.

In order to meet my needs for growing amounts of data I would have to buy at least an enclosure for 8 drives — which is expensive if one needs decent performance and streaming capabilities. Some people might ask why I dismiss a 4- or 5-disk solution, as a single RAID 5 of five 4 TB disks would provide roughly 16 TB of storage. In theory this assumption is correct — I went this direction for my parents’ NAS: four 2 TB hard disks and two spare disks in case some disks fail.
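The capacity math behind these RAID levels is simple: usable space is the total minus the parity disks. A quick sketch in Python (plain arithmetic, ignoring real-world filesystem and formatting overhead):

```python
def raid_usable_tb(disks: int, disk_tb: float, parity_disks: int) -> float:
    """Usable capacity of a simple RAID set: total capacity minus parity disks.

    RAID 5 dedicates one disk's worth of capacity to parity, RAID 6 two.
    """
    return (disks - parity_disks) * disk_tb

# RAID 5 (one parity disk) over five 4 TB drives:
print(raid_usable_tb(5, 4, 1))   # 16.0 TB
# RAID 6 (two parity disks) over eight 4 TB drives:
print(raid_usable_tb(8, 4, 2))   # 24.0 TB
```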

In this case a single RAID 5 was sufficient (for me).

But larger volumes and/or hard disks can cause serious problems:

If RAID sets are created with 3 or 4 TB hard drives, there is a good chance that problems will arise when hard drives fail. In general, one hard drive failure can be handled by using RAID 5, but while rebuilding the RAID set the remaining drives are put under stress.

Another disk failure during the rebuild will kill the entire RAID set, and all files are gone. So you would have to create a RAID 6, which can handle two drive failures at the same time.

When using really large disks there is a good chance that a so-called uncorrectable read error (URE) will occur while rebuilding a RAID array (because the raw capacity of the entire volume is larger than the inverse of the bit error rate — BER — specified by the hard disk manufacturer). There are two possible implications if a single URE occurs during RAID recovery:

Best case (highest probability):

Only one file will be damaged if the URE is located in user data and cannot be recovered using parity data. But in general there is no information about which particular file was damaged in this process. Sometimes even the parity data will be of no use, as the hard drive will not report a read error at all.

Worst case:

The entire array is gone if the URE is located in structural data (super blocks or inodes).

If you need large amounts of storage built from huge hard disks, you have to build a RAID 6 — or buy really expensive server-grade hard disks with a much lower BER (one error in 10^16 or 10^15 bits read instead of one in 10^14).
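The URE risk can be estimated with a back-of-the-envelope calculation. This sketch treats the manufacturer-specified BER as an independent per-bit error probability — a common simplification, not an exact model of how drives fail:

```python
import math

def p_ure_during_rebuild(data_read_tb: float, ber: float) -> float:
    """Probability of hitting at least one URE while reading data_read_tb
    terabytes, given a bit error rate `ber` (errors per bit read),
    assuming independent errors: 1 - (1 - ber)^bits ≈ 1 - exp(-bits * ber).
    """
    bits = data_read_tb * 1e12 * 8       # terabytes -> bits
    return 1.0 - math.exp(-bits * ber)

# Rebuilding a degraded 5-disk RAID 5 of 4 TB drives means reading
# the 4 surviving disks (16 TB) in full:
print(round(p_ure_during_rebuild(16, 1e-14), 2))  # ~0.72 with a consumer-grade BER of 1 in 10^14
print(round(p_ure_during_rebuild(16, 1e-15), 2))  # ~0.12 with a server-grade BER of 1 in 10^15
```

Under this (pessimistic) model a consumer-disk rebuild is more likely than not to hit a URE, which is exactly why large arrays push you toward RAID 6 or drives with a better BER rating.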

Other problems that traditional hard disk storage and LVM-based RAID solutions have to cope with are “bit rot” and the “write hole”, which I will not explain here — there are many other articles out there…

Other Limitations

NAS enclosures like the QNAP TS-870 or Synology DS1813+ could easily be expanded up to 24 TB (using RAID 6) without external accessories. But the price tag is hard to swallow: approx. 1200 € or 900 € without any hard drives (Intel i3 dual core vs. a low-powered Atom-based solution). If I wanted to expand beyond the 8-drive limit, I would have to swap every disk for a larger one or buy an external case (with an additional power supply).

Fast backups couldn’t be performed easily because the eSATA or USB 3.0 ports would be blocked (and Gbit LAN is capped at approx. 110 MB/s). Link aggregation might help, but would require the purchase of additional network equipment.
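The ~110 MB/s ceiling follows directly from Ethernet framing overhead. A quick sketch, assuming standard 1500-byte frames and IPv4/TCP headers without options:

```python
raw_mb_s = 1e9 / 8 / 1e6          # 1 Gbit/s on the wire = 125 MB/s

mtu = 1500                         # standard Ethernet payload size in bytes
eth_overhead = 8 + 14 + 4 + 12     # preamble, header, FCS, inter-frame gap
ip_tcp_headers = 20 + 20           # IPv4 + TCP headers without options

payload = mtu - ip_tcp_headers                 # 1460 bytes of user data per frame
efficiency = payload / (mtu + eth_overhead)    # ~0.95
print(round(raw_mb_s * efficiency, 1))         # ~118.7 MB/s theoretical TCP maximum
```

Real-world transfers land somewhat below this theoretical ceiling, which matches the roughly 110 MB/s seen in practice.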

Using an enterprise-​grade storage system would also not solve my service problem entirely :

Some network services cannot be provided by those devices due to hardware or software limitations — such features are often found only on home-use NAS solutions.

My solution

Taking all aspects into consideration, I decided not to buy a pre-built NAS. Besides the price tag, those devices lack some flexibility. Modern file systems with checksums (like ZFS or btrfs) do exist — but are rarely used in consumer NAS systems. Yet many commercial and open source storage solutions or appliances (like OmniOS with napp-it) are available, and they solve the problems I described earlier in this article.

Additionally I have experience building, configuring and setting-​up computer systems myself (desktops and servers).

My good knowledge of (storage) hardware and different operating systems (Windows, Mac and Linux) is a bonus, too: I built a Linux NAS (encrypted LVM with RAID 5) and a 64-bit VMware ESXi server (with Debian and Windows Server 2003 R2) at my university for over 100 users.

The next step for my custom storage solution was to create a storage concept which would fit all my needs. This concept will be covered in my next essay.