This is the second post in my Network Attached Storage (NAS) 2024 build series. If you haven't read the first post, check it out here.
In the first post, I discussed why I am building a new NAS and the problems I had with my previous one. In this post, I am going to go over why I chose to build my own NAS rather than purchase something like a Synology.
Why build a NAS, why not buy one?
While I do believe Synology is a good choice for a lot of people, I prefer to build my own gear so I can run open source NAS software or even virtualization software. I also like the flexibility to change my mind: as I said in my previous post, I was running Proxmox to virtualize my NAS, but this time I will be running TrueNAS directly on the hardware without virtualization.
I am a big fan of the ZFS file system, and that leaves you with three options: Unraid, TrueNAS, or a DIY Linux setup. Most commercial solutions only support BTRFS, which I am not a fan of.
What's so great about ZFS?
ZFS has a fantastic checksum system to detect bitrot and corruption, which are real problems with long-term storage on modern media. Because of this checksum system, ZFS is able to self-heal at the block level, rebuilding damaged blocks from redundancy. If you want to be sure your data stays intact for the foreseeable future, ZFS is king.
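To see this in action on a plain OpenZFS install, you can kick off a scrub, which walks every block, verifies its checksum, and repairs anything damaged from parity or mirror copies. A minimal sketch, assuming a pool named tank:

```
# Walk every block in the pool and verify its checksum;
# damaged blocks are rewritten from parity/mirror copies.
zpool scrub tank

# Check scrub progress and any checksum errors found/repaired.
zpool status -v tank
```

NAS distros like TrueNAS schedule scrubs like this for you automatically.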
ZFS has an amazing snapshot system that allows you to back up your entire dataset instantly; you can literally snapshot hundreds of terabytes in just seconds. These snapshots also take no extra space on disk at first, as ZFS just freezes the blocks currently in use by the snapshot and writes all changes to new blocks. You can recover data at similar speeds. This is a massive advantage of ZFS, though it is a feature also found in other NAS file systems like BTRFS.
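For a feel of what that looks like day to day, here is a rough sketch at the command line (the pool and dataset names are made up):

```
# Take an instant, initially zero-cost snapshot of a dataset.
zfs snapshot tank/media@2024-05-01

# List snapshots and the space they consume as data diverges.
zfs list -t snapshot

# Roll the dataset back to the snapshot, discarding later changes.
zfs rollback tank/media@2024-05-01
```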
ZFS's main disadvantage is the inability to just add single drives to an array to expand storage. ZFS works in what are called vdevs: individual arrays with their own data protection (aka RAID) that are combined together to build a pool. A vdev can never be modified, so say you build a six-drive RAIDZ2 (two-disk protection), you can never add drives to it to make it larger. You can, however, create another vdev and add it to the pool. This is changing, though: RAIDZ expansion (adding single drives to an existing vdev) is coming in a future OpenZFS release.
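As an illustration, here is roughly how that pool layout looks with the zpool tool (the pool name and device names are placeholders):

```
# Create a pool from a single six-drive RAIDZ2 vdev;
# any two drives in the vdev can fail without data loss.
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# You can't grow that vdev drive by drive (yet), but you can
# expand the pool by adding a whole second RAIDZ2 vdev.
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl
```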
Another, lesser disadvantage of ZFS is that it is tuned for hard drives; when you start looking at high-speed NVMe drives, it doesn't run as fast as the hardware allows, although it still works really well regardless.
Unraid vs TrueNAS vs Linux
Unraid is a very popular choice, but it is not my preferred one. The reason is that I prefer to use ZFS, which Unraid now supports, but in using it you lose much of the advantage Unraid offers.
Unraid works on the concept of dedicated parity disks: instead of traditional RAID, you add disks to an array with one, two, or even three dedicated parity drives, and if you want to add more disks later, go ahead. This is the main draw of Unraid. Unraid recently added support for ZFS, but you do not have the luxury of just adding new disks to expand your arrays while using it.
TrueNAS, however, is 100% focused on ZFS. TrueNAS is also open source and 100% free to use, unlike Unraid, which is now on a subscription model, although you can buy a lifetime license.
Both Unraid and TrueNAS support virtualization, sadly. I believe both of them lost their focus going in this direction, and it clutters the experience. I would prefer it if my NAS only did NAS stuff, and if I wanted to add virtualization, I could.
Linux, of course, can be used, and you can manage ZFS yourself. ZFS isn't hard, and it would be super easy to use Linux to do this. The problem comes when you start looking into backups, replication, file sharing, and other features; this is where Unraid and TrueNAS shine, providing a UI for all of this additional configuration.
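To give a sense of what you would be wiring up by hand on a bare Linux box, here is a sketch of snapshot replication and SMB sharing (the host and dataset names are made up, and sharesmb assumes Samba is installed):

```
# Replicate a snapshot to another machine over SSH.
zfs snapshot tank/media@nightly
zfs send tank/media@nightly | ssh backup-host zfs recv backup/media

# Share a dataset over SMB via ZFS's Samba integration.
zfs set sharesmb=on tank/media
```

TrueNAS essentially puts a scheduler and a web UI on top of operations like these.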
If you haven't already, check out the first post in the series.
Posted Using InLeo Alpha
Is the issue with ZFS on NVMe that it's total guesswork on the block size? Like you said, it runs great; I'm just wondering what to look for?
This goes into great detail about it.
Short answer:
It really isn't tuned for it. When ZFS was created, there were only spinning rust drives, so there is overhead (queuing, locking, syncs) that isn't a big deal when you have slow rust, but with NVMe it adds up a lot.
You can tune things to run a lot better, and the guys in the video above made a custom patch to really tune it.
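For anyone who doesn't want to watch the whole video, these are the kinds of knobs involved; a rough sketch only, since the pool and dataset names are made up and the right values depend entirely on your workload:

```
# ashift is fixed at pool creation; 12 (4K sectors) suits most NVMe.
zpool create -o ashift=12 flash mirror nvme0n1 nvme1n1
zfs create flash/db

# Match recordsize to the workload instead of the 128K default.
zfs set recordsize=16K flash/db

# Skip access-time writes; they cost real IOPS on NVMe.
zfs set atime=off flash/db

# LZ4 is cheap enough that it usually helps even on fast drives.
zfs set compression=lz4 flash/db
```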
Thanks for the detailed description! It's nice to know there are other options out there if I ever decide to go a different direction.