It’s new homelab time. And with that, potentially new OS time too.
I’m currently very happy with Debian and Docker. The only issue is that I’m brand new to data redundancy. I have a 2-bay NAS I’ll use, and I want the two HDDs in RAID 1.
Now, I could definitely just use ZFS or Btrfs with Debian and keep using Docker exactly as I do now.
Or I could use a dedicated NAS OS. That would help me with the RAID part of this, but Docker is a requirement.
Any recommendations?
Generally, I think it is better to use a general server OS like Debian or Fedora instead of something specialized like Proxmox or Unraid. That way you can always choose the way you want to use your server instead of being channeled into running it a specific way (especially if you ever change your mind).
Honestly, from your description, I’d go with Debian, likely with btrfs. It would be better if you had 3 slots so you could swap in a replacement for a bad drive, but 2 will work.
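To give a concrete idea, here’s a minimal sketch of a two-disk btrfs RAID 1 setup on Debian. The device names (/dev/sdb, /dev/sdc) and the mount point are placeholders, so check lsblk and adjust before running anything:

    # Create a btrfs filesystem that mirrors both data and metadata
    # across the two drives (RAID 1). /dev/sdb and /dev/sdc are
    # placeholder names -- verify with lsblk first.
    mkfs.btrfs -L tank -d raid1 -m raid1 /dev/sdb /dev/sdc

    # Mount it where your Docker volumes / NAS shares will live
    mkdir -p /srv/tank
    mount /dev/sdb /srv/tank

    # Check how space is allocated across the mirror
    btrfs filesystem usage /srv/tank

    # Periodically verify checksums; bad copies on one drive are
    # repaired from the good copy on the other
    btrfs scrub start /srv/tank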
If you want to get adventurous, you could look into one of the Fedora Atomic distros.
Previously I’ve recommended Proxmox, but I’m not sure I still can at the moment if they haven’t fixed their kernel funkiness. Right now, I’m back to libvirt.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
LVM: (Linux) Logical Volume Manager for filesystem mapping
NAS: Network-Attached Storage
RAID: Redundant Array of Independent Disks for mass storage
SSD: Solid State Drive mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity
Definitely use ZFS for the data volumes in order to avoid silent data corruption. If you don’t use a separate drive for the OS, then you need to look into ZFS on root.
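A rough sketch of what that looks like for a two-disk mirror on Debian; the pool name “tank” and the by-id paths are placeholders for your actual drives, and this assumes the zfsutils-linux package from the contrib repo:

    # Install ZFS (assumes Debian with the contrib repo enabled)
    apt install zfsutils-linux

    # Create a mirrored pool (RAID 1 equivalent); use /dev/disk/by-id paths
    # so the pool survives device-name changes. The DISK1/DISK2 ids are placeholders.
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

    # A dataset for the data Docker and the NAS shares will use
    zfs create tank/data

    # Confirm both drives show as ONLINE in the mirror
    zpool status tank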
Or XFS.
ZFS is rigid? Please explain
I need to throw random spare old HDDs at it; I expect failures, I expect to expand it, and I expect very different sizes between the disks.
You can do that with ZFS. Its built-in integrity check will automatically heal errors and tell you which drive has gone bad.
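For example (assuming a pool named “tank” as in the earlier sketch), a regular scrub is what triggers that self-healing and flags the failing disk:

    # Read every block and verify it against its checksum; bad copies on
    # one side of the mirror are rewritten from the good side
    zpool scrub tank

    # Per-device READ/WRITE/CKSUM error counters make a dying drive easy to spot
    zpool status -v tank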
Debian and the standard Linux mdraid?
Do you mean mdadm? https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm If not, can I have a link?
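For reference, a minimal mdadm RAID 1 sketch; the device names are placeholders, and note that unlike ZFS or btrfs, md mirrors blocks but doesn’t checksum your data:

    # Build a two-disk RAID 1 array (placeholder device names -- check lsblk)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Put a filesystem on it and mount it
    mkfs.ext4 /dev/md0
    mkdir -p /srv/tank
    mount /dev/md0 /srv/tank

    # Save the array definition so it assembles at boot, and watch the initial sync
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    cat /proc/mdstat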