Good lord, I am not able to read this entire thread, but here are just a few notes. Also, my background: I'm a "DevOps Engineer" and have been in the software development world for 10 years now. DevOps engineers are engineers who specifically deal with (primarily cloud) infrastructure, i.e., we build, automate, and maintain cloud "hardware" like virtual machines, databases, large storage devices, and networking, but we also help write, optimize, and deploy code to take advantage of that infrastructure, and tune the infrastructure for that code. I've built over 100 computers and have managed large labs and server rooms. I was doing a lot of that kind of stuff (without the cloud and the scale) for probably 10 years before I ever started my first real job, so I've been doing computer hardware for about 20 years now, and in addition to my professional work I do a lot of this kind of stuff at home and with friends.
I have had several Plex servers, and I currently attach my networked storage to my Nvidia Shield for Kodi.
I don't wanna go back through this entire thing and reply to everything, and I think there's a lot of good stuff from what I've skimmed, so I'm just gonna drop a few tips.
1. In a RAID array all your disks need to be the same size, or rather, every disk will only contribute the capacity of the smallest disk in the array, so uniformity in size is crucial. What doesn't have to be uniform, and shouldn't be? The models of the drives you're using. It's hard to know which disks from which manufacturers have issues, and the same model can have issues in one production run and not another. You're talking about something that spins multiple super-thin platters at 5,400 to 10,000 RPM, jammed into a 3.5" case, and many of the higher-capacity ones are now sealed and filled with helium. Each platter surface has its own read/write head, and on most drives all the heads ride on a single actuator. There's a lot that can go wrong. To that end, the best way to achieve stability and reliability is to never have more drives of the same model than you can tolerate the failure of. So if you have a RAID array of 6 disks that can tolerate the failure of 3 drives, use no more than 3 of any given model, and spread across brands too. Seagate and Toshiba are my preferred brands for NAS drives. I love WD and have been using their drives for YEARS, but I've seen a lot of failures from their "Reds". Granted, the failure rate on them isn't astronomical, but it's definitely higher on average than what I've seen from other NAS drive manufacturers.
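A quick back-of-the-envelope sketch of both of those rules, if it helps; the drive models, sizes, and parity count below are made up for illustration, not recommendations:

```python
from collections import Counter

# Hypothetical 6-drive pool with mixed models and sizes (TB).
drives = [
    ("Seagate IronWolf 8TB", 8), ("Seagate IronWolf 8TB", 8),
    ("Toshiba N300 8TB", 8),     ("Toshiba N300 8TB", 8),
    ("WD Red Plus 10TB", 10),    ("WD Red Plus 10TB", 10),
]
fault_tolerance = 2  # e.g. RAID6 / RAIDZ2 survives 2 failed drives

sizes = [size for _, size in drives]
smallest = min(sizes)

# Rule 1: every member only contributes the capacity of the smallest disk.
usable_tb = smallest * (len(drives) - fault_tolerance)
wasted_tb = sum(sizes) - smallest * len(drives)
print(f"Usable: ~{usable_tb} TB, capacity lost to mixed sizes: {wasted_tb} TB")

# Rule 2: never more drives of one model than failures you can tolerate.
for model, count in Counter(model for model, _ in drives).items():
    if count > fault_tolerance:
        print(f"WARNING: {count}x {model} exceeds fault tolerance of {fault_tolerance}")
```

In that example the two 10TB drives only count as 8TB each, and no single model outnumbers the two failures the pool can absorb.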
2. For anybody going the route of building a custom NAS instead of buying one, you should consider FreeNAS (or I guess TrueNAS Core now? I guess that's new since I last visited their site).
FreeNAS Storage Operating System | Open Source - FreeNAS. It's BSD-based, but what it has that's just incomparable to the others is ZFS and RAIDZ. ZFS is hands down the best file system for mass data storage out there. Maybe one day BTRFS will get better and rival it, but it's not there yet. Last time I checked they didn't have that custom NAS hardware either, which I'm going to have to seriously look into.
3. Just say no to RAID0, RAID1, and RAID10. And if you are using drives larger than 8TB, RAID6 is kind of the minimum; RAIDZ is better if you're going custom though. Here's some background on it. The guy was a little apocalyptic a little too soon, but his overall point that RAID5 isn't safe is still valid, especially with larger drives.
Why RAID 5 stops working in 2009 | ZDNet
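The math behind that article is worth spelling out: consumer and NAS drives are commonly spec'd at around one unrecoverable read error (URE) per 10^14 bits read, and a RAID5 rebuild has to read every surviving disk end to end. Here's a rough sketch of what that implies; the URE rate and drive sizes are spec-sheet assumptions, and real-world rates are debated, so treat it as a trend rather than a prediction:

```python
# Odds of finishing a RAID5 rebuild without hitting an unrecoverable
# read error (URE), assuming the common 1-per-1e14-bits rating and
# independent errors.
URE_PER_BIT = 1e-14

def clean_rebuild_probability(drive_tb: float, surviving_data_drives: int) -> float:
    """Probability that reading every surviving drive in full succeeds."""
    bits_to_read = drive_tb * 1e12 * 8 * surviving_data_drives
    return (1 - URE_PER_BIT) ** bits_to_read

# Example: a 6-disk RAID5 loses one drive, so 5 drives must be read in full.
for tb in (2, 4, 8, 12):
    p = clean_rebuild_probability(tb, surviving_data_drives=5)
    print(f"{tb:>2} TB drives: ~{p:.0%} chance of a clean rebuild")
```

With 8TB or 12TB drives that works out to single-digit odds under those assumptions, which is exactly why dual parity (RAID6/RAIDZ2) or better is the floor for big disks: the second parity drive lets the rebuild survive those read errors.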
4. RAID is not a backup solution.
RAID is NOT a backup! RAID is a stability/availability/data-security solution.