On Sun, February 4, 2024 9:55 pm, Nick Holland wrote:

> On 2/4/24 14:02, [email protected] wrote:
>> hello
>>
>> I will make a storage server, and RAID just has to be on it, right?
>
> maaaaybbe... (more later)
>
>> is RAID6 in the works, or maybe planned? I would like to know
>> what about RAID5 + CRYPTO or RAID6 + CRYPTO?
>> I read these
>> https://www.reddit.com/r/openbsd/comments/r4bydk/encrypted_raid6_support/
>> and from it
>> https://marc.info/?t=154348693400001&r=1&w=2
>
> best to start with authoritative sources that are up to date.
> https://man.openbsd.org/softraid
>
> you will note no reference to RAID6 in there.  Nor one-layer
> softraid 5C, like there is 1C.

I know, I read `man bioctl` instead..

> Is it "in the works"?  how would that matter?  If it is there, you
> can use it.  If it isn't...you can't.  If it is "in the works", it
> still isn't there.  So...I'd suggest just assuming it isn't there,
> and if it is added (or you add it), upgrade at your next HW refresh.
>
>> encryption is a must, I won't have it unencrypted
>> what about combining a hardware RAID controller doing RAID6 with softraid CRYPTO?
>> it would be cool to have redundancy like RAID6 and secure data with CRYPTO..
>> RAID1C is too expensive
>
> "RAID1C is too expensive" -- define expensive?

With parity RAID you get more usable storage, meaning more bang for the buck.
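(To put rough numbers on that, assuming six 10 TB disks purely as an
illustration:

    RAID6:           (6 - 2) x 10 TB = 40 TB usable, any 2 disks may fail
    RAID1C, 3 pairs: 3 x 10 TB = 30 TB usable, 1 disk per pair may fail
    RAID1C, 1 pair:  2 disks give only 10 TB usable

so parity RAID keeps more of the raw capacity for the same money.)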

> You can get a Really Big SATA disk for the price of a good HW RAID controller,
> and a good HW RAID controller generally requires a big, power hungry chassis.
> Oh...and if you are going to run HW RAID, you MUST have spare HW on-hand
> because you can't just take the drives off RAID controller X and put them on
> the RAID controller you just managed to find two years later when you need
> it and hope it will work.  And of course, that implies a second chassis,
> because these things tend to work together.

Someone said today that HW RAID is bad because it doesn't verify parity, so
to do that you need a good HW RAID controller (do those even exist anymore?)
and disks with 520-byte sectors (which also don't exist, unless you get lucky
with expensive enterprise gear), so I think HW RAID is a no-go :(
I want many TB, and possibly expandable (within the same RAID array).
HW RAID also sounds too complicated, and maybe more failure prone?
Because, as I understand it, OpenZFS can verify parity since it keeps
checksums of it, while with 520-byte sectors the 512 bytes hold the data and
the extra 8 hold the checksum, and the HW RAID controller has to support
that too........

>> does anyone run multi-TB storage servers with OpenBSD? what RAID do you run,
>> what about hardware RAID? I fear/dislike hardware RAID but I never tried it
>> I want to live without OpenZFS/FreeBSD, but not without encryption and
>> redundancy
>
> HW RAID works, but you better understand your controller.  Most people get
> their system running, pat themselves on the back, and are 100% hosed when
> they need to replace a drive and have no idea how.  HW raid is usually a
> little easier to figure out how to get running without reading the
> instructions, but much harder to figure out when things go wonky.

I think so, too.

> (granted, SW raid, you have to figure out how to detect and swap out a
> failed drive, but my SW RAID is more similar to yours than my HW RAID is
> to yours, and thus, I can probably help you out more.  x the number of
> people on misc@ :)

Heh, I am glad people here use stuff like this.. I don't have RAID yet, but I
plan on getting it.. my budget is maybe $400 to start, but I want to grow it
as time goes on and money comes in. I would build a non-standard homemade
chassis so it's cheap, and put in some DIY straps with sponge so it doesn't
vibrate.
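(From my reading of bioctl(8), detecting and replacing a failed softraid disk
is roughly the following; "sd4" as the softraid volume and "sd5" as the
replacement disk are only example names:

    # check volume status; a degraded volume shows up here
    bioctl sd4

    # after partitioning the replacement disk with a RAID partition,
    # rebuild the volume onto it
    bioctl -R /dev/sd5a sd4
)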

>> I don't have to be able to boot from it (can be another disk, which may
>> also be in RAID1C), but it would be nice
>>
>> I know OpenBSD is not meant to be run as a big fancy storage server with
>> maybe complicated reliability like RAID6 + CRYPTO, but what do you expect?
>> everyone loves OpenBSD and wants to use it for everything, not FreeBSD
>
> Realistically, for home use, I suspect OpenBSD will be more-than-sufficient
> for most people.  You just don't need the World's Fastest for most
> applications.  Case in point: I was whining to myself about the removal of
> softdeps from OpenBSD recently...it is a HUGE performance hit for a few of
> the systems I manage.  But you know what I discovered?  Worst case, even
> though one backup went from two hours to eight or more hours, it doesn't
> change what I accomplish in a day.  Wickedly fast is fun.  But the real
> performance problem is usually me.  It would work fine for many business
> uses, too.

What do you mean, removal of softdep?
Doesn't having softdep increase speed?????? I have softdep in /etc/fstab on
almost everything.
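For example, lines like this (device and mount point are just examples; on
releases that dropped softdep, the keyword no longer has any effect):

    /dev/sd0e /home ffs rw,nodev,nosuid,softdep 1 2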

I don't care about fast.. I wanted archive drives, but they are too slow, and
at higher TB counts an array is more easily prone to another failure, so
RAID6 is ideal; but also non-archive drives, because I will do more random
access and stuff.
I found some cheap surveillance HDDs, not sure if I want those.. the storage
server will be online 24/7 and I am not sure of the impact on the HDDs; some
said get NAS drives, but those are reeaaally expensive.


>> thank you, I am sorry if I ask too much, I don't demand, I just ask nicely
>
> OpenBSD Softraid RAID6 isn't a thing (yet?).
> OpenBSD Softraid RAID5C isn't a thing (yet?).
>
> Layered RAID isn't officially supported, but it works.  Layering crypto on
> top of a HW RAID works in every sense.  Softraid doesn't even know it is on
> HW RAID and doesn't care (though bioctl can be used to monitor both).
> Expecting the system to come up on its own with manually layered softraid
> is not wise.

Yeah, though I am not sure we can do that with HW RAID, because, as someone
said, good HW RAID controllers that support 520-byte sectors, and HDDs with
those sectors, are rare to find and probably too expensive, which is very
sad.
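(If one did try it, putting softraid CRYPTO on top of a HW RAID volume that
attaches as, say, sd0 should look roughly like this; the device names are
examples and this is untested:

    # give the HW RAID volume an MBR
    fdisk -iy sd0

    # in the disklabel editor, add an "a" partition of fstype RAID
    disklabel -E sd0

    # create the CRYPTO volume on top; it attaches as a new sdN
    bioctl -c C -l /dev/sd0a softraid0
)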

> If you want to layer your RAID, you will probably want to have your boot
> partitions/drives be RAID1C (or just RAID1), then the data stored on a
> big softraid "drive".  I would suggest NOT putting the layered RAID volumes
> in /etc/fstab, but rather have some kind of manual script that you run post
> boot to bring up the big data storage drives.  This way, when the power goes
> out and you need an fsck on your array, you don't have to go to the box to
> do it, you can do it remotely.
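(Such a post-boot script could be as small as this sketch; sd3a for the
crypto chunk, sd4 for the volume it attaches as, and /data are all example
names:

    #!/bin/sh
    # attach the CRYPTO volume; prompts for the passphrase
    bioctl -c C -l /dev/sd3a softraid0

    # then check and mount the data partition by hand
    fsck -p /dev/sd4d
    mount /dev/sd4d /data
)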

> RAID1 wins a lot of awards for just plain simplicity, and thus, some
> versatility.  So I'd suggest reconsidering your "need" for RAID5, and see
> if you can get by with RAID1C on a big pair of drives.
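(Which, going by bioctl(8), is a one-liner once both disks have RAID
partitions; sd1a and sd2a are example names:

    bioctl -c 1C -l /dev/sd1a,/dev/sd2a softraid0
)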

> And as for my "maaaaybbe" on automatically assuming you need RAID on a
> storage server, you MIGHT just find that multiple stand-alone systems will
> give you better redundancy for some applications.  RAID helps if your
> disk fails, but there are a lot of other things that fail on storage servers,
> and for SOME applications, having a whole other machine ready to roll is
> a better solution.  Granted, my FIRST choice is TWO machines running RAID
> storage, but that's not always practical.


Yes, I think about this, but it would need two computers running 24/7, and
probably offsite (where??), and would need a lot more money for HDDs, which
I don't have, so IDK.
I think the cheapest solution is unfortunately FreeBSD with OpenZFS for now,
because it has checksums on parity. What do you think?
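(For comparison, an encrypted double-parity pool there would be roughly this;
pool, dataset, and disk names are examples:

    # raidz2 is double parity, like RAID6
    zpool create tank raidz2 ada0 ada1 ada2 ada3

    # natively encrypted dataset on top; prompts for a passphrase
    zfs create -o encryption=on -o keyformat=passphrase tank/data
)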

>
> Nick.


Thanks Nick, pleasure to chat.

