Take the following example vinum config file:

drive a device /dev/da2a
drive b device /dev/da3a

volume rambo
    plex org concat
        sd length 512m drive a
    plex org concat
        sd length 512m drive b

The keyword "concat" specifies the relationship between the plexes and
the subdisks. All writes are applied to every plex of a given volume,
so the example above is a mirror with two plexes, each comprising one
very small subdisk. I understand this. What I don't understand is how
to implement a RAID-5 volume.

The only two vinum plex organizations listed in the handbook are
"striped" and "concat". How do I implement striping with distributed
parity (RAID-5)? This is not covered (or I missed it) in the handbook,
the vinum(4) manual page, the gvinum(8) manual page, or "The Complete
FreeBSD". There is a lot of great material on how vinum is implemented
and how great it will make your life, but painfully little on the
actual configuration syntax.
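
For reference, the striped organization that is documented takes a
stripe-size argument after the keyword; a minimal sketch along the
lines of the handbook examples (volume name, sizes, and stripe size
are my own choices):

    volume stripy
        plex org striped 512k
            sd length 128m drive a
            sd length 128m drive b

Here the plex interleaves its address space across both subdisks in
512 kB stripes (RAID-0), rather than mirroring as in the concat
example above.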

The vinum(4) man page describes a number of mappings between subdisks
and plexes, including "Concatenated", "Striped", and "RAID-5";
however, these are section headings, and in the example config files
the keywords used are "striped" and "concat", not "Striped" and
"Concatenated". There has to be at least one other subdisk-to-plex
mapping:

    "Vinum implements the RAID-0, RAID-1 and RAID-5 models, both
individually and in combination."
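
The "in combination" part I can at least picture with the documented
keywords: two striped plexes in one volume give a mirror of stripes
(RAID-0 plus RAID-1). A sketch of what I mean, with names and sizes of
my own invention:

    volume mirrorstripe
        plex org striped 512k
            sd length 512m drive a
            sd length 512m drive b
        plex org striped 512k
            sd length 512m drive c
            sd length 512m drive d

Each plex stripes across two drives, and the volume mirrors the two
plexes. It's the RAID-5 organization alone that I can't find the
syntax for.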
RAID-5 is mentioned several times, but no example is ever given. What
is the plex organization keyword: "raid5", "raid-5", "RAID-5", "5",
"parity", "disparity"? I could use trial and error, but there has to
be a document with this information somewhere. Other than rummaging
through the source code, is there any additional documentation on the
vinum configuration syntax? (A strict specification would be great!)
I found a FreeBSD Diary article that uses vinum, but it wasn't for
RAID-5, so no luck there.
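
For concreteness, the kind of config I'm guessing at would look
something like this, assuming the keyword follows the pattern of the
others and takes a stripe size like "striped" does (the keyword
"raid5" and its argument are exactly the unverified guess my question
is about):

    volume raidy
        plex org raid5 512k
            sd length 512m drive a
            sd length 512m drive b
            sd length 512m drive c

Note that a RAID-5 plex would need at least three subdisks, since one
stripe's worth of parity is distributed across the drives; with three
512m subdisks the usable volume size would be 1024m.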

freebsd-questions@freebsd.org mailing list