graid3 or graid5? with or without gjournal?
Hi all

I am busy putting together a new server. I want to avoid using the motherboard's RAID 'hardware' (Intel Matrix RAID) and instead do it all in software, so that if anything goes wrong with the motherboard the drives can work in some other box.

I have 4x 1TB drives available for the main data array. graid3 can only use 3 of them; graid5 can use all 4, but is it production ready? Any ideas?

The advantage of using graid3 at this point is that the extra 1TB drive I have can then go into the backup server, which needs more space anyway.

Having suffered data loss on the previous RAID5 (Intel Matrix) array when UFS went bananas due to one drive failing, I am looking at solutions/preventatives. Will gjournal be useful?

Thanks

--
DA Forsyth
Network Supervisor / Principal Technical Officer
Institute for Water Research
http://www.ru.ac.za/institutes/iwr/

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
Re: graid3 or graid5? with or without gjournal?
On 26/07/2011 08:48, DA Forsyth wrote:
> I have 4x 1TB drives available for the main data array. graid3 can
> only use 3; graid5 can use all 4, but is it production ready? Any ideas?

Take everything I say with a grain of salt; I am still testing these kinds of setups.

I do not know about graid5, but gvinum is very slow when used in a RAID5 config. This is especially true for metadata-intensive operations such as rsync. graid3 should be even worse, as RAID3 is supposed to work at the octet level (in software it actually writes whole sectors, but I do not know how it computes the parity).

Another thing that strongly encourages me to stay away from graid3, graid5 and gvinum RAID5 is that the examples were removed from the Handbook.

I ended up using gvinum with a mix of concat and stripe. Not as efficient in terms of data space, but much, much faster. In your case, for example, I would cut all the drives into two subdisks each and go for a RAID10 setup.
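A sketch of the kind of gvinum layout meant here: each drive contributes one subdisk to one of two striped plexes, and gvinum mirrors the plexes within the volume, giving RAID10. The device names, stripe size and `length 0` ("use the rest of the drive") are illustrative assumptions, not taken from the thread; the file would be fed to `gvinum create`:

```
# gvinum configuration sketch: two striped plexes mirrored in one volume.
# Drive names are hypothetical examples.
drive a device /dev/ad4
drive b device /dev/ad6
drive c device /dev/ad8
drive d device /dev/ad10
volume data
  plex org striped 256k
    sd length 0 drive a
    sd length 0 drive b
  plex org striped 256k
    sd length 0 drive c
    sd length 0 drive d
```

This trades half the raw capacity for redundancy, which matches the "not as efficient, but much faster" observation above.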
Re: graid3 or graid5? with or without gjournal?
On Tue, Jul 26, 2011 at 1:48 AM, DA Forsyth d.fors...@ru.ac.za wrote:
> The advantage of using graid3 at this point is that the extra 1TB drive
> I have can then go into the backup server which needs more space anyway.
> Having suffered data loss on the previous raid5 (intel matrix) array
> when UFS went bananas due to one drive failing, I am looking at
> solutions/preventatives. Will gjournal be useful?

graid3, for several reasons. Works great with gjournal:
http://lists.freebsd.org/pipermail/freebsd-geom/2007-May/002337.html

You could also consider RAIDZx.

--
Adam Vande More
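The graid3-plus-gjournal combination suggested here follows the usual pattern: label the array, put a journal on the resulting provider, then newfs with journaling enabled. A minimal sketch under assumed device names (ada1-ada3); these commands are FreeBSD-only and need root, so treat this as an illustration rather than a tested recipe:

```shell
# Load the GEOM modules (or set them in /boot/loader.conf).
kldload geom_raid3 geom_journal

# Three components: two data + one parity (graid3 needs 2^n + 1 members).
graid3 label -v data ada1 ada2 ada3

# Journal the array's provider, then newfs with gjournal support.
gjournal label /dev/raid3/data
newfs -J /dev/raid3/data.journal
mount -o async /dev/raid3/data.journal /mnt
```

With gjournal underneath, `async` mounting is safe because metadata and data hit the journal before being committed, which addresses the "UFS went bananas" failure mode in the original question.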
Re: graid3 or graid5? with or without gjournal?
On Tue, 2011-07-26 at 08:48 +0200, DA Forsyth wrote:
> Having suffered data loss on the previous raid5 (intel matrix) array
> when UFS went bananas due to one drive failing, I am looking at
> solutions/preventatives. Will gjournal be useful?

I prefer ZFS. All my servers (about 100) are running ZFS now (8.2 amd64), with dual drives of 1TB or 2TB each. I have had some drives die, but no loss of data, thanks to the ZFS mirrors.

Sergio
Is it possible to setup a graid3 on root?
Just wondering if it is possible to setup a striped root partition (graid3) and still be able to boot from it. Logically, it doesn't sound promising, but has anyone tried this? Thanks!

-Modulok-
Re: Is it possible to setup a graid3 on root?
On 9/25/09, Modulok modu...@gmail.com wrote:
> Just wondering if it is possible to setup a striped root partition
> (graid3) and still be able to boot from it.

Remember: to boot off a distributed RAID, the array needs to be known, established and turned on before the kernel loads. Software RAID is turned on after the kernel probes the hardware and starts running /etc/rc.

Software RAID1 is not striped across disks, so it can be booted from. All other software RAID levels spread the pieces of each file across drives, so no single member is readable on its own, and they shouldn't logically be bootable. I doubt you can do it even with graid3 compiled into the kernel. This is a big advantage of a hardware RAID card: the card takes care of the distributed pieces of a file.

Sorry it wasn't a positive answer.
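Since RAID1 is the one software level the reply says remains bootable, a gmirror root is the usual way to get redundancy on the boot volume. A sketch under assumed device names (ad0 holding the existing system, ad1 the new mirror half); FreeBSD-only, needs root, and labeling a live root disk is the risky part, so take this as an outline only:

```shell
# Mirror the existing root disk onto a second disk (hypothetical names).
gmirror label -v -b round-robin gm0 /dev/ad0
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
gmirror insert gm0 /dev/ad1
# Then update /etc/fstab to mount from /dev/mirror/gm0a
# (or gm0s1a, matching the old ad0 partition names) and reboot.
```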
Re: growing a graid3 array and growfs not growing ....
Sorry to revive an old thread. I am working with RAID3 vs RAID5 at home to understand the difference better, and this might help the OP.

RAID3 has a dedicated parity drive, and the number of consumers must be (2^n)+1:

  (2^1)+1 = 3
  (2^2)+1 = 5
  (2^3)+1 = 9

RAID5 uses distributed parity, with what seems to be an unlimited number of consumers.

About the fdisk error: see, under Providers, the line "Sectorsize: 2048". It means that /boot/mbr (512 bytes) does not match the sector size of the provider, 2048 bytes. You'll have to pad the MBR file out with zeros to fit the provider's sector size before fdisk will even consider placing it into the provider.

Now, a RAID array will only be as big as its smallest member/consumer. The 5 drives are probably rebuilding to the original size because, at each swap, the other 4 consumers were still the smaller size.

Hope this helps paint a bigger picture for the OP.

On 5/29/09, Vikash Badal vikash.ba...@is.co.za wrote:
> Can someone please advise why growfs would return:
> growfs: we are not growing (8388607-4194303)
> [full procedure and graid3 list output trimmed; see the original post below]
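The padding step described in the reply can be sketched like this. The file names are placeholders (on FreeBSD you would pad a copy of /boot/mbr, not the original), and only the size arithmetic matters here:

```shell
# Pad a 512-byte MBR image with zeros up to the provider's 2048-byte sector.
dd if=/dev/zero of=mbr.img bs=512 count=1 2>/dev/null   # stand-in for /boot/mbr
dd if=/dev/zero bs=1 count=$((2048 - 512)) 2>/dev/null >> mbr.img
wc -c < mbr.img   # now 2048 bytes: a multiple of the sector size
# fdisk -b mbr.img /dev/raid3/datavol                   # FreeBSD-only, needs root
```

Once the image length is a multiple of the provider's sector size, the "length must be a multiple of sector size" error from fdisk should go away.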
growing a graid3 array and growfs not growing ....
Can someone please advise why growfs would return:

  growfs: we are not growing (8388607-4194303)

I have a FreeBSD 7.2 server in a VM. I initially had 5 x 4G disks and created a RAID3 array:

  graid3 label datavol da2 da3 da4 da5 da6

I upgraded them to 5 x 8G disks, swapping out the virtual disks one at a time:

  graid3 remove -n 0 datavol
  graid3 insert -n 0 datavol da2
  [wait]
  ...
  graid3 remove -n 4 datavol
  graid3 insert -n 4 datavol da6
  [wait]
  graid3 stop datavol
  growfs /dev/raid3/datavol

Error message:

  growfs: we are not growing (8388607-4194303)

vix-sw-raid# graid3 list
Geom name: datavol
State: COMPLETE
Components: 5
Flags: NONE
GenID: 0
SyncID: 1
ID: 2704170828
Zone64kFailed: 0
Zone64kRequested: 0
Zone16kFailed: 0
Zone16kRequested: 0
Zone4kFailed: 0
Zone4kRequested: 524
Providers:
1. Name: raid3/datavol
   Mediasize: 34359736320 (32G)
   Sectorsize: 2048
   Mode: r0w0e0
Consumers:
1. Name: da2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Flags: NONE
   GenID: 0
   SyncID: 1
   Number: 0
   Type: DATA
2. Name: da3
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Flags: NONE
   GenID: 0
   SyncID: 1
   Number: 1
   Type: DATA
3. Name: da4
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Flags: NONE
   GenID: 0
   SyncID: 1
   Number: 2
   Type: DATA
4. Name: da5
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Flags: NONE
   GenID: 0
   SyncID: 1
   Number: 3
   Type: DATA
5. Name: da6
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Flags: NONE
   GenID: 0
   SyncID: 1
   Number: 4
   Type: PARITY

fdisk /dev/raid3/datavol
*** Working on device /dev/raid3/datavol ***
parameters extracted from in-core disklabel are:
cylinders=1044 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=1044 heads=255 sectors/track=63 (16065 blks/cyl)
fdisk: invalid fdisk partition table found
fdisk: /boot/mbr: length must be a multiple of sector size

What am I missing?

Please note: This email and its content are subject to the disclaimer as displayed at the following link http://www.is.co.za/legal/E-mail+Confidentiality+Notice+and+Disclaimer.htm. Should you not have Web access, send a mail to disclaim...@is.co.za and a copy will be emailed to you.
Re: graid3
> why can't it be, say, 5 disks + parity?

The reason is in the definition of RAID 3, which says that updates to the RAID device must be atomic. In some ideal universe RAID 3 is implemented in hardware and operates on individual bytes, but here we cannot write to the drives in units other than the sector size, and the sector size is 512 bytes.

OK, I understand: the RAID sector size must be 2^something, so the number of drives must be 2^something + 1. Thanks.
Re: graid3
Wojciech Puchar wrote:
> i read the graid3 manual and http://www.acnc.com/04_01_03.html to make
> sure i know what's RAID3 and i don't understand few things.
>
> 1) The number of components must be equal to 3, 5, 9, 17, etc. (2^n + 1).
> why can't it be, say, 5 disks + parity?

The reason is in the definition of RAID 3, which says that updates to the RAID device must be atomic. In some ideal universe RAID 3 is implemented in hardware and operates on individual bytes, but here we cannot write to the drives in units other than the sector size, and the sector size is 512 bytes.

Parity needs to be calculated with regard to each sector, so at the sector level the minimum is three sectors: two for data and one for parity. This means the high-level atomic sector size is 2*512 = 1024 bytes. If you inspect your RAID 3 devices, you'll see just that:

  # diskinfo -v /dev/raid3/homes
  /dev/raid3/homes
          1024            # sectorsize
          107374181376    # mediasize in bytes (100G)
          104857599       # mediasize in sectors

But each drive has a normal sector size of 512:

  # diskinfo -v /dev/ad4
  /dev/ad4
          512             # sectorsize
          80026361856     # mediasize in bytes (75G)
          156301488       # mediasize in sectors

Sector sizes cannot be arbitrary, for various reasons mostly to do with how memory pages and virtual memory are managed. In short, they need to be powers of two. This restricts the high-level (big) sector size to exactly one of the following values: 1024, 2048, 4096, 8192, etc. Since drive sectors are fixed at 512 bytes, the number of *data* drives must also be a power of two: 2, 4, 8, 16, etc. Add one more drive for the parity and you get the starting sequence: 3, 5, 9, 17.

In practice, this means that if you have 17 drives in RAID3, the sector size of the array itself will be 16*512 = 8192. Each write to the array will update all 17 drives before returning (one sector on each drive, ensuring an atomic operation). Note that the file system created on such an array will also have its characteristics adjusted to that sector size (the fragment size will be the sector size).

> 2) -r Use parity component for reading in round-robin fashion. [...]
> how could the parity disk speed up random I/O?

It will work well only when the number of drives is small (i.e. three drives), by using the parity drive as a valid source of data, avoiding some seeks on the data drives. I think that, theoretically, you can save at most 1/3 of all seeks - I don't know where the 40% number comes from.
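The power-of-two constraint above reduces to simple arithmetic and can be checked mechanically. A small sketch:

```shell
# graid3 component counts are (2^n)+1; the array's sector size is the
# number of *data* drives (components minus the parity drive) times 512.
for components in 3 5 9 17; do
  data=$((components - 1))
  echo "$components components -> array sectorsize $((data * 512))"
done
```

With 3 components this reproduces the 1024-byte sector size shown in the diskinfo output above, and with 17 components the 8192-byte figure from the text.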
graid3
I read the graid3 manual and http://www.acnc.com/04_01_03.html to make sure I know what RAID3 is, and I don't understand a few things.

1) "The number of components must be equal to 3, 5, 9, 17, etc. (2^n + 1)."
Why can't it be, say, 5 disks + parity?

2) "-r Use parity component for reading in round-robin fashion. Without this option the parity component is not used at all for reading operations when the device is in a complete state. With this option specified, random I/O read operations are even 40% faster, but sequential reads are slower. One cannot use this option if the -w option is also specified."
How could the parity disk speed up random I/O?

Is there any description of how graid3 actually works?
Re: graid3
Hello,

1. I don't see such a thing on the web link you gave (acnc). In my opinion this rule is pure nonsense, as RAID 3 just uses a separate drive to store stripe parity. You need at least 3 drives, one for parity and 2 for data, but beyond that you can do RAID 3 with as many drives as you want.

2. Because the RAID controller/software can reconstruct the data with only n-1 of the n drives in the array. In random I/O this can be quite useful, while in sequential reads the parity drive is not of much use.

On Fri, Jul 25, 2008 at 11:46 AM, Wojciech Puchar [EMAIL PROTECTED] wrote:
> 1) The number of components must be equal to 3, 5, 9, 17, etc. (2^n + 1).
> why can't it be, say, 5 disks + parity?
> 2) -r Use parity component for reading in round-robin fashion. [...]
> how could the parity disk speed up random I/O?
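The reconstruction in point 2 works because RAID3 parity is a bytewise XOR of the data drives: XORing the parity with the surviving data regenerates the missing byte. A toy illustration with arbitrary values:

```shell
# Per-byte parity as a RAID3 array would compute it (values are arbitrary).
d0=170                     # 0xAA, byte from data drive 0
d1=85                      # 0x55, byte from data drive 1
parity=$((d0 ^ d1))        # byte stored on the parity drive
# Drive 0 dies: rebuild its byte from parity and the surviving drive.
rebuilt=$((parity ^ d1))
echo "parity=$parity rebuilt=$rebuilt"   # prints parity=255 rebuilt=170
```

Because XOR is symmetric, the parity drive is interchangeable with any data drive for reads, which is how the -r option can spread seeks across one more spindle.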
creating a broken graid3 array?
Is it possible to create a (degraded) graid3 array with only two (or one less than the planned total) providers?

I'm asking since I would like to move from my current one-disk setup to a three-disk RAID3 array, but I'd like the disk currently in use to be a member of the array, and I don't have anywhere to conveniently back up the data already there. I'd like to create a degraded graid3 array with the two new components, copy the data from the current disk to the array, and then add the current disk into the array.

If that's not a possibility, can anyone suggest a way to get the same end result?

Thanks,
JN
Re: creating a broken graid3 array?
On 11/23/06, John Nielsen [EMAIL PROTECTED] wrote:
> Is it possible to create a (degraded) graid3 array

Maybe you'll be able to create the graid3 with md0 as the third member (backed by a sparse file, for example), later emulate a failure (md0 disappears), and insert your hard drive.
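The sparse-file trick suggested above can be sketched as follows. Only the truncate line runs anywhere; the rest are FreeBSD-only commands that need root, and the device names are assumptions (ada0 the old disk, ada1/ada2 the new ones):

```shell
# Create a sparse backing file with the apparent size of a member disk
# (sized down here for illustration; in practice use >= the real disk size).
truncate -s 16m /var/tmp/fake-member.img

# FreeBSD-only, run as root:
# mdconfig -a -t vnode -f /var/tmp/fake-member.img -u 0
# graid3 label -v data ada1 ada2 md0     # array is complete, but md0 is fake
# newfs /dev/raid3/data && mount /dev/raid3/data /mnt
# ...copy everything from the old disk (ada0) into /mnt...
# graid3 remove -n 2 data                # md0 "fails"; array runs degraded
# mdconfig -d -u 0
# graid3 insert -n 2 data ada0           # old disk joins and rebuilds
```

The sparse file occupies almost no real space, so md0 can pretend to be a full-size member long enough to label the array, provided nothing substantial is written before it is removed.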
Re: creating a broken graid3 array?
John,

On 11/23/06, John Nielsen [EMAIL PROTECTED] wrote:
> Is it possible to create a (degraded) graid3 array with only two (or
> one less than the planned total) providers?

While I know close to nothing about RAID, here is what I think:

1. You have no backup (otherwise you could pull it off).
2. You are trying to achieve your goal through a tricky method (me thinks, anyways :-)

Is the loss of your data worth less than the cost of an extra HD? If so, buy another HD. If not, make a clean install? And assuming a 3-HD RAID setup, would it not be wise to have a spare HD anyway? What's the point?

regards,
usleep
Re: creating a broken graid3 array?
> is the loss of your data worth less than the cost of an extra hd? if
> so, buy another hd. if not, make a clean install?

should read:

is the cost of an extra hd less than the value of your data/install? if so, buy another hd. if not, make a clean install?

regards,
usleep
Re: creating a broken graid3 array?
On Thursday 23 November 2006 17:10, [EMAIL PROTECTED] wrote:
> is the cost of an extra hd less than the value of your data/install?
> if so, buy another hd. if not, make a clean install?

I have backups of the data that can't be reproduced. I just don't have room for some of the larger files (CD ISOs, DVD rips, etc). It would be inconvenient to lose the data, but far from catastrophic.

One goal of this exercise is to get some redundancy, but at least as important are the goals of learning more about something I haven't used before (graid3) and getting a larger volume on a limited budget. Besides, trickery is where the fun comes in. :)

I appreciate the response, though. It's a point I might have raised myself.

JN
Re: creating a broken graid3 array?
On Thursday 23 November 2006 16:00, Andrew Pantyukhin wrote:
> Maybe you'll be able to create graid3 with md0 as the third member
> (based on sparse file for example) and later emulate a failure (md0
> disappears) and insert your hard drive.

That's the thought I had as well after I posted. I'll probably give that a try once I'm ready to get started.

Thanks,
JN
graid3 lockups?
I've got a dual-proc machine with 3 ATA drives that I'd like to roll together in a graid3 configuration. Two of the drives are 250G, and the third is 300G. I'm using the raw disk for the two 250G drives (ad4 and ad6) and the 'a' partition (which is 250G) of the 300G disk (ad9). ad9 looks like this:

  #        size    offset    fstype   [fsize bsize bps/cpg]
  a:  488397168        16    4.2BSD        0     0     0
  c:  586072368         0    unused     2048 16384        # "raw" part, don't edit
  d:   97675184 488397184    4.2BSD     2048 16384 28552

I need to transfer all the data from partition d on ad9 to the new RAID3 volume. Creating the volume and running newfs work fine. However, when I begin copying the files from ad9 into the raid3 volume, the system locks hard.

Naturally the hard lock isn't good, but I'm also curious whether my configuration is actually valid. Any ideas?