In response to Mark Mielke [EMAIL PROTECTED]:
Bill Moran wrote:
What do you mean "heard of"? Which raid system do you know of that reads
all drives for RAID 1?
I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing
consistency checking at that point.
Shane Ambler wrote:
I achieve something closer to +20% to +60% over the theoretical
performance of a single disk with my four-disk RAID 1+0 partitions.
If a good 4-disk SATA RAID 1+0 can achieve 60% more throughput than a
single SATA disk, what sort of percentage can be achieved from a
On Dec 26, 2007, at 10:21 AM, Bill Moran wrote:
RAID 10.
I snipped the rest of your message because none of it matters. Never use
RAID 5 on a database system. Ever. There is absolutely NO reason to
ever put yourself through that much suffering. If you hate yourself
that much just commit suicide, it's less drastic.
--
Bill Moran
On Dec 26, 2007, at 4:28 PM, [EMAIL PROTECTED] wrote:
now, if you can afford solid-state drives which don't have noticeable
seek times, things are completely different ;-)
Who makes one with infinite lifetime? The only ones I know of are
built using RAM and have disk drive backup with
On Wed, 26 Dec 2007, Mark Mielke wrote:
I believe hardware RAID 5 is also horrible, but since the hardware hides
it from the application, a hardware RAID 5 user might not care.
Typically anything doing hardware RAID 5 also has a reasonably sized write
cache on the controller, which softens
Mark Mielke Wrote:
In my experience, software RAID 5 is horrible. Write performance can
decrease below the speed of one disk on its own, and read performance will
not be significantly more than RAID 1+0 as the number of stripes has only
increased from 2 to 3, and if reading while writing, you
David Lang Wrote:
with only four drives the space difference between raid 1+0 and raid 5
isn't that much, but when you do a write you must write to two drives (the
drive holding the data you are changing, and the drive that holds the
parity data for that stripe, possibly needing to read
seek/read/calculate/seek/write since the drive moves on after the
read), when you read you must read _all_ drives in the set to check
the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
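The seek/read/calculate/seek/write sequence described above can be sketched with XOR parity, the scheme typical RAID 5 implementations use. A minimal sketch; the block values are made up for illustration:

```python
# Sketch of RAID 5's small-write path (hypothetical block values,
# XOR parity as used by typical RAID 5 implementations).

def xor_parity(blocks):
    """The parity block is the XOR of all data blocks in the stripe."""
    p = 0
    for b in blocks:
        p ^= b
    return p

# A 4-disk array: 3 data blocks + 1 parity block per stripe.
stripe = [0b1010, 0b0110, 0b1100]
parity = xor_parity(stripe)

# Read-modify-write: updating one block needs the old data and old
# parity (2 reads), then the new data and new parity (2 writes) --
# the other data disks are never touched.
old_data, new_data = stripe[1], 0b0001
new_parity = parity ^ old_data ^ new_data

stripe[1] = new_data
assert new_parity == xor_parity(stripe)  # parity stays consistent
```

Note the drive-level cost: between reading the old data and writing the new block, the platter has rotated past the sector, which is exactly why the sequence above costs an extra rotation per disk.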
[EMAIL PROTECTED] wrote:
I could see a raid 1 array not doing consistency checking (after all,
it has no way of knowing what's right if it finds an error), but since
raid 5/6 can repair the data I would expect them to do the checking
each time.
Your messages are spread across the thread. :-)
Bill Moran wrote:
In order to recalculate the parity, it has to have data from all disks. Thus,
if you have 4 disks, it has to read 2 (the unknown data blocks included in
the parity calculation) then write 2 (the new data block and the new
parity data). Caching can help some, but if your data
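The 4-disk figure above can be seen as a special case of the two standard RAID 5 write strategies. A sketch under that assumption; the function and its cost breakdown are illustrative, not any particular controller's logic:

```python
# Sketch of the I/O cost of updating a single data block on an
# N-disk RAID 5 array, counting the two standard strategies.

def raid5_write_cost(n_disks):
    # Read-modify-write: read old data + old parity, write new data
    # + new parity.  Fixed cost regardless of array size.
    rmw = {"reads": 2, "writes": 2}
    # Reconstruct-write: read all the *other* data blocks in the
    # stripe, recompute parity from scratch, write data + parity.
    reconstruct = {"reads": n_disks - 2, "writes": 2}
    return rmw, reconstruct

rmw, rw = raid5_write_cost(4)
# With 4 disks both strategies cost 2 reads + 2 writes, matching the
# "read 2 then write 2" figure above; on wider arrays read-modify-
# write stays at 4 I/Os while reconstruct-write keeps growing.
```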
[EMAIL PROTECTED] wrote:
Thanks for the explanation David. It's good to know not only what but also
why. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be
read: the one with the data and the parity disk?
no, because the parity is of the sort (A+B+C+P) mod X = 0
so if X=10
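The checksum scheme described above can be sketched directly; the values and X below are made up for illustration:

```python
# Sketch of a checksum-style parity: choose P so that
# (A + B + C + P) mod X == 0.

X = 10
data = [3, 7, 6]               # A, B, C (illustrative values)
P = (-sum(data)) % X           # here P = 4, since 3+7+6+4 = 20

assert (sum(data) + P) % X == 0

# Verifying the stripe means reading *every* block: drop any single
# value and the check can no longer be evaluated.  Conversely, any
# one lost block is recoverable from all the others:
lost = data[1]
recovered = (-(data[0] + data[2] + P)) % X
assert recovered == lost
```

This is why a read that wants to verify integrity must touch all drives in the set, not just the data disk and the parity disk.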
[EMAIL PROTECTED] wrote:
however I was addressing the point that for reads you can't do any
checking until you have read in all the blocks.
if you never check the consistency, how will it ever be proven otherwise?
A scheme often used is to mark the disk/slice as clean during clean system
On Wed, 26 Dec 2007, [EMAIL PROTECTED] wrote:
yes, the two linux software implementations only read from one disk, but I
have seen hardware implementations where it reads from both drives, and if
they disagree it returns a read error rather than possibly invalid data (it's
up to the admin to
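The verify-on-read mirror behaviour described above can be sketched as follows. A toy model, with mirrors as plain byte strings and a made-up function name:

```python
# Sketch of verify-on-read RAID 1: read the block from both mirrors
# and fail the read on a mismatch rather than returning possibly-bad
# data.  (Hypothetical model, not any real controller's interface.)

def mirrored_read(block_no, mirror_a, mirror_b, block_size=4):
    off = block_no * block_size
    a = mirror_a[off:off + block_size]
    b = mirror_b[off:off + block_size]
    if a != b:
        # The array cannot tell which copy is right -- surface an
        # error and leave the decision to the administrator.
        raise IOError(f"mirror mismatch in block {block_no}")
    return a

disk0 = b"aaaabbbbcccc"
disk1 = b"aaaaXXXXcccc"   # silent corruption in block 1

mirrored_read(0, disk0, disk1)   # fine: copies agree
try:
    mirrored_read(1, disk0, disk1)
except IOError as e:
    print(e)   # mirror mismatch in block 1
```

The trade-off is obvious: this halves the read throughput advantage of a mirror, which is why read-balancing (one disk per read) is the common default.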
On Thu, 27 Dec 2007, Shane Ambler wrote:
So in theory a modern RAID 1 setup can be configured to get similar read
speeds as RAID 0 but would still drop to single disk speeds (or similar) when
writing, but RAID 0 can get the faster write performance.
The trick is, you need a perfect
Shane Ambler wrote:
So in theory a modern RAID 1 setup can be configured to get similar
read speeds as RAID 0 but would still drop to single disk speeds (or
similar) when writing, but RAID 0 can get the faster write performance.
Unfortunately, it's a bit more complicated than that. RAID 1 has
Mark Mielke wrote:
Shane Ambler wrote:
So in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could
deliver 1200MB/s of data to RAM, which is also assuming that all 4
channels have their own data path to RAM and aren't sharing.
(anyone know how segregated the on board controllers such as
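The 4x 300MB/s arithmetic above can be sketched as a best-case estimate. These are interface rates under idealised assumptions (no bus contention, no seek overhead, reads balanced perfectly across spindles), and the function is purely illustrative:

```python
# Best-case sequential throughput (MB/s) for a few RAID levels,
# ignoring bus contention and real sustained disk rates.

def theoretical_throughput(per_disk, n, level):
    if level == "raid0":                      # stripe across all disks
        return {"read": per_disk * n, "write": per_disk * n}
    if level == "raid10":                     # n/2 mirrored pairs, striped
        return {"read": per_disk * n,         # reads balanced over all disks
                "write": per_disk * n // 2}   # each write hits both mirrors
    if level == "raid1":                      # n-way mirror
        return {"read": per_disk * n,         # needs a read-balancing driver
                "write": per_disk}
    raise ValueError(level)

print(theoretical_throughput(300, 4, "raid10"))
# {'read': 1200, 'write': 600}
```

This matches the 1200MB/s read figure above and illustrates the earlier point that mirrored setups fall back toward single-disk (or pair-striped) speeds on writes.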