On Fri, 22 Jun 2007, David Greaves wrote:
That's not a bad thing - until you look at the complexity it brings - and
then consider the impact and exceptions when you do, e.g. hardware
acceleration? md information fed up to the fs layer for xfs? simple
long-term maintenance?
Often these
Bill Davidsen wrote:
David Greaves wrote:
[EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, David Greaves wrote:
If you end up 'fiddling' in md because someone specified
--assume-clean on a raid5 [in this case just to save a few minutes
*testing time* on system with a heavily choked bus!] then
On Fri, 22 Jun 2007, Bill Davidsen wrote:
By delaying parity computation until the first write to a stripe only the
growth of a filesystem is slowed, and all data are protected without waiting
for the lengthy check. The rebuild speed can be set very low, because
on-demand rebuild will do
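The deferred-parity idea quoted above can be sketched in a few lines. This is a toy Python model, not md code: `LazyStripe` and `xor_blocks` are made-up names, and it assumes a stripe small enough to recompute whole-stripe parity on every write.

```python
from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR of equal-length chunks: RAID5's parity function
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

class LazyStripe:
    """Toy stripe whose parity is computed on first write rather than
    during an up-front resync (the behaviour the post argues for)."""
    def __init__(self, data_chunks):
        self.data = list(data_chunks)
        self.parity = None              # deliberately left uninitialised

    def write(self, idx, chunk):
        self.data[idx] = chunk
        # on-demand: recompute parity for the whole stripe on any write
        self.parity = xor_blocks(self.data)

stripe = LazyStripe([b'\x00' * 4, b'\xff' * 4, b'\x0f' * 4])
assert stripe.parity is None            # no initial resync cost paid
stripe.write(0, b'\xaa' * 4)
assert stripe.parity == xor_blocks(stripe.data)
```

In this model the expensive whole-array pass disappears, and only stripes that are actually written ever pay for a parity computation.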
Neil Brown wrote:
This isn't quite right.
Thanks :)
Firstly, it is mdadm which decided to make one drive a 'spare' for
raid5, not the kernel.
Secondly, it only applies to raid5, not raid6 or raid1 or raid10.
For raid6, the initial resync (just like the resync after an unclean
shutdown)
[EMAIL PROTECTED] wrote:
On Thu, 21 Jun 2007, David Chinner wrote:
On Thu, Jun 21, 2007 at 12:56:44PM +1000, Neil Brown wrote:
I have that - apparently naive - idea that drives use strong checksum,
and will never return bad data, only good data or an error. If this
isn't right, then it
On 21 Jun 2007, Neil Brown stated:
I have that - apparently naive - idea that drives use strong checksum,
and will never return bad data, only good data or an error. If this
isn't right, then it would really help to understand what the causes of
other failures are before working out how to
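The "good data or an error" assumption being debated can be modelled directly. This is an illustrative Python sketch only: real drives do ECC per sector in firmware, not CRC32 in software, and `write_sector`/`read_sector` are hypothetical helpers.

```python
import zlib

SECTOR = 512

def write_sector(store, lba, data):
    """Store data together with a CRC32, standing in for the drive's ECC."""
    assert len(data) == SECTOR
    store[lba] = (data, zlib.crc32(data))

def read_sector(store, lba):
    """Model of the assumption: a drive either returns data whose
    checksum verifies, or reports a read error -- it never silently
    hands back corrupt bytes."""
    data, crc = store[lba]
    if zlib.crc32(data) != crc:
        raise IOError(f"unrecoverable read error at LBA {lba}")
    return data

disk = {}
write_sector(disk, 0, b'\x42' * SECTOR)
assert read_sector(disk, 0) == b'\x42' * SECTOR

# corruption that the checksum covers is detected, not returned:
data, crc = disk[0]
disk[0] = (b'\x00' + data[1:], crc)     # flip a byte, keep stale checksum
try:
    read_sector(disk, 0)
    detected = False
except IOError:
    detected = True
assert detected
```

What the model cannot show is the case the thread worries about: corruption introduced somewhere the checksum does not cover (cables, controller, memory), which arrives looking like good data.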
I didn't get a comment on my suggestion for a quick and dirty fix for
--assume-clean issues...
Bill Davidsen wrote:
Neil Brown wrote:
On Thursday June 14, [EMAIL PROTECTED] wrote:
it's now churning away 'rebuilding' the brand new array.
a few questions/thoughts.
why does it need to do a
[EMAIL PROTECTED] wrote:
one channel, 2 OS drives plus the 45 drives in the array.
Huh? You can only have 16 devices on a scsi bus, counting the host
adapter. And I don't think you can even manage that much reliably with
the newer higher speed versions, at least not without some very
On Mon, Jun 18, 2007 at 02:56:10PM -0700, [EMAIL PROTECTED] wrote:
yes, I'm using Promise drive shelves, I have them configured to export
the 15 drives as 15 LUNs on a single ID.
I'm going to be using this as a huge circular buffer that will just be
overwritten eventually 99% of the
On Tue, 19 Jun 2007, Lennart Sorensen wrote:
On Mon, Jun 18, 2007 at 02:56:10PM -0700, [EMAIL PROTECTED] wrote:
yes, I'm using Promise drive shelves, I have them configured to export
the 15 drives as 15 LUNs on a single ID.
I'm going to be using this as a huge circular buffer that will just
[EMAIL PROTECTED] wrote:
in my case it takes 2+ days to resync the array before I can do any
performance testing with it. for some reason it's only doing the rebuild
at ~5M/sec (even though I've increased the min and max rebuild speeds
and a dd to the array seems to be ~44M/sec, even during
On Mon, 18 Jun 2007, Brendan Conoboy wrote:
[EMAIL PROTECTED] wrote:
in my case it takes 2+ days to resync the array before I can do any
performance testing with it. for some reason it's only doing the rebuild
at ~5M/sec (even though I've increased the min and max rebuild speeds and
a dd
On Mon, 18 Jun 2007, Lennart Sorensen wrote:
On Mon, Jun 18, 2007 at 10:28:38AM -0700, [EMAIL PROTECTED] wrote:
I plan to test the different configurations.
however, if I was saturating the bus with the reconstruct how can I fire
off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only
On Mon, 18 Jun 2007, Brendan Conoboy wrote:
[EMAIL PROTECTED] wrote:
I plan to test the different configurations.
however, if I was saturating the bus with the reconstruct how can I fire
off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only slowing the
reconstruct to ~4M/sec?
On Mon, 18 Jun 2007, Lennart Sorensen wrote:
On Mon, Jun 18, 2007 at 11:12:45AM -0700, [EMAIL PROTECTED] wrote:
simple ultra-wide SCSI to a single controller.
Hmm, isn't ultra-wide limited to 40MB/s? Is it Ultra320 wide? That
could do a lot more, and 220MB/s sounds plausible for 320 scsi.
[EMAIL PROTECTED] wrote:
yes, sorry, ultra 320 wide.
Exactly how many channels and drives?
--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]
On Mon, 18 Jun 2007, Brendan Conoboy wrote:
[EMAIL PROTECTED] wrote:
yes, sorry, ultra 320 wide.
Exactly how many channels and drives?
one channel, 2 OS drives plus the 45 drives in the array.
yes I realize that there will be bottlenecks with this, the large capacity
is to handle longer
Neil Brown [EMAIL PROTECTED] writes:
Having the filesystem duplicate data, store checksums, and be able to
find a different copy if the first one it chose was bad is very
sensible and cannot be done by just putting the filesystem on RAID.
Apropos checksums: since RAID5 copies/xors anyway, it
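A minimal sketch of the "find a different copy" behaviour Neil describes, assuming the filesystem keeps its own checksum per block (as e.g. ZFS does). `read_best_copy` is a hypothetical name for illustration, not a real md or VFS interface.

```python
import hashlib

def read_best_copy(copies, expected_digest):
    """Return the index and contents of the first replica whose SHA-256
    matches the filesystem's stored checksum; only if every copy fails
    verification does the read error out."""
    for i, data in enumerate(copies):
        if hashlib.sha256(data).hexdigest() == expected_digest:
            return i, data
    raise IOError("all copies failed checksum verification")

good = b"journal block contents"
digest = hashlib.sha256(good).hexdigest()

# copy 0 has silently rotted; copy 1 is still intact
idx, data = read_best_copy([b"jOurnal block contents", good], digest)
assert idx == 1 and data == good
```

This is exactly what a bare RAID layer cannot do on its own: without the filesystem's checksum it has no way to tell which mirror half is the bad one.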
dean gaudet wrote:
On Sat, 16 Jun 2007, Wakko Warner wrote:
When I've had an unclean shutdown on one of my systems (10x 50gb raid5) it's
always slowed the system down when booting up. Quite significantly, I must
say. I wait until I can log in and change the rebuild max speed to slow it
Neil Brown wrote:
On Thursday June 14, [EMAIL PROTECTED] wrote:
On Fri, 15 Jun 2007, Neil Brown wrote:
On Thursday June 14, [EMAIL PROTECTED] wrote:
what is the limit for the number of devices that can be in a single array?
I'm trying to build a 45x750G array and want to
dean gaudet wrote:
On Sun, 17 Jun 2007, Wakko Warner wrote:
I use an external write-intent bitmap on a raid1 to avoid this... you
could use an internal bitmap but that slows down i/o too much for my tastes.
I also use an external xfs journal for the same reason. 2 disk raid1 for
On Sat, Jun 16, 2007 at 07:59:29AM +1000, Neil Brown wrote:
Combining these thoughts, it would make a lot of sense for the
filesystem to be able to say to the block device "That block looks
wrong - can you find me another copy to try?". That is an example of
the sort of closer integration
On Sat, 16 Jun 2007, Neil Brown wrote:
It would be possible to have a 'this is not initialised' flag on the
array, and if that is not set, always do a reconstruct-write rather
than a read-modify-write. But the first time you have an unclean
shutdown you are going to resync all the parity
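Why an uninitialised-parity flag forces reconstruct-write can be shown with a toy model. The `reconstruct_write` and `read_modify_write` helpers below are hypothetical illustrations; real md operates on pages and queues, not tiny byte strings.

```python
def xor(*chunks):
    # byte-wise XOR of equal-length chunks (RAID5 parity)
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def reconstruct_write(data, idx, new):
    """Read ALL data chunks and recompute parity from scratch: correct
    even when the on-disk parity was never initialised."""
    data = list(data)
    data[idx] = new
    return data, xor(*data)

def read_modify_write(data, parity, idx, new):
    """Shortcut: new_parity = old_parity ^ old_chunk ^ new_chunk.
    Only valid if the old parity was correct to begin with."""
    new_parity = xor(parity, data[idx], new)
    data = list(data)
    data[idx] = new
    return data, new_parity

stripe = [b'\x01\x01', b'\x02\x02', b'\x04\x04']
stale  = b'\x00\x00'                     # parity never written: garbage
d1, p1 = reconstruct_write(stripe, 0, b'\x08\x08')
d2, p2 = read_modify_write(stripe, stale, 0, b'\x08\x08')
assert p1 == b'\x0e\x0e'                 # true parity of the new stripe
assert p2 == b'\x09\x09'                 # RMW propagated the stale value
assert p2 != xor(*d2)                    # stripe now silently inconsistent
```

The second assertion is the danger Neil points at: read-modify-write on an uninitialised stripe bakes the garbage parity in, and it only bites later, when a disk fails and that parity is used for reconstruction.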
On Friday June 15, [EMAIL PROTECTED] wrote:
As I understand the way
raid works, when you write a block to the array, it will have to read all
the other blocks in the stripe and recalculate the parity and write it out.
Your understanding is
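When the parity is known-good, md does not in fact need to read all the other blocks in the stripe: reading the old data chunk and the old parity is enough. A small Python check of that identity (toy one-byte chunks, not md's actual I/O path):

```python
def xor(*chunks):
    # byte-wise XOR of equal-length chunks (RAID5 parity)
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

# one stripe: three data chunks plus their parity
data = [b'\x11', b'\x22', b'\x44']
parity = xor(*data)                      # stripe starts out consistent

new = b'\x55'                            # overwrite data[0]

# full recomputation: read every OTHER data chunk in the stripe
full = xor(new, data[1], data[2])

# read-modify-write: read only the old chunk and the old parity
shortcut = xor(parity, data[0], new)

assert full == shortcut == b'\x33'
```

For wide stripes (like the 45-drive array in this thread) the shortcut turns a write from N-1 reads into two, which is why the correctness of the existing parity matters so much.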
24 matches