RE: How many drives are bad?

2008-02-19 Thread Guy Watkins


} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Steve Fairbairn
} Sent: Tuesday, February 19, 2008 2:45 PM
} To: 'Norman Elton'
} Cc: linux-raid@vger.kernel.org
} Subject: RE: How many drives are bad?
} 
} 
} 
}  The box presents 48 drives, split across 6 SATA controllers.
}  So disks sda-sdh are on one controller, etc. In our
}  configuration, I run a RAID5 MD array for each controller,
}  then run LVM on top of these to form one large VolGroup.
} 
} 
} I might be missing something here, and I realise you'd lose 8 drives to
} redundancy rather than 6, but wouldn't it have been better to have 8
} arrays of 6 drives, each array using a single drive from each
} controller?  That way a single controller failure (assuming no other HD
} failures) wouldn't actually take any array down?  I do realise that 2
} controller failures at the same time would lose everything.

Wow.  Sounds like what I said a few months ago.  I think I also recommended
RAID6.
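
For what it's worth, a minimal sketch of one such 6-disk array (device names
are only an assumption based on the sda-sdh-per-controller layout above, and
I'd use RAID6 rather than RAID5):

  # one array, one member disk per controller (names are illustrative)
  mdadm --create /dev/md0 --level=6 --raid-devices=6 \
      /dev/sda /dev/sdi /dev/sdq /dev/sdy /dev/sdag /dev/sdao
  # repeat for md1..md7 with the next disk on each controller,
  # then put LVM (or a RAID0) on top of the eight arrays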

Guy

} 
} Steve.
} 



RE: Raid over 48 disks

2007-12-18 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Brendan Conoboy
} Sent: Tuesday, December 18, 2007 3:36 PM
} To: Norman Elton
} Cc: linux-raid@vger.kernel.org
} Subject: Re: Raid over 48 disks
} 
} Norman Elton wrote:
}  We're investigating the possibility of running Linux (RHEL) on top of
}  Sun's X4500 Thumper box:
} 
}  http://www.sun.com/servers/x64/x4500/
} 
} Neat - six 8-port SATA controllers!  It'll be worth checking to be sure
} each controller has equal bandwidth.  If some controllers are on slower
} buses than others you may want to consider that and balance the md
} device layout.

Assuming the 6 controllers are equal, I would make three 16-disk RAID6 arrays
using 2 disks from each controller.  That way any 1 controller can fail and
your system will still be running.  6 disks will be used for redundancy.

Or six 8-disk RAID6 arrays, using 1 disk from each controller.  That way any 2
controllers can fail and your system will still be running.  12 disks will
be used for redundancy.  Might be excessive!

Combine them into a RAID0 array.
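
Roughly like this, assuming the three RAID6 arrays already exist as /dev/md0,
/dev/md1 and /dev/md2 (names are placeholders):

  # stripe the three 16-disk RAID6 arrays into one device
  mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2
  # then mkfs /dev/md3, or use it as an LVM physical volume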

Guy



RE: Few questions

2007-12-07 Thread Guy Watkins
man md
man mdadm

I use RAID6.  Happy with it so far, but haven't had a disk failure yet.
RAID5 sucks because if you have 1 failed disk and 1 bad block on any other
disk, you are hosed.

Hope that helps.
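
One thing that helps with the bad-block problem at any RAID level is a
regular scrub, so latent read errors are found while the array still has
redundancy.  A minimal sketch (md0 is just an example array name):

  # read-check every member; unreadable blocks are rewritten from parity,
  # parity mismatches are only counted
  echo check > /sys/block/md0/md/sync_action
  # watch progress
  cat /proc/mdstat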

} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Michael Makuch
} Sent: Friday, December 07, 2007 7:12 PM
} To: linux-raid@vger.kernel.org
} Subject: Few questions
} 
} I realize this is the developers list and though I am a developer I'm
} not a developer
} of linux raid, but I can find no other source of answers to these
} questions:
} 
} I've been using linux software raid (5) for a couple of years, having
} recently upgraded
} to the 2.6.23 kernel (FC7, was previously on FC5). I just noticed that
} my /proc/mdstat shows
} 
} $ cat /proc/mdstat
} Personalities : [raid6] [raid5] [raid4]
} md0 : active raid5 etherd/e0.0[0] etherd/e0.2[9](S) etherd/e0.9[8]
} etherd/e0.8[7] etherd/e0.7[6] etherd/e0.6[5] etherd/e0.5[4]
} etherd/e0.4[3] etherd/e0.3[2] etherd/e0.1[1]
}   3907091968 blocks level 5, 64k chunk, algorithm 2 [9/9] [U]
}   []  resync = 64.5% (315458352/488386496)
} finish=2228.0min speed=1292K/sec
} unused devices: <none>
} 
} and I have no idea where the raid6 came from. The only thing I've found
} on raid6
} is a wikipedia.org page, nothing on
} http://tldp.org/HOWTO/Software-RAID-HOWTO.html
} 
} So my questions are:
} 
} - Is raid6 documented anywhere? If so, where? I'd like to take advantage
} of it if
} it's really there.
} - Why does my array (which I configured as raid5) have personalities of
} raid6 (I can understand why raid4 would be there)?
} - Is this a.o.k for a raid5 array?
} 
} Thanks



RE: very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Richard Scobie
} Sent: Monday, October 08, 2007 3:27 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: very degraded RAID5, or increasing capacity by adding discs
} 
} Janek Kozicki wrote:
} 
}  Is it possible anyhow to create a very degraded raid array - one
}  that consists of 4 drives, but has only TWO?
} 
} No, but you can make a degraded 3 drive array, containing 2 drives and
} then add the next drive to complete it.
} 
} The array can then be grown (man mdadm, GROW section), to add the fourth.
} 
} Regards,
} 
} Richard

I think someone once said you could create a 2 disk degraded RAID5 array
with just 1 disk.  Then add one later.  Then expand as needed.  Someone
should test this.
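
Untested sketch of what that would look like (device names are made up):

  # 2-disk RAID5 with the second member missing -- no redundancy yet
  mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdb1 missing
  # later, add the second disk and let it resync
  mdadm --add /dev/md0 /dev/sdc1
  # later still, grow to 3 devices when a third disk shows up
  mdadm --add /dev/md0 /dev/sdd1
  mdadm --grow /dev/md0 --raid-devices=3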



RE: very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Guy Watkins


} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Janek Kozicki
} Sent: Monday, October 08, 2007 6:47 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: very degraded RAID5, or increasing capacity by adding discs
} 
} Janek Kozicki said: (by the date of Tue, 9 Oct 2007 00:25:50 +0200)
} 
}  Richard Scobie said: (by the date of Tue, 09 Oct 2007 08:26:35
} +1300)
} 
}   No, but you can make a degraded 3 drive array, containing 2 drives and
}   then add the next drive to complete it.
}  
}   The array can then be grown (man mdadm, GROW section), to add the
} fourth.
} 
}  Oh, good. Thanks, I must've been blind that I missed this.
}  This completely solves my problem.
} 
} Uh, actually not :)
} 
} My 1st 500 GB drive is full now. When I buy a 2nd one I want to
} create a 3-disc degraded array using just 2 discs, one of which
} contains unbackupable data.
} 
} steps:
} 1. create degraded two-disc RAID5 on 1 new disc
} 2. copy data from old disc to new one
} 3. rebuild the array with old and new discs (now I have 500 GB on 2 discs)
3. Add the old disk to the new array.  Once that is done, the RAID5 is redundant.

} 4. GROW this array to a degraded 3 discs RAID5 (so I have 1000 GB on 2
} discs)
4. Buy a 3rd disk.
5. Add the new 3rd disk to the array and grow it to a 3-disk RAID5 array.  Once
done, the array is redundant.

Repeat 4 and 5 each time you buy a new disk.

I don't think you can grow to a degraded array.  I think you must add a new
disk first.  But I am not sure.
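
So each time a new disk arrives, steps 4 and 5 would look roughly like this
(device name and count are illustrative):

  mdadm --add /dev/md0 /dev/sde1            # add the new disk as a spare
  mdadm --grow /dev/md0 --raid-devices=4    # reshape onto it
  # the array stays usable during the reshape and is redundant once it finishes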

} ...
} 5. when I buy 3rd drive I either grow the array, or just rebuild and
} wait with growing until I buy a 4th drive.
} 
} Problems at step 4.: 'man mdadm' doesn't tell if it's possible to
} grow an array to a degraded array (non-existent disc). Is it possible?
} 
} 
} PS: the fact that a degraded array will be unsafe for the data is an
} intended motivating factor for buying the next drive ;)
} 
} --
} Janek Kozicki |



RE: RAID6 clean?

2007-08-17 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Neil Brown
} Sent: Monday, June 04, 2007 2:59 AM
} To: Guy Watkins
} Cc: 'linux-raid'
} Subject: Re: RAID6 clean?
} 
} On Monday June 4, [EMAIL PROTECTED] wrote:
}  I have a RAID6 array.  1 drive is bad and now un-plugged because the
} system
}  hangs waiting on the disk.
} 
}  The system won't boot because / is not clean.  I booted a rescue CD
} and
}  managed to start my arrays using --force.  I tried to stop and start the
}  arrays but they still required --force.  I then used 'echo repair >
}  sync_action' to make the arrays clean.  I can now stop and start the
} RAID6
}  array without --force.  I can now boot normally with 1 missing disk.
} 
}  Is there an easier method?  Some sort of boot option?  This was a real
} pain
}  in the @$$.
} 
}  It would be nice if there was an array option to allow an un-clean
} array
}  to be started.  An option that would be set in the md superblock.
} 
} Documentation/md.txt
} 
} search for 'clean' - no luck.
} search for 'dirty'
} 
} |
} |So, to boot with a root filesystem of a dirty degraded raid[56], use
} |
} |   md-mod.start_dirty_degraded=1
} |
} 
} NeilBrown
} 

Neil,
I had this happen again.  The above worked, thanks.

Feature request...  Allow us to set a start_dirty_degraded bit in
the superblock so we can set it and forget it.  This way it can be automatic
and per array.  What do you think?
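
Until then, the workaround is the kernel parameter above, e.g. added to the
kernel line in grub.conf (file names and labels are just an example):

  title Linux (dirty degraded boot)
      root (hd0,0)
      kernel /vmlinuz ro root=/dev/md0 md-mod.start_dirty_degraded=1
      initrd /initrd.img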

Thanks,
Guy



RE: mdadm create to existing raid5

2007-07-12 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Jon Collette
} Sent: Thursday, July 12, 2007 5:29 PM
} To: linux-raid@vger.kernel.org
} Subject: mdadm create to existing raid5
} 
} I wasn't thinking and did an mdadm --create to my existing raid5 instead
} of --assemble.  The syncing process ran and now it's not mountable.  Is
} there anyway to recover from this?
} 
} Thanks

Maybe.  Not really sure.  But don't do anything until someone that really
knows answers!

What I think...
If you did a create with the exact same parameters, the data should not have
changed.  But you can't mount it, so you must have used different parameters.

Only 1 disk was written to during the create, so only that disk was changed.
If you remove that disk and do another create with the original parameters,
putting 'missing' in place of that disk, your array will be back to normal,
but degraded.  Once you confirm this, you can add that disk back.  You must be
able to determine which disk was written to.  I don't know how to do that
unless you have the output from mdadm -D during the create/syncing.

But please don't proceed until someone else confirms what I say or gives
better advice!
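
If it does come to re-creating, the command would look roughly like this --
and every parameter (level, chunk, device count and order) must match the
original array exactly; the names and numbers below are purely illustrative:

  # recreate with the overwritten disk replaced by the word missing
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 \
      /dev/sda1 /dev/sdb1 missing /dev/sdd1
  # check the data read-only before trusting it
  mount -o ro /dev/md0 /mnt
  # only once the data looks good, add the suspect disk back
  mdadm --add /dev/md0 /dev/sdc1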

Guy



RE: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-07-12 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
} Sent: Thursday, July 12, 2007 1:35 PM
} To: [EMAIL PROTECTED]
} Cc: Tejun Heo; [EMAIL PROTECTED]; Stefan Bader; Phillip Susi; device-mapper
} development; [EMAIL PROTECTED]; [EMAIL PROTECTED];
} linux-raid@vger.kernel.org; Jens Axboe; David Chinner; Andreas Dilger
} Subject: Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for
} devices, filesystems, and dm/md.
} 
} On Wed, 11 Jul 2007 18:44:21 EDT, Ric Wheeler said:
}  [EMAIL PROTECTED] wrote:
}   On Tue, 10 Jul 2007 14:39:41 EDT, Ric Wheeler said:
}  
}   All of the high end arrays have non-volatile cache (read, on power
} loss, it is a
}   promise that it will get all of your data out to permanent storage).
} You don't
}   need to ask this kind of array to drain the cache. In fact, it might
} just ignore
}   you if you send it that kind of request ;-)
}  
}   OK, I'll bite - how does the kernel know whether the other end of that
}   fiberchannel cable is attached to a DMX-3 or to some no-name product that
}   may not have the same assurances?  Is there an "I'm a high-end array" bit
}   in the sense data that I'm unaware of?
}  
} 
}  There are ways to query devices (think of hdparm -I in S-ATA/P-ATA
} drives, SCSI
}  has similar queries) to see what kind of device you are talking to. I am
} not
}  sure it is worth the trouble to do any automatic detection/handling of
} this.
} 
}  In this specific case, it is more a case of when you attach a high end
} (or
}  mid-tier) device to a server, you should configure it without barriers
} for its
}  exported LUNs.
} 
} I don't have a problem with the sysadmin *telling* the system the other
} end of
} that fiber cable has characteristics X, Y and Z.  What worried me was
} that it
} looked like conflating device reported writeback cache with device
} actually
} has enough battery/hamster/whatever backup to flush everything on a power
} loss.
} (My back-of-envelope calculation shows for a worst-case of needing a 1ms
} seek
} for each 4K block, a 1G cache can take up to 4 1/2 minutes to sync.
} That's
} a lot of battery..)

Most hardware RAID devices I know of use the battery to preserve the cache
while the power is off.  When the power is restored, the controller flushes
the cache to disk.  If the power failure lasts longer than the batteries, the
cache data is lost, but the batteries last 24+ hours, I believe.

A big EMC array we had had enough battery power to power about 400 disks
while the 16 Gig of cache was flushed.  I think EMC told me the batteries
would last about 20 minutes.  I don't recall if the array was usable during
the 20 minutes.  We never tested a power failure.

Guy



RE: [RFD] BIO_RW_BARRIER - what it means for devices, filesystems, and dm/md.

2007-06-02 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Jens Axboe
} Sent: Saturday, June 02, 2007 10:35 AM
} To: Tejun Heo
} Cc: David Chinner; [EMAIL PROTECTED]; Phillip Susi; Neil Brown; linux-
} [EMAIL PROTECTED]; [EMAIL PROTECTED]; dm-
} [EMAIL PROTECTED]; linux-raid@vger.kernel.org; Stefan Bader; Andreas Dilger
} Subject: Re: [RFD] BIO_RW_BARRIER - what it means for devices,
} filesystems, and dm/md.
} 
} On Sat, Jun 02 2007, Tejun Heo wrote:
}  Hello,
} 
}  Jens Axboe wrote:
}   Would that be very different from issuing barrier and not waiting for
}   its completion?  For ATA and SCSI, we'll have to flush write back
} cache
}   anyway, so I don't see how we can get performance advantage by
}   implementing separate WRITE_ORDERED.  I think zero-length barrier
}   (haven't looked at the code yet, still recovering from jet lag :-)
} can
}   serve as genuine barrier without the extra write tho.
}  
}   As always, it depends :-)
}  
}   If you are doing pure flush barriers, then there's no difference.
} Unless
}   you only guarantee ordering wrt previously submitted requests, in
} which
}   case you can eliminate the post flush.
}  
}   If you are doing ordered tags, then just setting the ordered bit is
}   enough. That is different from the barrier in that we don't need a
} flush
}   or FUA bit set.
} 
}  Hmmm... I'm feeling dense.  Zero-length barrier also requires only one
}  flush to separate requests before and after it (haven't looked at the
}  code yet, will soon).  Can you enlighten me?
} 
} Yeah, that's what the zero-length barrier implementation I posted does.
} Not sure if you have a question beyond that, if so fire away :-)
} 
} --
} Jens Axboe

I must admit I have only read some of the barrier related posts, so this
issue may have been covered.  If so, sorry.

What I have read seems to be related to a single disk.  What if a logical
disk is used (md, LVM, ...)?  If a barrier is issued to a logical disk and
that driver issues barriers to all related devices (logical or physical),
all the devices MUST honor the barrier together.  If 1 device crosses the
barrier before another reaches the barrier, corruption should be assumed.
It seems to me each block device that represents two or more other devices
must do a flush at a barrier so that all devices will cross the barrier at
the same time.

Guy



RE: raid10 on centos 5

2007-05-04 Thread Guy Watkins


} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Ruslan Sivak
} Sent: Friday, May 04, 2007 12:22 PM
} To: linux-raid@vger.kernel.org
} Subject: raid10 on centos 5
} 
} I am trying to set up raid 10 and so far with no luck.  I have 4 drives,
} and Anaconda will not let me do raid 10.  mdadm doesn't have the raid 10
} personality loaded.  When I create the array manually like so:
} 
} 2 drives in /dev/md11 as raid1
} 2 drives in /dev/md12 as raid1
} md11 and md12 in /dev/md10 as raid0
} 
} Everything looks fine from the shell, but anaconda only sees md11 and
} md12.
} 
} The only choice I see is to set up LVM over md11 and md12.  Is this
} really raid10?
} 
} Russ

You are making a RAID1+RAID0 array.
Try making a real RAID10 array with 4 drives.  This way you would only have
1 array with 4 drives.

From the mdadm man page:
Currently, Linux supports LINEAR md devices,  RAID0  (striping),  RAID1
   (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, and FAULTY.

Notice RAID10 is listed, use that.  Man mdadm for more info.
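
For example, something along these lines (device names assumed):

  # one native 4-drive RAID10 (default near-2 layout)
  mdadm --create /dev/md10 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1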

However, I would (and do) use RAID6.  With RAID6 any 2 disks can fail
without data loss.  With RAID1+RAID0, any one disk can fail; a second
failure then has a 1 in 3 chance of vast data loss.

I hope this helps,
Guy



RE: RAID6 question

2007-05-04 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Guy Watkins
} Sent: Saturday, April 28, 2007 8:52 PM
} To: linux-raid@vger.kernel.org
} Subject: RAID6 question
} 
} I read in processor.com that Adaptec has a RAID 6/60 that is patented.
} 
} Does Linux RAID6 have a conflict?
} 
} Thanks,
} Guy
} 
} Adaptec also has announced a new family of Unified Serial (meaning 3Gbps
} SAS/SATA) RAID controllers for PCI Express. Five models include cards with
} four, eight, 12, and 16 internal ports, plus one groundbreaking SKU with
} eight external ports and dual path failover. The new controllers support
} RAIDs 0, 1, 5, 10, 50, 5EE, and the patented RAID 6/60, which Adaptec says
} can survive two simultaneous drive failures.
} investor.adaptec.com/ReleaseDetail.cfm?ReleaseID=233555
} investor.adaptec.com/ReleaseDetail.cfm?ReleaseID=233556
} 
} http://www.processor.com/editorial/article.asp?article=articles%2Fp2912%2F
} 02
} p12%2F02p12.aspguid=searchtype=WordList=bJumpTo=True
} 
} http://tinyurl.com/2kdzcb

No feedback.  Is no news good news?



RE: raid10 on centos 5

2007-05-04 Thread Guy Watkins
} -Original Message-
} From: Ruslan Sivak [mailto:[EMAIL PROTECTED]
} Sent: Friday, May 04, 2007 7:22 PM
} To: Guy Watkins
} Cc: linux-raid@vger.kernel.org
} Subject: Re: raid10 on centos 5
} 
} Guy Watkins wrote:
}  } -Original Message-
}  } From: [EMAIL PROTECTED] [mailto:linux-raid-
}  } [EMAIL PROTECTED] On Behalf Of Ruslan Sivak
}  } Sent: Friday, May 04, 2007 12:22 PM
}  } To: linux-raid@vger.kernel.org
}  } Subject: raid10 on centos 5
}  }
}  } I am trying to set up raid 10 and so far with no luck.  I have 4
} drives,
}  } and Anaconda will not let me do raid 10.  mdadm doesn't have the raid
} 10
}  } personality loaded.  When I create the array manually like so:
}  }
}  } 2 drives in /dev/md11 as raid1
}  } 2 drives in /dev/md12 as raid1
}  } md11 and md12 in /dev/md10 as raid0
}  }
}  } Everything looks fine from the shell, but anaconda only sees md11 and
}  } md12.
}  }
}  } The only choice I see is to set up LVM over md11 and md12.  Is this
}  } really raid10?
}  }
}  } Russ
} 
}  You are making a RAID1+RAID0 array.
}  Try making a real RAID10 array with 4 drives.  This way you would only
} have
}  1 array with 4 drives.
} 
}  From the mdadm man page:
}  Currently, Linux supports LINEAR md devices,  RAID0  (striping),  RAID1
} (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, and FAULTY.
} 
}  Notice RAID10 is listed, use that.  Man mdadm for more info.
} 
}  However, I would (and do) use RAID6.  With RAID6 any 2 disks can fail
}  without data loss.  With RAID1+RAID0, any one disk can fail, a second
}  failure has a 1 in 3 chance of vast data loss.
} 
}  I hope this helps,
}  Guy
} 
}  -
} 
} 
} 
} Guy,
} 
} That's what I've been trying to do.  Unfortunately, my distro, CentOS 5
} (based on RHEL 5, I believe), does not have the RAID10 personality in
} the kernel.  I guess I would have to compile my own kernel and load the
} module through a driver disk.  Would that work?  Are there some
} instructions somewhere I can follow?
} 
} Russ

I don't know how to make a driver disk, and I don't know much about building
modules either.  From what I know, Linux only loads the RAID modules it needs.
My system did not have raid0 or raid10 loaded, but both loaded when I used
modprobe.  I have FC6, upgraded from FC5 using yum, so maybe not 100% FC6.
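
That is, something like:

  grep Personalities /proc/mdstat   # see what the running kernel has loaded
  modprobe raid10                   # pull in the raid10 personality by hand
  lsmod | grep raid10               # confirm it is there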

Anyway, you were not making a RAID10 array.  You were making 2 RAID1 arrays
and then 1 RAID0 array.  That does not need the RAID10 module (AFAIK).

If I recall, there is an issue of nesting arrays like you were doing.  The
problem was related to auto starting them.  But I don't recall any details,
and maybe it has been corrected.

Guy



RAID6 question

2007-04-28 Thread Guy Watkins
I read in processor.com that Adaptec has a RAID 6/60 that is patented.

Does Linux RAID6 have a conflict?

Thanks,
Guy

Adaptec also has announced a new family of Unified Serial (meaning 3Gbps
SAS/SATA) RAID controllers for PCI Express. Five models include cards with
four, eight, 12, and 16 internal ports, plus one groundbreaking SKU with
eight external ports and dual path failover. The new controllers support
RAIDs 0, 1, 5, 10, 50, 5EE, and the patented RAID 6/60, which Adaptec says
can survive two simultaneous drive failures. 
investor.adaptec.com/ReleaseDetail.cfm?ReleaseID=233555 
investor.adaptec.com/ReleaseDetail.cfm?ReleaseID=233556

http://www.processor.com/editorial/article.asp?article=articles%2Fp2912%2F02
p12%2F02p12.aspguid=searchtype=WordList=bJumpTo=True

http://tinyurl.com/2kdzcb




RE: mkinitrd and RAID6 on FC5

2007-04-23 Thread Guy Watkins
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of H. Peter Anvin
} Sent: Monday, April 23, 2007 1:49 PM
} To: Guy Watkins
} Cc: linux-raid@vger.kernel.org
} Subject: Re: mkinitrd and RAID6 on FC5
} 
} Guy Watkins wrote:
}  Is this a REDHAT only problem/bug?  If so, since bugzilla.redhat.com
} gets
}  ignored, where do I complain?
} 
} Yes, this is Redhat only, and as far as I know, it was fixed a long time
} ago.  I suspect you need to make sure you upgrade your entire system,
} especially mkinitrd, not just the kernel.
} 
}   -hpa

I tried to update/upgrade and no updates are available for mkinitrd.  Do you
know what version has the fix?  The bugzilla was never closed, so it seems
it has not been fixed.
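
(For the record, I checked with the usual queries -- assuming plain rpm/yum
here:)

  rpm -q mkinitrd        # what is installed
  yum list mkinitrd      # what the repos offer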

My version:
mkinitrd.i386    5.0.32-2    installed

Thanks,
Guy
