On Monday February 18, [EMAIL PROTECTED] wrote:
On Mon, Feb 18, 2008 at 03:07:44PM +1100, Neil Brown wrote:
On Sunday February 17, [EMAIL PROTECTED] wrote:
Hi
It seems like a good way to avoid the performance problems of RAID-5/RAID-6.
I think there are better ways.
On Feb 17, 2008 10:26 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
Well, I was reading that LVM2 had a 20%-50% performance penalty,
Huh? Make a benchmark. Do you really think that anyone would be using it if there was
On 17:40, Mark Hahn wrote:
Question to other people here - what is the maximum partition size that ext3 can handle? Am I correct that it is 4 TB?
8 TB. People who want to push this are probably using ext4 already.
ext3 has supported up to 16 TB for quite some time. It works fine for me.
Beolach said: (by the date of Mon, 18 Feb 2008 05:38:15 -0700)
thanks. 16 makes sense (2^32 * 4k blocks).
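The 16 TB figure follows directly from ext3's block addressing: block numbers are 32-bit, so with the common 4 KiB block size the filesystem tops out at 2^32 x 4 KiB. A quick check of the arithmetic:

```python
# ext3 addresses blocks with 32-bit block numbers, so the maximum
# filesystem size is (addressable blocks) x (block size).
max_blocks = 2 ** 32           # 32-bit block numbers
block_size = 4 * 1024          # 4 KiB blocks (the common default)
max_bytes = max_blocks * block_size

print(max_bytes // 2 ** 40, "TiB")  # -> 16 TiB
```

With smaller block sizes the limit shrinks proportionally, which is where the older 4 TB and 8 TB figures come from.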
Looks like your replacement disk is no good, the SATA port is bad, or there is some other issue. I am not sure what "SDB FIS" means, but as long as you keep getting that error, don't expect the drive to work correctly. I had a drive that did a similar thing (a DOA Raptor), and after I got the replacement it worked.
Hi All,
I've got a degraded RAID5 to which I'm trying to add the replacement disk. Trouble is, every time the recovery starts, it flies along at 70 MB/s or so. Then after doing about 1%, the speed starts dropping rapidly, until eventually a device is marked failed.
When I look in dmesg, I get the
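One way to keep an eye on a rebuild like this before it collapses is to sample the recovery line that md prints in /proc/mdstat. A minimal sketch (the helper name and the sample line are illustrative, not from the thread) that pulls out the speed field:

```python
import re

def parse_recovery_speed(mdstat_line):
    """Extract the recovery speed in K/sec from a /proc/mdstat
    recovery status line, or None if no speed field is present."""
    m = re.search(r"speed=(\d+)K/sec", mdstat_line)
    return int(m.group(1)) if m else None

# A line in the format /proc/mdstat typically uses during recovery:
line = ("[>....................]  recovery =  1.0% (9767040/976762584) "
        "finish=230.1min speed=70000K/sec")
print(parse_recovery_speed(line))  # -> 70000
```

Logging that value every few seconds would show whether the slowdown is gradual (e.g. the drive retrying bad sectors) or a sudden cliff right before md kicks the device out.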
Steve Fairbairn wrote:
On Mon, Feb 18, 2008 at 09:51:15PM +1100, Neil Brown wrote:
On Sun, 17 Feb 2008 07:45:26 -0700, Conway S. Smith
[EMAIL PROTECTED] said:
[ ... ]
Which part isn't wise? Starting w/ a few drives w/ the intention of growing; or ending w/ a large array (IOW, are 14 drives more than I should put in 1 array and expect to be safe
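One way to reason about "how many drives is too many" for a single RAID-5: while a degraded array rebuilds, it has no redundancy, so a second failure among the remaining members loses data. A rough sketch, assuming independent failures with a fixed per-drive failure probability over the rebuild window (both the model and the numbers are illustrative, not from the thread):

```python
def rebuild_survival(n_drives, p_fail):
    """Probability that a degraded RAID-5 of n_drives survives a
    rebuild, assuming each of the remaining n_drives - 1 members
    independently fails with probability p_fail during the window."""
    return (1 - p_fail) ** (n_drives - 1)

# Illustrative: 1% chance any given drive dies during the rebuild.
for n in (4, 8, 14):
    print(n, "drives:", round(rebuild_survival(n, 0.01), 4))
```

The survival probability falls as the member count grows, which is the usual argument for RAID-6 (or several smaller arrays) once the drive count gets into the low teens.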