Jasmeet Sidhu wrote:
> Hey guys,
>
> I am attaching my previous email for additional info. Now I am using
> kernel 2.4.1-ac12 and these problems have not gone away.
>
> Anybody else having these problems with an IDE raid5?
>
> The raid5 performance should also be questioned; here are some numbers
> returned by hdparm:
> /dev/hda ...
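
(For reference, the comparison worth making is a single member drive versus
the assembled array. A minimal sketch with hdparm, assuming the array is
/dev/md0 -- the thread never names the md device:

  # -T times cached reads (bus/CPU path), -t times buffered disk reads
  hdparm -T -t /dev/hda    # one member drive on its own
  hdparm -T -t /dev/md0    # the assembled raid5 array

Run each a few times on an otherwise idle box; the single-drive figures
give a baseline for judging the array number.)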

At 08:28 PM 2/14/2001, Alan Cox wrote:
> > Anybody else having these problems with an IDE raid5?
> > The raid5 performance should also be questioned; here are some numbers
> > returned by hdparm.
>
> You will get horribly bad performance off raid5 if you have stripes on
> both hda/hdb or hdc/hdd etc.
>
> > Feb 13 05:23:27 bertha kernel: hdo: dma_intr: status=0x51 { DriveReady
> > SeekComplete Error }
> > Feb 13 05:23:27 bertha kernel: hdo: dma_intr: error=0x84
> > { DriveStatusError BadCRC }
>
> You have inadequate cabling. CRC errors are indications of that. Make
> sure you are using ...
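
(Alan's point: the master and slave on one IDE channel share the cable and
cannot transfer at the same time, so raid5 members should each own a
channel. A sketch of a raidtools-era /etc/raidtab laid out that way;
the device names are illustrative, not taken from the thread:

  # four members, each the master of its own channel (hda/hdc/hde/hdg)
  raiddev /dev/md0
      raid-level              5
      nr-raid-disks           4
      nr-spare-disks          0
      persistent-superblock   1
      parity-algorithm        left-symmetric
      chunk-size              64
      device                  /dev/hda1
      raid-disk               0
      device                  /dev/hdc1
      raid-disk               1
      device                  /dev/hde1
      raid-disk               2
      device                  /dev/hdg1
      raid-disk               3
)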

Jasmeet Sidhu wrote:
> If I am reading this correctly, then by striping on both hda/hdb and
> hdc/hdd you mean that I have two drives per IDE channel. In other words,
> you think I have a Master and a Slave type ...
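
(One way to answer the master/slave question directly on a 2.4 kernel is
to look under /proc/ide; a sketch, assuming the usual layout where each
interface shows up as ide0, ide1, and so on:

  # each ideN directory is one channel; two drive entries under the same
  # ideN means a master and a slave sharing that cable
  ls /proc/ide/
  ls /proc/ide/ide0/
  cat /proc/ide/hda/model   # identify the drive at a given position

The hdX names alone also tell the story: hda/hdc/hde/hdg... are channel
masters, and hdb/hdd/hdf/hdh... are the corresponding slaves.)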

Alan Cox wrote:
> I've not changed anything related to DMA handling specifically. The
> current -ac does have a fix for a couple of cases where an IDE reset on
> the Promise could hang the box dead. That may be the problem.

On Thu, 15 Feb 2001, Jasmeet Sidhu wrote:
> I tried the new patches (2.4.1-ac13) and it seemed very stable. After
> moving about 50GB of data to the raid5, the system crashed. Here is the
> syslog... (the system had been up for about 20 hours)
>
> Feb 15 01:54:01 bertha kernel: hdg: timeout waiting for DMA
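
(A practical note for crashes like this one: if the box dies hard, the
last kernel messages may never reach the disk. Logging to a second machine
gets around that. A minimal sketch, assuming a reachable host named
"loghost" and a stock syslogd:

  # on bertha, in /etc/syslog.conf -- mirror kernel messages off-box
  kern.*          @loghost
  # then make syslogd reread its config
  kill -HUP `cat /var/run/syslogd.pid`

The receiving machine's syslogd has to be started with -r to accept
remote messages.)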

Alan Cox wrote:
> > I tried the new patches (2.4.1-ac13) and it seemed very stable. After
> > moving about 50GB of data to the raid5, the system crashed.
>
> Ok, so better but not perfect.

> Feb 15 01:54:01 bertha kernel: hdg: timeout waiting for DMA

You have junk for cables, or they are not shielded correctly from
crosstalk. But I do not think this is the case.
Go check your power supply for stability and load.
Then do a ripple-noise test to make sure that, under load, it does not
cause the clock on the drives to fail.
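
(A way to test the cabling/signal theory from software before touching the
hardware: drop the interface to a slower UDMA mode and see whether the
BadCRC and DMA-timeout messages stop. A sketch, using hdg as the example
drive:

  hdparm -i /dev/hdg    # the starred entry under "UDMA modes" is current
  hdparm -X66 /dev/hdg  # force UDMA mode 2 (-X takes 64 + mode number)
  hdparm -d0 /dev/hdg   # or disable DMA on the drive entirely

If the errors vanish at the lower mode, bad or unshielded cables are the
usual suspects, exactly as suggested above.)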