Ah ha
# ls -l /sys/block/*/holders/*
lrwxrwxrwx 1 root root 0 Feb 26 06:28 /sys/block/sdb/holders/dm-0 -> ../../../block/dm-0
lrwxrwxrwx 1 root root 0 Feb 26 06:28 /sys/block/sdc/holders/dm-0 -> ../../../block/dm-0
which I am assuming is dmraid? I did a quick check, and
# dmraid -r
No RAID
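Since dmraid reports nothing, device-mapper itself can say what dm-0 is. A quick sketch, assuming the dmsetup userspace tool is installed (device names taken from the listing above):

```shell
# Ask device-mapper what dm-0 is (requires the dmsetup tool):
dmsetup info /dev/dm-0      # name, UUID, open/reference counts
dmsetup table /dev/dm-0     # target type: linear, striped, mirror, crypt, ...
# The holders symlinks can also be resolved directly:
readlink /sys/block/sdb/holders/dm-0
```

If the table shows a linear or striped target spanning sdb and sdc, something (LVM, dm-crypt, an old dmraid mapping) has claimed both disks even though the BIOS utilities show no arrays.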
Al Boldi [EMAIL PROTECTED] wrote:
Interesting link. They seem to point out that SMART does not necessarily warn of
pending failure. This is probably worse than not having SMART at all, as it
gives you the illusion of safety.
If SMART gives you the illusion of safety, you didn't understand
Rebooted and checked that there were no arrays defined. I have 2 cards in the
server - one is a VIA based card, with a single Maxtor, the other is a
Promise based card, with 2 Maxtors by Seagate on it. I've been into the config
utilities on both cards, and none of them have arrays defined on
On Mon, Feb 26, 2007 at 04:33:37PM +1100, Neil Brown wrote:
Do we want a path in the other direction to handle write errors? The
file system could say "Don't worry too much if this block cannot be
written, just return an error and I will write it somewhere else"?
This might allow md not to fail
the new location. I believe this should be always true, so presumably
with all modern disk drives a write error should mean something very
serious has happened.
Not quite that simple.
If you write a block aligned size the same size as the physical media
block size maybe this is true. If you
On Mon, 2007-02-26 at 08:25 -0500, Theodore Tso wrote:
Somewhat off-topic, but my one big regret with how the dm vs. evms
competition settled out was that evms had the ability to perform block
device snapshots using a non-LVM volume as the base --- and that EVMS
allowed a single drive to be
Alan wrote:
the new location. I believe this should be always true, so presumably
with all modern disk drives a write error should mean something very
serious has happened.
Not quite that simple.
I think that write errors are normally quite serious, but there are exceptions
which might
I think that this is mostly true, but we also need to balance this against the
need for higher levels to get a timely response. In a really large IO, a naive
retry of a very large write could lead to a non-responsive system for a very
large time...
And losing the I/O could result in a
On 2/25/07, Richard Scobie [EMAIL PROTECTED] wrote:
Colin Simpson wrote:
They therefore do not have the check option in the kernel. Is there
anything else I can do? Would forcing a resync achieve the same result
(or is that downright dangerous as the array is not considered
consistent for a
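On kernels whose md driver does support scrubbing, the check is requested through md's sysfs interface. A sketch, assuming the array is md0 (adjust to your setup):

```shell
# Request a read-only scrub on kernels that support md's sync_action:
echo check > /sys/block/md0/md/sync_action
# Watch progress, then inspect the mismatch count when it finishes:
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```

Unlike a forced resync, a check only reads and compares; it does not rewrite parity unless told to repair.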
Mario 'BitKoenig' Holbe wrote:
Al Boldi [EMAIL PROTECTED] wrote:
Interesting link. They seem to point out that SMART does not necessarily
warn of pending failure. This is probably worse than not having SMART
at all, as it gives you the illusion of safety.
If SMART gives you the illusion of
Hello,
I have been trying to get a software raid configuration working for a
few weeks with little success.
I currently have a 3ware 7506-4 card with 4 drives. The RAID 5
performance of this card is poor and it was recommended to me to try
using the card as a JBOD controller and running
On Mon, 26 Feb 2007 at 5:26pm, Colin Simpson wrote
SATA isn't supported on RH 4's SMART.
Not true (for many SATA chipsets at least). Just pass '-d ata' to
smartctl.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
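Joshua's suggestion above, sketched as a smartd configuration entry (the device name and mail recipient are examples, not from the thread):

```shell
# /etc/smartd.conf -- monitor a SATA disk via libata on RHEL 4.
# '-d ata' forces the ATA passthrough, the same flag used for
# one-off queries with 'smartctl -d ata -a /dev/sda'.
/dev/sda -d ata -a -m root
```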
On 2/26/07, Colin Simpson [EMAIL PROTECTED] wrote:
If I say,
dd if=/dev/sda2 of=/dev/null
where /dev/sda2 is a component of an active md device.
Will the RAID subsystem get upset that someone else is fiddling with the
disk (even in just a read only way)? And will a read error on this dd
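For what it's worth, a plain read of a component does not change md state; errors hit dd, not the md driver. One way to confirm there are no side effects is to snapshot array status around the read (device name from the question above):

```shell
# Reading a component is passive, but does compete for disk bandwidth.
cat /proc/mdstat
dd if=/dev/sda2 of=/dev/null bs=1M
cat /proc/mdstat    # array state should be unchanged
```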
On Monday February 26, [EMAIL PROTECTED] wrote:
On 2/26/07, Colin Simpson [EMAIL PROTECTED] wrote:
If I say,
dd if=/dev/sda2 of=/dev/null
where /dev/sda2 is a component of an active md device.
Will the RAID subsystem get upset that someone else is fiddling with the
disk (even in
Colin Simpson wrote:
SATA isn't supported on RH 4's SMART.
False. Works just fine.
Jeff
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Alan wrote:
I think that this is mostly true, but we also need to balance this against the
need for higher levels to get a timely response. In a really large IO, a naive
retry of a very large write could lead to a non-responsive system for a very
large time...
And losing the I/O could
Theodore Tso wrote:
In any case, the reason why I bring this up is that it would be really
nice if there was a way with a single laptop drive to be able to do
snapshots and background fsck's without having to use initrd's with
device mapper.
This is a major part of why I've been trying to
Theodore Tso wrote:
Can someone with knowledge of current disk drive behavior confirm that
for all drives that support bad block sparing, if an attempt to write
to a particular spot on disk results in an error due to bad media at
that spot, the disk drive will automatically rewrite the sector to
Jeff Garzik wrote:
Theodore Tso wrote:
Can someone with knowledge of current disk drive behavior confirm that
for all drives that support bad block sparing, if an attempt to write
to a particular spot on disk results in an error due to bad media at
that spot, the disk drive will automatically
One interesting counterexample is a write smaller than a full page - say 512
bytes out of 4k.
If we need to do a read-modify-write and it just so happens that 1 of the 7
sectors we need to read is flaky, will this look like a write failure?
The current core kernel code can't handle
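The arithmetic behind that counterexample, with illustrative numbers (4 KiB page cache page, 512 B sectors):

```shell
page=4096     # page cache page size in bytes
sector=512    # logical sector size in bytes
# A 512 B write dirties 1 sector of the page; the read-modify-write
# must first read back the remaining sectors of that page:
echo $(( page / sector - 1 ))    # -> 7
```

If any one of those 7 read-back sectors is flaky, an operation the caller issued as a write fails on a read, which is the ambiguity being raised.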
On Monday, February 26, 2007 9:42 AM, Ric Wheeler wrote:
Which brings us back to a recent discussion at the file system workshop on
being more repair oriented in file system design so we can survive situations
like this a bit more reliably ;-)
On the second day of the workshop, there
David Rees wrote:
On 2/25/07, Richard Scobie [EMAIL PROTECTED] wrote:
Colin Simpson wrote:
They therefore do not have the check option in the kernel. Is there
anything else I can do? Would forcing a resync achieve the same result
(or is that down right dangerous as the array is not