On Sun, 24 Feb 2008, Janek Kozicki wrote:
Justin Piszcz said: (by the date of Sun, 24 Feb 2008 04:26:39 -0500 (EST))
Kernel 2.6.24.2. I've seen it on different occasions; this last time
though it may have been due to a power outage that lasted 2 hours and
obviously the UPS did
On Mon, 25 Feb 2008, Dexter Filmore wrote:
Currently my array consists of four Samsung Spinpoint SATA drives; I'm about
to enlarge it to 6 drives.
As of now they sit on a Sil3114 controller via PCI, hence there's a
bottleneck: I can't squeeze more than 15-30 MB/s write speed (rather 15 today
as
On Mon, 25 Feb 2008, Dexter Filmore wrote:
On Monday 25 February 2008 15:02:31 Justin Piszcz wrote:
On Mon, 25 Feb 2008, Dexter Filmore wrote:
Currently my array consists of four Samsung Spinpoint SATA drives; I'm
about to enlarge it to 6 drives.
As of now they sit on a Sil3114 controller via
On Mon, 25 Feb 2008, Dexter Filmore wrote:
On Monday 25 February 2008 19:50:52 Justin Piszcz wrote:
On Mon, 25 Feb 2008, Dexter Filmore wrote:
On Monday 25 February 2008 15:02:31 Justin Piszcz wrote:
On Mon, 25 Feb 2008, Dexter Filmore wrote:
Currently my array consists of four Samsung
On Sat, 23 Feb 2008, Carlos Carvalho wrote:
Justin Piszcz ([EMAIL PROTECTED]) wrote on 23 February 2008 10:44:
On Sat, 23 Feb 2008, Justin Piszcz wrote:
On Sat, 23 Feb 2008, Michael Tokarev wrote:
Justin Piszcz wrote:
Should I be worried?
Fri Feb 22 20:00:05 EST 2008: Executing
How many drives actually failed?
Failed Devices : 1
On Tue, 19 Feb 2008, Norman Elton wrote:
So I had my first failure today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of mdadm --detail. It
appears that two drives are listed as removed, but the array is
or failed.
Any ideas?
Thanks,
Norman
On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:
How many drives actually failed?
Failed Devices : 1
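For reference, a hedged sketch of the commands people usually reach for in this
situation; /dev/md4 and the member device names are placeholders, not details
from the thread:
# mdadm --detail /dev/md4
# mdadm /dev/md4 --re-add /dev/sdam
# mdadm --stop /dev/md4 && mdadm --assemble --force /dev/md4 /dev/sdam /dev/sdan ...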
On Tue, 19 Feb 2008, Norman Elton wrote:
So I had my first failure today, when I got a report that one drive
(/dev/sdam) failed. I've attached
48 drives inside.
/dev/sd[a-z] are all there as well, just in other RAID sets. Once you
get to /dev/sdz, it starts up at /dev/sdaa, sdab, etc.
I'd be curious if what I'm experiencing is a bug. What should I try to
restore the array?
Norman
On 2/19/08, Justin Piszcz [EMAIL PROTECTED] wrote:
Neil
Looks like your replacement disk is no good, the SATA port is bad, or there is
some other issue. I am not sure what SDB FIS means, but as long as you keep
getting that error, don't expect the drive to work correctly. I had a drive that
did a similar thing (DOA Raptor) and after I got the replacement it worked
When you create the array it's --chunk or -c; I found 256 KiB to 1024 KiB
to be optimal.
Justin.
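For context, a minimal creation example showing where --chunk fits; the level,
device count and member names here are assumptions for illustration only:
# mdadm --create /dev/md3 --level=5 --raid-devices=4 --chunk=256 /dev/sd[b-e]1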
On Sat, 9 Feb 2008, Andreas-Sokov wrote:
Hi linux-raid.
RAID5: how do I change the chunk size from 64 to 128 or 256?
Is it possible?
Has somebody done this?
--
Best regards,
Andreas-Sokov
On Fri, 8 Feb 2008, Iustin Pop wrote:
On Fri, Feb 08, 2008 at 08:54:55AM -0500, Justin Piszcz wrote:
The promise tx4 pci works great and supports sata/300+ncq/etc $60-$70.
Wait, I have used the TX4 PCI up until ~2.6.22 and AFAIK it didn't support
NCQ. Are you sure that the current driver supports
On Fri, 8 Feb 2008, Iustin Pop wrote:
On Fri, Feb 08, 2008 at 02:24:15PM -0500, Justin Piszcz wrote:
On Fri, 8 Feb 2008, Iustin Pop wrote:
On Fri, Feb 08, 2008 at 08:54:55AM -0500, Justin Piszcz wrote:
The promise tx4 pci works great and supports sata/300+ncq/etc $60-$70.
Wait, I have
On Fri, 8 Feb 2008, Bill Davidsen wrote:
Steve Fairbairn wrote:
Can anyone see any issues with what I'm trying to do?
No.
Are there any known issues with IT8212 cards (They worked as straight
disks on linux fine)?
No idea, don't have that card.
Is anyone using an array with disks
On Tue, 5 Feb 2008, Keld Jørn Simonsen wrote:
On Thu, Jan 31, 2008 at 02:55:07AM +0100, Keld Jørn Simonsen wrote:
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote:
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07 +0100)
All the raid10's will have double
On Tue, 5 Feb 2008, Keld Jørn Simonsen wrote:
Hi
I am looking at revising our howto. I see a number of places where a
chunk size of 32 kiB is recommended, and even recommendations on
maybe using sizes of 4 kiB.
My own take on that is that this really hurts performance.
Normal disks have a
On Tue, 5 Feb 2008, Keld Jørn Simonsen wrote:
On Tue, Feb 05, 2008 at 11:54:27AM -0500, Justin Piszcz wrote:
On Tue, 5 Feb 2008, Keld Jørn Simonsen wrote:
On Thu, Jan 31, 2008 at 02:55:07AM +0100, Keld Jørn Simonsen wrote:
On Wed, Jan 30, 2008 at 11:36:39PM +0100, Janek Kozicki wrote
On Tue, 5 Feb 2008, Keld Jørn Simonsen wrote:
On Tue, Feb 05, 2008 at 05:28:27PM -0500, Justin Piszcz wrote:
Could you give some figures?
I remember testing with bonnie++ and raid10 was about half the speed
(200-265 MiB/s) of RAID5 (400-420 MiB/s) for sequential output, but input
On Mon, 4 Feb 2008, Michael Tokarev wrote:
Moshe Yudkowsky wrote:
[]
If I'm reading the man pages, Wikis, READMEs and mailing lists correctly
-- not necessarily the case -- the ext3 file system uses the equivalent
of data=journal as a default.
ext3 defaults to data=ordered, not
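To make the distinction concrete: ordered mode is what you get by default, and
full data journalling has to be requested explicitly at mount time, e.g. (the
device and mount point below are placeholders):
# mount -o data=journal /dev/md2 /mnt/data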
On Mon, 4 Feb 2008, Michael Tokarev wrote:
Eric Sandeen wrote:
[]
http://oss.sgi.com/projects/xfs/faq.html#nulls
and note that recent fixes have been made in this area (also noted in
the faq)
Also - the above all assumes that when a drive says it's written/flushed
data, that it truly has.
On Fri, 18 Jan 2008, Bill Davidsen wrote:
Justin Piszcz wrote:
On Thu, 17 Jan 2008, Al Boldi wrote:
Justin Piszcz wrote:
On Wed, 16 Jan 2008, Al Boldi wrote:
Also, can you retest using dd with different block-sizes?
I can do this, moment..
I know about oflag=direct but I choose
On Fri, 18 Jan 2008, Greg Cormier wrote:
Also, don't use ext*, XFS can be up to 2-3x faster (in many of the
benchmarks).
I'm going to swap file systems and give it a shot right now! :)
How is the stability of XFS? I heard recovery is easier with ext2/3 due to
more people using it, more tools
On Fri, 18 Jan 2008, Greg Cormier wrote:
Justin, thanks for the script. Here's my results. I ran it a few times
with different tests, hence the small number of results you see here;
I slowly trimmed out the obviously not-ideal sizes.
Nice, we all love benchmarks!! :)
System
---
Athlon64
For these benchmarks I timed how long it takes to extract a standard 4.4
GiB DVD:
Settings: Software RAID 5 with the following settings (until I change
those too):
Base setup:
blockdev --setra 65536 /dev/md3
echo 16384 > /sys/block/md3/md/stripe_cache_size
echo Disabling NCQ on all disks...
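The "Disabling NCQ" step above typically amounts to forcing the queue depth to 1
on each member disk; a minimal sketch, with the disk list assumed:
for d in sda sdb sdc sdd; do echo 1 > /sys/block/$d/device/queue_depth; done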
On Wed, 16 Jan 2008, Justin Piszcz wrote:
For these benchmarks I timed how long it takes to extract a standard 4.4 GiB
DVD:
Settings: Software RAID 5 with the following settings (until I change those
too):
http://home.comcast.net/~jpiszcz/sunit-swidth/newresults.html
Any idea why
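For readers unfamiliar with the sunit/swidth terms in that link, this is the
sort of aligned XFS creation being compared; the 256 KiB chunk and 3 data disks
here are assumptions, not the thread's actual values:
# mkfs.xfs -f -d su=256k,sw=3 /dev/md3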
On Wed, 16 Jan 2008, Al Boldi wrote:
Justin Piszcz wrote:
For these benchmarks I timed how long it takes to extract a standard 4.4
GiB DVD:
Settings: Software RAID 5 with the following settings (until I change
those too):
Base setup:
blockdev --setra 65536 /dev/md3
echo 16384 > /sys/block
On Wed, 16 Jan 2008, Greg Cormier wrote:
What sort of tools are you using to get these benchmarks, and can I
used them for ext3?
Very interested in running this on my server.
Thanks,
Greg
You can use whatever suits you, such as untarring a kernel source tree, copying
files, untarring backups, etc.
On Thu, 17 Jan 2008, Al Boldi wrote:
Justin Piszcz wrote:
On Wed, 16 Jan 2008, Al Boldi wrote:
Also, can you retest using dd with different block-sizes?
I can do this, moment..
I know about oflag=direct but I choose to use dd with sync and measure the
total time it takes.
/usr/bin/time
On Thu, 17 Jan 2008, Al Boldi wrote:
Justin Piszcz wrote:
On Wed, 16 Jan 2008, Al Boldi wrote:
Also, can you retest using dd with different block-sizes?
I can do this, moment..
I know about oflag=direct but I choose to use dd with sync and measure the
total time it takes.
/usr/bin/time
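A sketch of the timing approach being described: run the dd and a final sync
under one timer so buffered writes are counted; the file name and size are
placeholders:
/usr/bin/time sh -c 'dd if=/dev/zero of=bigfile bs=1M count=10240; sync'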
p34:~# mdadm /dev/md3 --zero-superblock
p34:~# mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2
UUID=f463057c:9a696419:3bcb794a:7aaa12b2
ARRAY /dev/md1 level=raid1 num-devices=2
UUID=98e4948c:c6685f82:e082fd95:e7f45529
ARRAY /dev/md2 level=raid1 num-devices=2
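The usual follow-up to scan output like that, sketched here with an assumed
config file path (it differs between distributions), is to append the ARRAY
lines to mdadm.conf:
p34:~# mdadm --examine --scan >> /etc/mdadm/mdadm.conf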
On Wed, 16 Jan 2008, Justin Piszcz wrote:
p34:~# mdadm /dev/md3 --zero-superblock
p34:~# mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2
UUID=f463057c:9a696419:3bcb794a:7aaa12b2
ARRAY /dev/md1 level=raid1 num-devices=2
UUID=98e4948c:c6685f82:e082fd95:e7f45529
ARRAY /dev/md2
On Fri, 4 Jan 2008, Changliang Chen wrote:
Hi Justin,
From your report, it looks like the p34-default's behavior is
better; which item makes you consider that the p34-dchinner looks nice?
--
Best Regards
The re-write and sequential input and output are faster for dchinner.
Justin.
On Mon, 31 Dec 2007, Greg Cormier wrote:
So I've been slowly expanding my knowledge of mdadm/linux raid.
I've got a 1 terabyte array which stores mostly large media files, and
from my reading, increasing the stripe size should really help my
performance
Is there any way to do this to an
When setting the scheduler, is it possible to set it on /dev/mdX or is it
only possible to set it on the underlying devices which compose the sw
raid device? /dev/sda /dev/sdb and does that really affect how the data is
accessed by specifying the underlying device and not mdX?
Justin.
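As far as I understand, the elevator only takes effect on the component disks,
not on /dev/mdX itself, so the per-device form looks like this (deadline chosen
purely as an example):
# echo deadline > /sys/block/sda/queue/scheduler
# echo deadline > /sys/block/sdb/queue/scheduler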
On Sat, 29 Dec 2007, dean gaudet wrote:
On Tue, 25 Dec 2007, Bill Davidsen wrote:
The issue I'm thinking about is hardware sector size, which on modern drives
may be larger than 512b and therefore entail a read-alter-rewrite (RAR) cycle
when writing a 512b block.
i'm not sure any shipping
On Sat, 29 Dec 2007, dean gaudet wrote:
On Sat, 29 Dec 2007, Dan Williams wrote:
On Dec 29, 2007 9:48 AM, dean gaudet [EMAIL PROTECTED] wrote:
hmm bummer, i'm doing another test (rsync 3.5M inodes from another box) on
the same 64k chunk array and had raised the stripe_cache_size to 1024...
On Thu, 27 Dec 2007, dean gaudet wrote:
hey neil -- remember that raid5 hang which me and only one or two others
ever experienced and which was hard to reproduce? we were debugging it
well over a year ago (that box has 400+ day uptime now so at least that
long ago :) the workaround was to
On Thu, 20 Dec 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
On Wed, 19 Dec 2007, Bill Davidsen wrote:
I'm going to try another approach, I'll describe it when I get results (or
not).
http://home.comcast.net/~jpiszcz/align_vs_noalign/
Hardly any difference whatsoever, only
The (up to) 30% percent figure is mentioned here:
http://insights.oetiker.ch/linux/raidoptimization.html
On http://forums.storagereview.net/index.php?showtopic=25786:
This user writes about the problem:
XP, and virtually every O/S and partitioning software of XP's day, by default
places the
On Wed, 19 Dec 2007, Bill Davidsen wrote:
Thiemo Nagel wrote:
Performance of the raw device is fair:
# dd if=/dev/md2 of=/dev/zero bs=128k count=64k
8589934592 bytes (8.6 GB) copied, 15.6071 seconds, 550 MB/s
Somewhat less through ext3 (created with -E stride=64):
# dd if=largetestfile
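For context, that stride option is set when the filesystem is created; with
4 KiB blocks a stride of 64 corresponds to a 256 KiB chunk (my arithmetic, not
a figure from the thread):
# mkfs.ext3 -b 4096 -E stride=64 /dev/md2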
On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
On Wed, 19 Dec 2007, Justin Piszcz wrote:
--
Now to my setup / question:
# fdisk -l /dev/sdc
Disk /dev/sdc: 150.0 GB, 150039945216 bytes
255 heads, 63 sectors/track, 18241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk
On Wed, 19 Dec 2007, Jon Nelson wrote:
On 12/19/07, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
From that setup it seems simple, scrap the partition table and use the
disk device for raid. This is what we do for all data storage disks (hw raid
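A sketch of that whole-disk approach, with made-up member devices; the point is
simply that the raw disks rather than partitions are handed to mdadm:
# mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde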
On Wed, 19 Dec 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
On Wed, 19 Dec 2007, Justin Piszcz wrote:
--
Now to my setup / question:
# fdisk -l /dev/sdc
Disk /dev/sdc: 150.0 GB, 150039945216 bytes
255 heads, 63 sectors/track
On Wed, 19 Dec 2007, Jon Sabo wrote:
So I was trying to copy over some Indiana Jones wav files and it
wasn't going my way. I noticed that my software raid device showed:
/dev/md1 on / type ext3 (rw,errors=remount-ro)
Is this saying that it was remounted read-only because it found a
On Wed, 19 Dec 2007, Jon Sabo wrote:
I found the problem. The power was unplugged from the drive. The
sata power connectors aren't very good at securing the connector. I
reattached the power connector to the sata drive and booted up. This
is what it looks like now:
[EMAIL
On Wed, 19 Dec 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
On Wed, 19 Dec 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
On Wed, 19 Dec 2007, Justin Piszcz wrote:
--
Now to my setup / question:
# fdisk -l /dev/sdc
Disk
On Wed, 19 Dec 2007, Bill Davidsen wrote:
I'm going to try another approach, I'll describe it when I get results (or
not).
http://home.comcast.net/~jpiszcz/align_vs_noalign/
Hardly any difference whatsoever; only on the per-char read/write
is it any faster...?
Average of 3 runs
On Wed, 19 Dec 2007, Robin Hill wrote:
On Wed Dec 19, 2007 at 09:50:16AM -0500, Justin Piszcz wrote:
The (up to) 30% percent figure is mentioned here:
http://insights.oetiker.ch/linux/raidoptimization.html
That looks to be referring to partitioning a RAID device - this'll only
apply
On Tue, 18 Dec 2007, Norman Elton wrote:
We're investigating the possibility of running Linux (RHEL) on top of Sun's
X4500 Thumper box:
http://www.sun.com/servers/x64/x4500/
Basically, it's a server with 48 SATA hard drives. No hardware RAID. It's
designed for Sun's ZFS filesystem.
On Tue, 18 Dec 2007, Thiemo Nagel wrote:
Dear Norman,
So... we're curious how Linux will handle such a beast. Has anyone run MD
software RAID over so many disks? Then piled LVM/ext3 on top of that? Any
suggestions?
Are we crazy to think this is even possible?
I'm running 22x 500GB
On Tue, 18 Dec 2007, Thiemo Nagel wrote:
Performance of the raw device is fair:
# dd if=/dev/md2 of=/dev/zero bs=128k count=64k
8589934592 bytes (8.6 GB) copied, 15.6071 seconds, 550 MB/s
Somewhat less through ext3 (created with -E stride=64):
# dd if=largetestfile of=/dev/zero bs=128k
On Tue, 18 Dec 2007, Jon Nelson wrote:
On 12/18/07, Thiemo Nagel [EMAIL PROTECTED] wrote:
Performance of the raw device is fair:
# dd if=/dev/md2 of=/dev/zero bs=128k count=64k
8589934592 bytes (8.6 GB) copied, 15.6071 seconds, 550 MB/s
Somewhat less through ext3 (created with -E
On Tue, 18 Dec 2007, Guy Watkins wrote:
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Brendan Conoboy
} Sent: Tuesday, December 18, 2007 3:36 PM
} To: Norman Elton
} Cc: linux-raid@vger.kernel.org
} Subject: Re: Raid over 48 disks
On Tue, 18 Dec 2007, Justin Piszcz wrote:
On Tue, 18 Dec 2007, Guy Watkins wrote:
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Brendan Conoboy
} Sent: Tuesday, December 18, 2007 3:36 PM
} To: Norman Elton
} Cc: linux-raid
On Thu, 13 Dec 2007, Louis-David Mitterrand wrote:
Hi,
after reading some interesting suggestions on kernel tuning at:
http://hep.kbfi.ee/index.php/IT/KernelTuning
I am wondering whether 'deadline' is indeed the best IO scheduler (vs.
anticipatory and cfq) for a soft raid5/6
On Thu, 6 Dec 2007, David Rees wrote:
On Dec 6, 2007 1:06 AM, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 5 Dec 2007, Jon Nelson wrote:
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O (as measured by dstat
On Thu, 6 Dec 2007, Andrew Morton wrote:
On Sat, 1 Dec 2007 06:26:08 -0500 (EST)
Justin Piszcz [EMAIL PROTECTED] wrote:
I am putting a new machine together and I have dual raptor raid 1 for the
root, which works just fine under all stress tests.
Then I have the WD 750 GiB drive (not RE2
On Sat, 1 Dec 2007, Justin Piszcz wrote:
On Sat, 1 Dec 2007, Janek Kozicki wrote:
Justin Piszcz said: (by the date of Sat, 1 Dec 2007 07:23:41 -0500
(EST))
dd if=/dev/zero of=/dev/sdc
The purpose is that with any new disk it's good to write to all the blocks and
let the drive do all
On Sun, 2 Dec 2007, Oliver Martin wrote:
[Please CC me on replies as I'm not subscribed]
Hello!
I've been experimenting with software RAID a bit lately, using two
external 500GB drives. One is connected via USB, one via Firewire. It is
set up as a RAID5 with LVM on top so that I can easily
On Sun, 2 Dec 2007, Janek Kozicki wrote:
Justin Piszcz said: (by the date of Sun, 2 Dec 2007 04:11:59 -0500 (EST))
The badblocks did not do anything; however, when I built a software raid 5
and then performed a dd:
/usr/bin/time dd if=/dev/zero of=fill_disk bs=1M
I saw this somewhere
On Mon, 3 Dec 2007, Michael Tokarev wrote:
Justin Piszcz said: (by the date of Sun, 2 Dec 2007 04:11:59 -0500 (EST))
The badblocks did not do anything; however, when I built a software raid 5
and then performed a dd:
/usr/bin/time dd if=/dev/zero of=fill_disk bs=1M
I saw this somewhere
root      2206     1  4 Dec02 ?        00:10:37 dd if=/dev/zero of=1.out bs=1M
root      2207     1  4 Dec02 ?        00:10:38 dd if=/dev/zero of=2.out bs=1M
root      2208     1  4 Dec02 ?        00:10:35 dd if=/dev/zero of=3.out bs=1M
root      2209     1  4 Dec02 ?        00:10:45 dd if
On Mon, 3 Dec 2007, Neil Brown wrote:
On Sunday December 2, [EMAIL PROTECTED] wrote:
Anyway, the problems are back: To test my theory that everything is
alright with the CPU running within its specs, I removed one of the
drives while copying some large files yesterday. Initially, everything
Quick question,
Set up a new machine last night with two raptor 150 disks. Set up RAID1 as
I do everywhere else, 0.90.03 superblocks (in order to be compatible with
LILO, if you use 1.x superblocks with LILO you can't boot), and then:
/dev/sda1+sdb1 -> /dev/md0 -> swap
/dev/sda2+sdb2 -> /dev/md1
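For illustration, the kind of create command that yields the old 0.90-format
superblock mentioned above; the member names follow the quoted mapping, the rest
is assumed:
# mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1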
I am putting a new machine together and I have dual raptor raid 1 for the
root, which works just fine under all stress tests.
Then I have the WD 750 GiB drive (not RE2, desktop ones for ~150-160 on
sale nowadays):
I ran the following:
dd if=/dev/zero of=/dev/sdc
dd if=/dev/zero of=/dev/sdd
On Sat, 1 Dec 2007, Jan Engelhardt wrote:
On Dec 1 2007 06:26, Justin Piszcz wrote:
I ran the following:
dd if=/dev/zero of=/dev/sdc
dd if=/dev/zero of=/dev/sdd
dd if=/dev/zero of=/dev/sde
(as it is always a very good idea to do this with any new disk)
Why would you care about what's
On Sat, 1 Dec 2007, Jan Engelhardt wrote:
On Dec 1 2007 07:12, Justin Piszcz wrote:
On Sat, 1 Dec 2007, Jan Engelhardt wrote:
On Dec 1 2007 06:19, Justin Piszcz wrote:
RAID1, 0.90.03 superblocks (in order to be compatible with LILO, if
you use 1.x superblocks with LILO you can't boot
On Sat, 1 Dec 2007, Jan Engelhardt wrote:
On Dec 1 2007 06:19, Justin Piszcz wrote:
RAID1, 0.90.03 superblocks (in order to be compatible with LILO, if
you use 1.x superblocks with LILO you can't boot)
Says who? (Don't use LILO ;-)
I like LILO :)
, and then:
/dev/sda1+sdb1 -> /dev
On Sat, 1 Dec 2007, Janek Kozicki wrote:
Justin Piszcz said: (by the date of Sat, 1 Dec 2007 07:23:41 -0500 (EST))
dd if=/dev/zero of=/dev/sdc
The purpose is that with any new disk it's good to write to all the blocks and
let the drive do all of the re-mapping before you put 'real' data
On Wed, 14 Nov 2007, Peter Magnusson wrote:
On Wed, 14 Nov 2007, Justin Piszcz wrote:
This is a known bug in 2.6.23 and should be fixed in 2.6.23.2 if the RAID5
bio* patches are applied.
Ok, good to know.
Do you know when it first appeared because it existed in linux-2.6.22.3
also
On Wed, 14 Nov 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
This is a known bug in 2.6.23 and should be fixed in 2.6.23.2 if the RAID5
bio* patches are applied.
Note below he's running 2.6.22.3 which doesn't have the bug unless -STABLE
added it. So should not really be in 2.6.22
On Thu, 8 Nov 2007, Carlos Carvalho wrote:
Jeff Lessem ([EMAIL PROTECTED]) wrote on 6 November 2007 22:00:
Dan Williams wrote:
The following patch, also attached, cleans up cases where the code looks
at sh-ops.pending when it should be looking at the consistent
stack-based snapshot of
On Thu, 8 Nov 2007, BERTRAND Joël wrote:
BERTRAND Joël wrote:
Chuck Ebbert wrote:
On 11/05/2007 03:36 AM, BERTRAND Joël wrote:
Neil Brown wrote:
On Sunday November 4, [EMAIL PROTECTED] wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
On Tue, 6 Nov 2007, BERTRAND Joël wrote:
Done. Here is the obtained output:
[ 1265.899068] check 4: state 0x6 toread read
write f800fdd4e360 written
[ 1265.941328] check 3: state 0x1 toread read
On Tue, 6 Nov 2007, BERTRAND Joël wrote:
Justin Piszcz wrote:
On Tue, 6 Nov 2007, BERTRAND Joël wrote:
Done. Here is the obtained output:
[ 1265.899068] check 4: state 0x6 toread read
write f800fdd4e360 written
[ 1265.941328] check
On Mon, 5 Nov 2007, Dan Williams wrote:
On 11/4/07, Justin Piszcz [EMAIL PROTECTED] wrote:
On Mon, 5 Nov 2007, Neil Brown wrote:
On Sunday November 4, [EMAIL PROTECTED] wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root 273
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       273  0.0  0.0      0     0 ?        D    Oct21   14:40 [pdflush]
root       274  0.0  0.0      0     0 ?        D    Oct21   13:00 [pdflush]
After several days/weeks, this is the second time
: 60
high: 62
batch: 15
vm stats threshold: 42
all_unreclaimable: 0
prev_priority: 12
start_pfn: 1048576
On Sun, 4 Nov 2007, Justin Piszcz wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root
On Sun, 4 Nov 2007, BERTRAND Joël wrote:
Justin Piszcz wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       273  0.0  0.0      0     0 ?        D    Oct21   14:40 [pdflush]
root       274  0.0  0.0      0     0 ?        D    Oct21   13
On Mon, 5 Nov 2007, Neil Brown wrote:
On Sunday November 4, [EMAIL PROTECTED] wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       273  0.0  0.0      0     0 ?        D    Oct21   14:40 [pdflush]
root       274  0.0  0.0      0     0 ?
On Fri, 26 Oct 2007, Filippo Carletti wrote:
Is there a way to control an array resync process?
In particular, is it possible to skip read errors?
My setup:
LVM2 Phisical Volume over a two disks MD RAID1 array
Logical Volumes didn't span whole PV, some PE free at the end of disks
What
On Fri, 26 Oct 2007, Goswin von Brederlow wrote:
Justin Piszcz [EMAIL PROTECTED] writes:
On Fri, 19 Oct 2007, Alberto Alonso wrote:
On Thu, 2007-10-18 at 17:26 +0200, Goswin von Brederlow wrote:
Mike Accetta [EMAIL PROTECTED] writes:
What I would like to see is a timeout driven
Success.
On Thu, 25 Oct 2007, Daniel L. Miller wrote:
Sorry for consuming bandwidth - but all of a sudden I'm not seeing messages.
Is this going through?
--
Daniel
Success 2.
On Thu, 25 Oct 2007, Daniel L. Miller wrote:
Thanks for the test responses - I have re-subscribed...if I see this
myself...I'm back!
--
Daniel
On Mon, 22 Oct 2007, Louis-David Mitterrand wrote:
Hi,
[using kernel 2.6.23 and mdadm 2.6.3+20070929]
I have a rather flaky sata controller with which I am trying to resync a raid5
array. It usually starts failing after 40% of the resync is done. Short of
changing the controller (which I
On Tue, 23 Oct 2007, Richard Scobie wrote:
Peter wrote:
Thanks Justin, good to hear about some real world experience.
Hi Peter,
I recently built a 3 drive RAID5 using the onboard SATA controllers on an
MCP55 based board and get around 115MB/s write and 141MB/s read.
A fourth drive was
On Sat, 20 Oct 2007, Michael Tokarev wrote:
There was an idea some years ago about having an additional layer on
between a block device and whatever else is above it (filesystem or
something else), that will just do bad block remapping. Maybe it was
even implemented in LVM or IBM-proposed
On Fri, 19 Oct 2007, Doug Ledford wrote:
On Fri, 2007-10-19 at 13:05 -0400, Justin Piszcz wrote:
I'm sure an internal bitmap would. On RAID1 arrays, reads/writes are
never split up by a chunk size for stripes. A 2 MB read is a single
read, whereas on a raid4/5/6 array, a 2 MB read will end
On Fri, 19 Oct 2007, Doug Ledford wrote:
On Fri, 2007-10-19 at 12:45 -0400, Justin Piszcz wrote:
On Fri, 19 Oct 2007, John Stoffel wrote:
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin> Is a bitmap created by default with 1.x? I remember seeing
Justin> reports of 15-30
On Fri, 19 Oct 2007, John Stoffel wrote:
Doug == Doug Ledford [EMAIL PROTECTED] writes:
Doug> On Fri, 2007-10-19 at 11:46 -0400, John Stoffel wrote:
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin> On Fri, 19 Oct 2007, John Stoffel wrote:
So,
Is it time to start thinking
On Fri, 19 Oct 2007, Doug Ledford wrote:
On Fri, 2007-10-19 at 11:46 -0400, John Stoffel wrote:
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin> On Fri, 19 Oct 2007, John Stoffel wrote:
So,
Is it time to start thinking about deprecating the old 0.9, 1.0 and
1.1 formats to just
On Fri, 19 Oct 2007, John Stoffel wrote:
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin> On Fri, 19 Oct 2007, John Stoffel wrote:
So,
Is it time to start thinking about deprecating the old 0.9, 1.0 and
1.1 formats to just standardize on the 1.2 format? What are the
issues
On Fri, 19 Oct 2007, John Stoffel wrote:
So,
Is it time to start thinking about deprecating the old 0.9, 1.0 and
1.1 formats to just standardize on the 1.2 format? What are the
issues surrounding this?
It's certainly easy enough to change mdadm to default to the 1.2
format and to require
On Fri, 19 Oct 2007, Alberto Alonso wrote:
On Thu, 2007-10-18 at 17:26 +0200, Goswin von Brederlow wrote:
Mike Accetta [EMAIL PROTECTED] writes:
What I would like to see is a timeout driven fallback mechanism. If
one mirror does not return the requested data within a certain time
(say 1
On Fri, 19 Oct 2007, John Stoffel wrote:
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin> Is a bitmap created by default with 1.x? I remember seeing
Justin> reports of 15-30% performance degradation using a bitmap on a
Justin> RAID5 with 1.x.
Not according to the mdadm man page
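For reference, a write-intent bitmap is optional either way, and can be added to
or removed from an existing array after creation; a hedged sketch on an assumed
array name:
# mdadm --grow /dev/md3 --bitmap=internal
# mdadm --grow /dev/md3 --bitmap=none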
On Mon, 15 Oct 2007, Bernd Schubert wrote:
Hi,
in order to tune raid performance I did some benchmarks with and without the
stripe queue patches. 2.6.22 is only for comparison to rule out other
effects, e.g. the new scheduler, etc.
It seems there is a regression with these patches regarding
On Sat, 13 Oct 2007, Marko Berg wrote:
Bill Davidsen wrote:
Marko Berg wrote:
I added a fourth drive to a RAID 5 array. After some complications related
to adding a new HD controller at the same time, and thus changing some
device names, I re-created the array and got it working (in the
On Sat, 13 Oct 2007, Marko Berg wrote:
Corey Hickey wrote:
Marko Berg wrote:
Bill Davidsen wrote:
Marko Berg wrote:
Any suggestions on how to fix this, or what to investigate next, would
be appreciated!
I'm not sure what you're trying to fix here, everything you posted
looks sane.
On Thu, 11 Oct 2007, Andrew Clayton wrote:
On Thu, 11 Oct 2007 13:06:39 -0400, Bill Davidsen wrote:
Andrew Clayton wrote:
On Fri, 5 Oct 2007 16:56:03 -0400, John Stoffel wrote:
Can you start a 'vmstat 1' in one window, then start whatever
you do
to get crappy performance. That would
On Sun, 7 Oct 2007, Dean S. Messing wrote:
Justin Piszcz wrote:
On Fri, 5 Oct 2007, Dean S. Messing wrote:
Brendan Conoboy wrote:
snip
Is the onboard SATA controller real SATA or just an ATA-SATA
converter? If the latter, you're going to have trouble getting faster
performance than any
On Mon, 8 Oct 2007, Janek Kozicki wrote:
Hello,
Recently I started to use mdadm and I'm very impressed by its
capabilities.
I have raid0 (250+250 GB) on my workstation. And I want to have
raid5 (4*500 = 1500 GB) on my backup machine.
The backup machine currently doesn't have raid, just a