running 2.6.18.8-0.3-default on x86_64, openSUSE 10.2.
Am I doing something wrong or is something weird going on?
--
Jon Nelson [EMAIL PROTECTED]
On Thu, 31 May 2007, Richard Scobie wrote:
Jon Nelson wrote:
I am getting 70-80MB/s read rates as reported via dstat, and 60-80MB/s as
reported by dd. What I don't understand is why just one disk is being used
here, instead of two or more. I tried different versions of metadata
I completed a series of some 800+ tests on a 4-disk raid5, varying the
I/O scheduler, readahead of the components, readahead of the raid,
bitmap present or not, and filesystem, and arrived at some fairly
interesting results. I hope to throw them together in a more usable form
in the near future.
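For reference, these are the kinds of knobs such tests typically vary; a
minimal sketch, with device names and values purely illustrative:
# readahead of a component and of the array, in 512-byte sectors
blockdev --setra 256 /dev/sda
blockdev --setra 4096 /dev/md0
# switch the I/O scheduler on a component
echo deadline > /sys/block/sda/queue/scheduler
# check whether the array has a write-intent bitmap
mdadm --detail /dev/md0 | grep -i bitmap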
--
Jon
or not. Is there more than one bitmap?
--
Jon Nelson [EMAIL PROTECTED]
10 GB read test:
dd if=/dev/md0 bs=1M count=10240 of=/dev/null
eek! Make sure to use iflag=direct with that, otherwise you'll get
cached reads and that will throw the numbers off considerably.
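A minimal sketch of the suggested uncached form of that test (array name
as in the command above):
# iflag=direct opens the device O_DIRECT, bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=10240 iflag=direct
# alternatively, drop the page cache first and read normally
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/md0 of=/dev/null bs=1M count=10240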
--
Jon Nelson [EMAIL PROTECTED]
Use iflag=direct, otherwise you get caching.
--
Jon Nelson [EMAIL PROTECTED]
On 6/21/07, Jon Nelson [EMAIL PROTECTED] wrote:
I've been futzing with stripe_cache_size on a 3-component raid5,
using 2.6.18.8-0.3-default on x86_64 (openSUSE 10.2).
With the value set at 4096 I get pretty great write numbers.
At 2048 and below, the write numbers slowly drop.
However
On Thu, 21 Jun 2007, Jon Nelson wrote:
On Thu, 21 Jun 2007, Raz wrote:
What is your raid configuration?
Please note that the stripe_cache_size is acting as a bottleneck in some
cases.
Well, that's kind of the point of my email. I'll try to restate things,
as my question appears
. Why?
Question:
After performance goes bad, does it go back up if you reduce the size
back down to 384?
Yes, and almost instantly.
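For context, stripe_cache_size is a per-array sysfs tunable, counted in
pages per device; a minimal sketch, with the md name and value illustrative:
# read the current value, then try a larger cache and re-run the write test
cat /sys/block/md0/md/stripe_cache_size
echo 4096 > /sys/block/md0/md/stripe_cache_size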
--
Jon Nelson [EMAIL PROTECTED]
(also confirmed via dstat output) stayed
low, 2-3MB/s. At 26000 and up, the value jumped more or less instantly
to 70-74MB/s. What makes 26000 special? If I set the value to 2, why
do I still get 2-3MB/s actual?
--
Jon Nelson [EMAIL PROTECTED]
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for instance):
# Set minimum
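The truncated command above presumably raises the md resync speed floor;
a hedged sketch, with the value illustrative since the original figure is
cut off (units are KB/s):
# raise the minimum per-device resync speed; 30000 KB/s is only an example value
echo 30000 > /proc/sys/dev/raid/speed_limit_min
# equivalently via sysctl
sysctl -w dev.raid.speed_limit_min=30000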
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does
Please note: I'm having trouble w/gmail's formatting... so please
forgive this if it looks horrible. :-|
On 9/28/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Dean S. Messing wrote:
It has been some time since I read the rsync man page. I see that
there is (among the bazillion and one
On 9/28/07, Bill Davidsen [EMAIL PROTECTED] wrote:
What I don't understand is how you use hard links... because a hard link
needs to be in the same filesystem, and because a hard link is just
another pointer to the inode and doesn't make a physical copy of the
data to another device or to
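The hard-link scheme usually meant here is rsync's --link-dest, which
hard-links unchanged files against a previous snapshot on the same
destination filesystem; a minimal sketch with illustrative paths:
# files unchanged since the previous snapshot become hard links, not copies
rsync -a --link-dest=/backup/2007-09-27 /home/ /backup/2007-09-28/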
I have a software raid5 using /dev/sd{a,b,c}4.
It's been up for months, through many reboots.
I had to do a reboot using sysrq.
When the box came back up, the raid did not re-assemble.
I am not using bitmaps.
I believe it comes down to this:
md: kicking non-fresh sda4 from array!
what does that mean?
On 10/12/07, Andre Noll [EMAIL PROTECTED] wrote:
On 10:38, Jon Nelson wrote:
md: kicking non-fresh sda4 from array!
what does that mean?
sda4 was not included because the array has been assembled previously
using only sdb4 and sdc4. So the data on sda4 is out of date.
I don't
You said you had to reboot your box using sysrq. Chances are you triggered
the reboot after pending data had been written to sdb4 and sdc4,
but not to sda4. So sda4 appears to be non-fresh after the reboot and,
since mdadm refuses to use non-fresh devices, it kicks sda4.
Can mdadm be told
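The question is cut off here, but the usual recovery is to assemble the
array degraded from the fresh members and add the stale one back, letting
it resync; the md device name below is an assumption, and without a bitmap
the resync is a full rebuild:
mdadm --assemble --run /dev/md0 /dev/sdb4 /dev/sdc4   # starts degraded on the fresh members
mdadm /dev/md0 --add /dev/sda4                        # sda4 is rebuilt from the other two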
I was testing some network throughput today and ran into this.
I'm going to bet it's a forcedeth driver problem, but since it also
involves software raid I thought I'd include it.
Whom should I contact regarding the forcedeth problem?
The following is only a harmless informational message.
Unless
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O (as measured by dstat)
be really horrible.
--
Jon
On 12/6/07, David Rees [EMAIL PROTECTED] wrote:
On Dec 6, 2007 1:06 AM, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 5 Dec 2007, Jon Nelson wrote:
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O
This is what dstat shows me copying lots of large files around (ext3),
one file at a time.
I've benchmarked the raid itself at around 65-70 MB/s maximum actual
write I/O, so this 3-4MB/s stuff is pretty bad.
I should note that ALL other I/O suffers horribly, even on other filesystems.
What might the
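For anyone reproducing this, a typical way to watch per-device throughput
during the copy (device names illustrative):
# per-disk read/write rates, sampled every 5 seconds
dstat -d -D sda,sdb,sdc,md0 5
# roughly equivalent with sysstat
iostat -x 5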
This just happened to me.
Create raid with:
mdadm --create /dev/md2 --level=raid10 --raid-devices=3
--spare-devices=0 --layout=o2 /dev/sdb3 /dev/sdc3 /dev/sdd3
cat /proc/mdstat
md2 : active raid10 sdd3[2] sdc3[1] sdb3[0]
5855424 blocks 64K chunks 2 offset-copies [3/3] [UUU]
On 12/18/07, Thiemo Nagel [EMAIL PROTECTED] wrote:
Performance of the raw device is fair:
# dd if=/dev/md2 of=/dev/zero bs=128k count=64k
8589934592 bytes (8.6 GB) copied, 15.6071 seconds, 550 MB/s
Somewhat less through ext3 (created with -E stride=64):
# dd if=largetestfile
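For reference, -E stride maps the ext3 allocator to the RAID chunk:
stride = chunk size / filesystem block size. A hedged sketch, assuming
4 KiB blocks and a 256 KiB chunk (256/4 = 64, matching the stride=64 above):
# adjust stride to your actual chunk size and block size
mke2fs -j -b 4096 -E stride=64 /dev/md2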
On 12/19/07, Justin Piszcz [EMAIL PROTECTED] wrote:
On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
From that setup it seems simple, scrap the partition table and use the
disk device for raid. This is what we do for all data storage disks (hw
raid)
and sw raid members.
/Mattias
On 12/19/07, Bill Davidsen [EMAIL PROTECTED] wrote:
As other posts have detailed, putting the partition on a 64k aligned
boundary can address the performance problems. However, a poor choice of
chunk size, cache_buffer size, or just random i/o in small sizes can eat
up a lot of the benefit.
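A minimal sketch of what a 64k-aligned partition start looks like with
parted (disk name illustrative; 128 sectors x 512 bytes = 64 KiB):
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 128s 100%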
On 12/19/07, Michal Soltys [EMAIL PROTECTED] wrote:
Justin Piszcz wrote:
Or is there a better way to do this? Does parted handle this situation
better?
What is the best (and correct) way to calculate stripe-alignment on the
RAID5 device itself?
Does this also apply to Linux/SW
On 12/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday December 18, [EMAIL PROTECTED] wrote:
This just happened to me.
Create raid with:
mdadm --create /dev/md2 --level=raid10 --raid-devices=3
--spare-devices=0 --layout=o2 /dev/sdb3 /dev/sdc3 /dev/sdd3
cat /proc/mdstat
md2
On 12/19/07, Jon Nelson [EMAIL PROTECTED] wrote:
On 12/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday December 18, [EMAIL PROTECTED] wrote:
This just happened to me.
Create raid with:
mdadm --create /dev/md2 --level=raid10 --raid-devices=3
--spare-devices=0 --layout=o2
On 12/22/07, Neil Brown [EMAIL PROTECTED] wrote:
On Wednesday December 19, [EMAIL PROTECTED] wrote:
On 12/19/07, Jon Nelson [EMAIL PROTECTED] wrote:
On 12/19/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday December 18, [EMAIL PROTECTED] wrote:
I tried to stop the array
On 12/22/07, Janek Kozicki [EMAIL PROTECTED] wrote:
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)
Janek Kozicki wrote:
what's your kernel version? I recall that recently there has been
some work regarding load balancing.
It was in my original email:
On 12/23/07, maobo [EMAIL PROTECTED] wrote:
Hi, all.
Yes, I agree with some of you. But in my tests, using both a real-life trace
and Iometer, I found that for purely read requests RAID0 is better than
RAID10 (with the same number of data disks: 3 disks in RAID0, 6 in RAID10). I don't
know why this
I've found in some tests that raid10,f2 gives me the best I/O of any
raid5 or raid10 format. However, the performance of raid10,o2 and
raid10,n2 in degraded mode is nearly identical to the non-degraded
mode performance (for me, this hovers around 100MB/s). raid10,f2 has
degraded mode performance,
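For anyone wanting to repeat the comparison, the layout is chosen at
creation time; a minimal sketch with illustrative device names:
# far-2 layout; swap f2 for o2 or n2 to compare the other layouts
mdadm --create /dev/md3 --level=10 --raid-devices=4 --layout=f2 /dev/sd[b-e]1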
This isn't a high priority issue or anything, but I'm curious:
I --stop(ped) an array but /sys/block/md2 remained largely populated.
Is that intentional?
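For anyone wanting to reproduce the observation (array name as in the
message above):
mdadm --stop /dev/md2
ls /sys/block/md2/md    # per the report above, this stays largely populated after the stop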
--
Jon
On Feb 3, 2008 5:29 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)
wow, thanks for quick reply :)
3. Another thing - would raid10,far=2 work when three drives are used?
Would it increase the read performance?
Yes.
On Feb 6, 2008 12:43 PM, Bill Davidsen [EMAIL PROTECTED] wrote:
Can you create a raid10 with one drive missing and add it later? I
know, I should try it when I get a machine free... but I'm being lazy today.
Yes you can. With 3 drives, however, performance will be awful (at
least with layout
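A minimal sketch of doing exactly that, with illustrative device names;
the keyword "missing" holds the empty slot until the last drive is added:
# create the array one member short, then add the final drive later
mdadm --create /dev/md4 --level=10 --raid-devices=4 --layout=f2 /dev/sd[bcd]1 missing
mdadm /dev/md4 --add /dev/sde1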
On Feb 19, 2008 1:41 PM, Oliver Martin
[EMAIL PROTECTED] wrote:
Janek Kozicki wrote:
hold on. This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere, there may indeed be some
performance drop. Best to make sure that those chunks are aligned
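One way to make sure they line up is to align the start of the LVM data
area to the full RAID stripe. A hedged sketch, assuming a 64 KiB chunk
with two data disks (a 128 KiB stripe) and an LVM2 recent enough to have
--dataalignment; names and sizes are illustrative:
# align the PV data area to the 128 KiB stripe (64 KiB chunk x 2 data disks assumed)
pvcreate --dataalignment 128k /dev/md2
vgcreate vg0 /dev/md2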