On Feb 19, 2008 1:41 PM, Oliver Martin
<[EMAIL PROTECTED]> wrote:
> Janek Kozicki schrieb:
> > hold on. This might be related to raid chunk positioning with respect
> > to LVM chunk positioning. If they interfere there indeed may be some
> > performance drop. Best to make sure that those chunks are
On Feb 6, 2008 12:43 PM, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> Can you create a raid10 with one drive "missing" and add it later? I
> know, I should try it when I get a machine free... but I'm being lazy today.
Yes you can. With 3 drives, however, performance will be awful (at
least with lay
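For reference, the mechanics look roughly like this (not from the thread itself; device names are hypothetical and the layout is whatever you normally use):
  # Create a 4-device raid10 with one slot left empty ("missing"); it starts degraded.
  mdadm --create /dev/md0 --level=10 --raid-devices=4 --layout=n2 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 missing
  # Add the last disk whenever it is available; md then rebuilds onto it.
  mdadm /dev/md0 --add /dev/sde1
  cat /proc/mdstat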
On Feb 3, 2008 5:29 PM, Janek Kozicki <[EMAIL PROTECTED]> wrote:
> Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)
>
> wow, thanks for quick reply :)
>
> > > 3. Another thing - would raid10,far=2 work when three drives are used?
> > >Would it increase the read performance?
This isn't a high priority issue or anything, but I'm curious:
I --stop(ped) an array but /sys/block/md2 remained largely populated.
Is that intentional?
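For anyone wanting to reproduce the observation, roughly (md2 as above):
  mdadm --stop /dev/md2       # stop the array
  ls /sys/block/md2/md        # see which attributes are still present afterwards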
I've found in some tests that raid10,f2 gives me the best I/O of any
raid5 or raid10 format. However, the performance of raid10,o2 and
raid10,n2 in degraded mode is nearly identical to the non-degraded
mode performance (for me, this hovers around 100MB/s). raid10,f2 has
degraded mode performance,
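In case anyone wants to repeat the comparison, the pattern is roughly the following (a sketch only; device names and sizes are hypothetical, and the array is recreated with each layout in turn before rerunning the same test):
  # --layout takes n2 (near), o2 (offset) or f2 (far) copies.
  mdadm --create /dev/md0 --level=10 --raid-devices=3 --layout=f2 \
        /dev/sdb3 /dev/sdc3 /dev/sdd3
  # Sequential read test that bypasses the page cache.
  dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
  # Fail one member and repeat the dd to measure degraded-mode performance.
  mdadm /dev/md0 --fail /dev/sdd3
  mdadm --stop /dev/md0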
On 12/23/07, maobo <[EMAIL PROTECTED]> wrote:
> Hi,all
>
> Yes, I agree with some of you. But in my tests, both with a real-life trace and
> with Iometer, I found that for purely read workloads RAID0 is better than
> RAID10 (with the same number of data disks: 3 in RAID0, 6 in RAID10). I don't
> know why
On 12/22/07, Janek Kozicki <[EMAIL PROTECTED]> wrote:
> Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)
>
> > Janek Kozicki wrote:
> > > what's your kernel version? I recall that recently there have been
> > > some works regarding load balancing.
> >
> > It was in my orig
On 12/22/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Wednesday December 19, [EMAIL PROTECTED] wrote:
> > On 12/19/07, Jon Nelson <[EMAIL PROTECTED]> wrote:
> > > On 12/19/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > > > On Tuesday December 18
On 12/19/07, Jon Nelson <[EMAIL PROTECTED]> wrote:
> On 12/19/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> > On Tuesday December 18, [EMAIL PROTECTED] wrote:
> > > This just happened to me.
> > > Create raid with:
> > >
> > > mdadm --crea
On 12/19/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Tuesday December 18, [EMAIL PROTECTED] wrote:
> > This just happened to me.
> > Create raid with:
> >
> > mdadm --create /dev/md2 --level=raid10 --raid-devices=3
> > --spare-devices=0 --layout=o2 /dev/sdb3 /dev/sdc3 /dev/sdd3
> >
> > cat /proc
On 12/19/07, Michal Soltys <[EMAIL PROTECTED]> wrote:
> Justin Piszcz wrote:
> >
> > Or is there a better way to do this, does parted handle this situation
> > better?
> >
> > What is the best (and correct) way to calculate stripe-alignment on the
> > RAID5 device itself?
> >
> >
> > Does this also
On 12/19/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> As other posts have detailed, putting the partition on a 64k aligned
> boundary can address the performance problems. However, a poor choice of
> chunk size, cache_buffer size, or just random i/o in small sizes can eat
> up a lot of the benefi
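For completeness, one way to get a partition onto a 64 KiB boundary (a sketch, not from the thread; sector numbers are illustrative, 128 sectors x 512 B = 64 KiB, and the disk is assumed to already have a partition table):
  # Create a partition starting at sector 128 so it is 64 KiB aligned.
  parted -s /dev/sdb unit s mkpart primary 128 100%
  # Verify: the starting sector of an existing partition should divide evenly by 128.
  cat /sys/block/sdb/sdb1/start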
On 12/19/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
>
>
> On Wed, 19 Dec 2007, Mattias Wadenstein wrote:
> >> From that setup it seems simple, scrap the partition table and use the
> > disk device for raid. This is what we do for all data storage disks (hw
> > raid)
> > and sw raid members.
> >
On 12/18/07, Thiemo Nagel <[EMAIL PROTECTED]> wrote:
> >> Performance of the raw device is fair:
> >> # dd if=/dev/md2 of=/dev/zero bs=128k count=64k
> >> 8589934592 bytes (8.6 GB) copied, 15.6071 seconds, 550 MB/s
> >>
> >> Somewhat less through ext3 (created with -E stride=64):
> >> # dd if=large
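As an aside, the stride/stripe-width arithmetic behind the -E option above goes roughly like this (the 256 KiB chunk is only inferred from stride=64 with 4 KiB blocks, the 4-disk RAID5 with 3 data disks is an assumption, and stripe-width needs a reasonably recent e2fsprogs):
  # stride       = chunk size / fs block size = 256 KiB / 4 KiB = 64
  # stripe-width = stride * data disks        = 64 * 3         = 192
  mkfs.ext3 -b 4096 -E stride=64,stripe-width=192 /dev/md2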
This just happened to me.
Create raid with:
mdadm --create /dev/md2 --level=raid10 --raid-devices=3
--spare-devices=0 --layout=o2 /dev/sdb3 /dev/sdc3 /dev/sdd3
cat /proc/mdstat
md2 : active raid10 sdd3[2] sdc3[1] sdb3[0]
5855424 blocks 64K chunks 2 offset-copies [3/3] [UUU]
[==>.
This is what dstat shows me copying lots of large files about (ext3),
one file at a time.
I've benchmarked the raid itself at around 65-70 MB/s maximum actual
write I/O, so this 3-4 MB/s stuff is pretty bad.
I should note that ALL other I/O suffers horribly, even on other filesystems.
What might the cause be?
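For narrowing this kind of thing down, watching the members and the md device separately usually helps (standard dstat/iostat usage; device names hypothetical):
  dstat -d -D sdb,sdc,sdd,md2 1   # per-device throughput, one-second samples
  iostat -x 1                     # per-disk utilisation and average request size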
On 12/6/07, David Rees <[EMAIL PROTECTED]> wrote:
> On Dec 6, 2007 1:06 AM, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> > On Wed, 5 Dec 2007, Jon Nelson wrote:
> >
> > > I saw something really similar while moving some very large (300MB to
> > > 4G
I saw something really similar while moving some very large (300MB to
4GB) files.
I was really surprised to see actual disk I/O (as measured by dstat)
be really horrible.
I was testing some network throughput today and ran into this.
I'm going to bet it's a forcedeth driver problem but since it also
involves software raid, I thought I'd include it.
Whom should I contact regarding the forcedeth problem?
The following is only a harmless informational message.
Unless y
> You said you had to reboot your box using sysrq. There are chances you
> caused the reboot while all pending data was written to sdb4 and sdc4,
> but not to sda4. So sda4 appears to be non-fresh after the reboot and,
> since mdadm refuses to use non-fresh devices, it kicks sda4.
Can mdadm be tol
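For reference, the usual ways of getting a kicked, non-fresh member back in look roughly like this (the md device name is assumed; without a write-intent bitmap the re-added disk gets a full resync):
  # Add the kicked member back into the running, degraded array.
  mdadm /dev/md0 --add /dev/sda4
  # Or, if the array will not assemble at all, force assembly from the
  # freshest superblocks (use with care).
  mdadm --assemble --force /dev/md0 /dev/sda4 /dev/sdb4 /dev/sdc4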
On 10/12/07, Andre Noll <[EMAIL PROTECTED]> wrote:
> On 10:38, Jon Nelson wrote:
> > <4>md: kicking non-fresh sda4 from array!
> >
> > what does that mean?
>
> sda4 was not included because the array has been assembled previously
> using only sdb4 and sd
I have a software raid5 using /dev/sd{a,b,c}4.
It's been up for months, through many reboots.
I had to do a reboot using sysrq
When the box came back up, the raid did not re-assemble.
I am not using bitmaps.
I believe it comes down to this:
<4>md: kicking non-fresh sda4 from array!
what does t
On 9/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> What I don't understand is how you use hard links... because a hard link
> needs to be in the same filesystem, and because a hard link is just
> another pointer to the inode and doesn't make a physical copy of the
> data to another device or to
Please note: I'm having trouble w/gmail's formatting... so please
forgive this if it looks horrible. :-|
On 9/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
>
> Dean S. Messing wrote:
> > It has been some time since I read the rsync man page. I see that
> > there is (among the bazillion and one
On Thu, 28 Jun 2007, Matti Aarnio wrote:
> I do have LVM in between the MD-RAID5 and XFS, so I did also align
> the LVM to that 3 * 256k.
How did you align the LVM ?
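For the record, with later LVM2 releases the data-area alignment can be set directly when the PV is created (the --dataalignment option postdates this thread; the 768 KiB figure is 3 x 256 KiB from the quoted setup, and the device name is hypothetical):
  # Start the PV data area on a full-stripe boundary.
  pvcreate --dataalignment 768k /dev/md0
  # Check where the first physical extent actually begins.
  pvs -o +pe_start /dev/md0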
On Tue, 26 Jun 2007, Justin Piszcz wrote:
>
>
> On Tue, 26 Jun 2007, Jon Nelson wrote:
>
> > On Tue, 26 Jun 2007, Justin Piszcz wrote:
> >
> > >
> > >
> > > On Tue, 26 Jun 2007, Jon Nelson wrote:
> > >
> > > > On Mon, 25 J
On Tue, 26 Jun 2007, Justin Piszcz wrote:
>
>
> On Tue, 26 Jun 2007, Jon Nelson wrote:
>
> > On Mon, 25 Jun 2007, Justin Piszcz wrote:
> >
> > > Neil has a patch for the bad speed.
> >
> > What does the patch do?
> >
> > > In the
Weird behavior:
at values below 26000 the rate (also confirmed via dstat output) stayed
low, 2-3 MB/s. At 26000 and up, the value jumped more or less instantly
to 70-74MB/s. What makes 26000 special? If I set the value to 2 why
do I still get 2-3MB/s actual?
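For anyone following along, stripe_cache_size is a per-array sysfs knob, and the cache it controls costs roughly value x 4 KiB x number of member devices of RAM, which is worth keeping in mind before pushing it into the tens of thousands (md device name hypothetical):
  cat /sys/block/md0/md/stripe_cache_size          # default is 256
  echo 8192 > /sys/block/md0/md/stripe_cache_size  # costs about 8192 * 4 KiB * nr_disks of RAM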
' (and
> > rebuild, one can assume) performance. Why?
> >
> Question:
> After performance goes "bad" does it go back up if you reduce the size
> back down to 384?
Yes, and almost instantly.
On Thu, 21 Jun 2007, Jon Nelson wrote:
> On Thu, 21 Jun 2007, Raz wrote:
>
> > What is your raid configuration ?
> > Please note that the stripe_cache_size is acting as a bottle neck in some
> > cases.
Well, that's kind of the point of my email. I'll try
space is quiescent).
> On 6/21/07, Jon Nelson <[EMAIL PROTECTED]> wrote:
> >
> > I've been futzing with stripe_cache_size on a 3x component raid5,
> > using 2.6.18.8-0.3-default on x86_64 (openSUSE 10.2).
> >
> > With the value set at 4096 I get pretty great wr
/s. Wow!
Can somebody 'splain to me what is going on?
ould play games
with fault+remove of the "borrowed" drive and replace it or whatever
you want to do...
Otherwise, I don't think you can use mdadm to accomplish this.
rds out
> 50240 bytes (502 MB) copied, 18.6172 s, 27.0 MB/s
And what is it like with 'iflag=direct'? I really feel you have to
use it, otherwise you're measuring the cache.
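Concretely, the suggested direct-I/O read looks something like this (device name and sizes illustrative):
  # O_DIRECT bypasses the page cache, so the result reflects the device itself.
  dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct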
> > > > to get into the 20-30MB/s area. Too much asked for?
> > > >
> > > > Dex
> > >
> > > What do you get without LVM?
> >
> > Hard to tell: the PV hogs all of the disk space, can't really do non-LVM
> > tests.
>
> Y
> >The bitmap "file" is only 150KB or so in size, why does storing it
> >internally cause such a huge performance problem?
>
> If the bitmap is internal, you have to keep seeking to the end of the
> devices to update the bitmap. If the bitmap is external and
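For reference, moving the bitmap out of the array is a two-step --grow (the external file must live on a filesystem that is not on this array; the md device name and the file path are hypothetical):
  mdadm --grow /dev/md0 --bitmap=none              # drop the internal bitmap
  mdadm --grow /dev/md0 --bitmap=/boot/md0-bitmap  # recreate it as an external file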
" is only 150KB or so in size, why does storing it
internally cause such a huge performance problem?
throw them together in a more usable form
in the near future.
ce reads using dd with
iflag=direct). What *should* I be able to get?
On Wed, 30 May 2007, Jon Nelson wrote:
> On Thu, 31 May 2007, Richard Scobie wrote:
>
> > Jon Nelson wrote:
> >
> > > I am getting 70-80MB/s read rates as reported via dstat, and 60-80MB/s as
> > > reported by dd. What I don't understand is why just one
On Thu, 31 May 2007, Richard Scobie wrote:
> Jon Nelson wrote:
>
> > I am getting 70-80MB/s read rates as reported via dstat, and 60-80MB/s as
> > reported by dd. What I don't understand is why just one disk is being used
> > here, instead of two or more. I tried d
-devices=3 /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3
I am running 2.6.18.8-0.3-default on x86_64, openSUSE 10.2.
Am I doing something wrong or is something weird going on?