Re: MD RAID1 performance very different from non-RAID partition

2007-09-18 Thread Luca Berra

On Mon, Sep 17, 2007 at 10:58:11AM -0500, Jordan Russell wrote:

Goswin von Brederlow wrote:

Jordan Russell [EMAIL PROTECTED] writes:

It's an ext3 partition, so I guess that doesn't apply?

I tried remounting /dev/sda2 with the barrier=0 option (which I assume
disables barriers, looking at the source), though, just to see if it
would make any difference, but it didn't; the database build still took
31 minutes.


Compare the read ahead settings.


I'm not sure what you mean. There's a per-mount read ahead setting?



per device

compare
blockdev --getra /dev/sda2
and 
blockdev --getra /dev/md0
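
(If the two values differ substantially, the array's read-ahead can be raised with --setra; the value is in 512-byte sectors, and the number below is only an illustrative example, not a recommendation from the original mail:)

# set read-ahead on the md device to 8192 sectors (4 MiB)
blockdev --setra 8192 /dev/md0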


L.
--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
/\
\ / ASCII RIBBON CAMPAIGN
 X   AGAINST HTML MAIL
/ \


Re: mke2fs stuck in D state while creating filesystem on md*

2007-09-18 Thread Bill Davidsen

Wiesner Thomas wrote:
Before running Linux software RAID on a production system, I'm trying
to get the hang of it using 4 loopback devices on which I play with RAID5.

I've:
* mdadm - v2.6.3 - 20th August 2007
* mke2fs 1.37 (21-Mar-2005) Using EXT2FS Library version 1.37
* Linux hazard 2.6.22.5 #2 PREEMPT Wed Aug 29 13:06:26 CEST 2007 
i686 GNU/Linux


The HDD can transfer approximately 40MB/s and the loopback files are
only 200 to 350MB.
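
(For reference, a typical way to set up loopback devices like this for RAID testing looks roughly as follows; the file names and sizes are illustrative and not taken from the report above:)

# create small backing files and attach them to loop devices
dd if=/dev/zero of=/tmp/disk0.img bs=1M count=300
dd if=/dev/zero of=/tmp/disk1.img bs=1M count=300
dd if=/dev/zero of=/tmp/disk2.img bs=1M count=300
losetup /dev/loop0 /tmp/disk0.img
losetup /dev/loop1 /tmp/disk1.img
losetup /dev/loop2 /tmp/disk2.img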


Well, when I create an MD array with
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 
/dev/loop0 /dev/loop1 /dev/loop2

and try to create the filesystem on it with
mke2fs /dev/md0
mke2fs gets stuck in D state. Not always, but sometimes. It hangs there
for different amounts of time, at least a minute or two (far longer than
the filesystem creation itself takes). Afterwards, the process continues
and finishes. Sometimes it hangs at 6/65 blocks, sometimes earlier,
sometimes later, but always quite early.


The interesting line of a ps ax looks like:
1609 tty2 D+ 0:00 mke2fs -j /dev/md0

It doesn't seem to be a real showstopper, but I don't think it's normal
behaviour.
While stuck, I don't see any CPU or disk activity. Trying to kill the
mke2fs seems to lead to a completely stuck mke2fs (it doesn't get out of
D state), but I don't know that for sure, because I did that only once
and didn't wait very long before I rebooted.
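
(One way to gather more information on where a D-state process is blocked; these are standard commands, not something the original reporter ran:)

# show the wait channel of the stuck process
ps axo pid,stat,wchan:32,cmd | grep mke2fs
# dump the state and kernel stacks of all tasks to the kernel log (needs SysRq enabled)
echo 1 > /proc/sys/kernel/sysrq
echo t > /proc/sysrq-trigger
dmesg | less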

The base system is a Debian 3.1 (Sarge) with a plain vanilla 2.6.22.5
kernel and the latest mdadm.


If you need additional info or want me to try something, I'm willing to
do that, because I have some time ATM.


Has there been any progress on this? I think I saw it, or something
similar, during some testing of recent 2.6.23-rc kernels: one mke2fs took
about 11 min longer than all the others (~2 min), and it was not
repeatable. I worry that processes of more interest will have the same hang.


--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: [linux-lvm] Q: Online resizing ext3 FS

2007-09-18 Thread Bill Davidsen

Goswin von Brederlow wrote:

Tomasz Chmielewski [EMAIL PROTECTED] writes:

  

Chris Osicki wrote:


Hi

I apologize in advance for asking a question not really appropriate
for this mailing list, but I couldn't find a better place with lots of
people managing lots of disk space.

The question:
Has any of you been using ext2online to resize (large) ext3 filesystems?
I have to do it, going from 500GB to 1TB, on a production system, and I was
wondering if you have any horror/success stories.
I'm using RHEL4/U4 (kernel 2.6.9) on this system.
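
(For reference, the usual online-grow sequence on RHEL4 looks roughly like the following, assuming the filesystem sits on an LVM logical volume; the volume group and LV names are purely illustrative:)

# grow the logical volume by 500GB (device name is hypothetical)
lvextend -L +500G /dev/VolGroup00/data
# grow the mounted ext3 filesystem to fill the enlarged volume
ext2online /dev/VolGroup00/data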
  


That kernel seems to be a bit old. Better upgrade first.
  


You don't upgrade when using the stable releases... that's the whole
idea: you don't have to worry about new versions of anything, the bugs
and security issues are backported, but the version stays the same.
Highly desirable for must-work systems, not so nice for doing cutting-edge
stuff using nice features you don't have. :-(


--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: [linux-lvm] Q: Online resizing ext3 FS

2007-09-18 Thread Goswin von Brederlow
Bill Davidsen [EMAIL PROTECTED] writes:

 Goswin von Brederlow wrote:
 I'm using RHEL4/U4 (kernel 2.6.9) on this system.


 That kernel seems to be a bit old. Better upgrade first.


 You don't upgrade when using the stable releases... that's the whole
 idea: you don't have to worry about new versions of anything, the
 bugs and security issues are backported, but the version stays the
 same. Highly desirable for must-work systems, not so nice for doing
 cutting-edge stuff using nice features you don't have. :-(

Debian has 2.6.18 in stable. And they say Debian stable is way too old to
use. :)

MfG
Goswin


Help: very slow software RAID 5.

2007-09-18 Thread Dean S. Messing


I'm not getting nearly the read speed I expected
from a newly defined software RAID 5 array
across three disk partitions (on the 3 drives,
of course!).

Would someone kindly point me straight?

After defining the RAID 5 I did `hdparm -t /dev/md0'
and got the abysmal read speed of ~65MB/sec.
The individual device speeds are ~55, ~70,
and ~75 MB/sec.

Shouldn't this array be running (at the slowest)
at about 55+70 = 125 MB/sec minus some overhead?
I defined a RAID0 on the ~55 and ~70 partitions
and got about 110 MB/sec.

Shouldn't adding a 3rd (faster!) drive into the
array make the RAID 5 speed at least this fast?


Here are the details of my setup:

Linux Fedora 7, kernel 2.6.22.

# fdisk -l /dev/sda

Disk /dev/sda: 160.0 GB, 1600 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   1 127 1020096   82  Linux swap / Solaris
/dev/sda2   * 128 143  128520   83  Linux
/dev/sda3 144   19452   155099542+  fd  Linux raid autodetect


# fdisk -l /dev/sdb

Disk /dev/sdb: 160.0 GB, 1600 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sdb1   *   1 127 1020096   82  Linux swap / Solaris
/dev/sdb2 128 143  128520   83  Linux
/dev/sdb3 144   19452   155099542+  fd  Linux raid autodetect



# fdisk -l /dev/sdc

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sdc1   *   1 127 1020096   82  Linux swap / Solaris
/dev/sdc2 128   19436   155099542+  fd  Linux raid autodetect
/dev/sdc3   19437   60801   332264362+  8e  Linux LVM


The RAID 5 consists of sda3, sdb3, and sdc2.
These partitions have these individual read speeds:

# hdparm -t /dev/sda3 /dev/sdb3 /dev/sdc2

/dev/sda3:
 Timing buffered disk reads:  168 MB in  3.03 seconds =  55.39 MB/sec

/dev/sdb3:
 Timing buffered disk reads:  216 MB in  3.03 seconds =  71.35 MB/sec

/dev/sdc2:
 Timing buffered disk reads:  228 MB in  3.02 seconds =  75.49 MB/sec


After defining RAID 5 with:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda3 
/dev/sdb3 /dev/sdc2

and waiting the 50 minutes for /proc/mdstat to show it was finished,
I did `hdparm -t /dev/md0' and got ~65MB/sec.
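
(For reference, the state of that initial sync and the array geometry can be checked like this; nothing here is specific to the setup above:)

# watch the initial resync progress
watch -n 5 cat /proc/mdstat
# show chunk size, layout and component devices once the array is up
mdadm --detail /dev/md0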

Dean




Re: Help: very slow software RAID 5.

2007-09-18 Thread Justin Piszcz



On Tue, 18 Sep 2007, Dean S. Messing wrote:




snip



Without tuning you will get very slow read speeds.

Read the mailing list archives; there are about 5-10 tunable options. For me,
I get 250 MiB/s with no tuning (read/write); after tuning, 464 MiB/s write
and 622 MiB/s read.
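
(The tunables usually discussed on the list include the md stripe cache, read-ahead on the array device, and the resync speed limits; the values below are illustrative starting points only, not the settings Justin used:)

# raise the RAID5/6 stripe cache (in pages per device); helps sequential writes
echo 8192 > /sys/block/md0/md/stripe_cache_size
# raise read-ahead on the array device (value is in 512-byte sectors)
blockdev --setra 16384 /dev/md0
# let resync/rebuild run faster if desired (KB/s)
echo 200000 > /proc/sys/dev/raid/speed_limit_max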


Justin.


Re: Help: very slow software RAID 5.

2007-09-18 Thread Dean S. Messing


Justin Piszcz wrote:
On Tue, 18 Sep 2007, Dean S. Messing wrote:
: 
: 
: 
:  I'm not getting nearly the read speed I expected
:  from a newly defined software RAID 5 array
:  across three disk partitions (on the 3 drives,
:  of course!).
: 
:  Would someone kindly point me straight?
: 
:  After defining the RAID 5 I did `hdparm -t /dev/md0'
:  and got the abysmal read speed of ~65MB/sec.
:  The individual device speeds are ~55, ~70,
:  and ~75 MB/sec.
: 
:  Shouldn't this array be running (at the slowest)
:  at about 55+70 = 125 MB/sec minus some overhead?
:  I defined a RAID0 on the ~55 and ~70 partitions
:  and got about 110 MB/sec.
: 
:  Shouldn't adding a 3rd (faster!) drive into the
:  array make the RAID 5 speed at least this fast?
: 
: 
:  Here are the details of my setup:
: 

snip

: Without tuning you will get very slow read speed.
: 
: Read the mailing list, there are about 5-10 tunable options, for me, I get 
: 250 MiB/s no tuning (read/write), after tuning 464 MiB/s write and 622 
: MiB/s read.
: 
: Justin.

Thanks Justin.

5-10 tunable options!  Good grief.  This sounds worse than regulating
my 80-year-old grandfather clock.  (I'm quite a n00bie at this.)  Are
there any nasty system side effects to tuning these parameters?  This
is not a server I'm working with.  It's my research desktop machine.
I do lots and lots of different things on it.

I started out with RAID 0, but after reading a lot I learned that this
is Dangerous.  So I bought a 3rd disk to do RAID 5.  I intend to put
LVM on top of the RAID 5 once I get it running at the speed it's
supposed to, and then copy my entire Linux system onto it.  Are these
tuned parameters going to mess other things up?
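
(The LVM-on-md part of that plan is the straightforward bit; a minimal sketch, with volume group and LV names that are purely illustrative:)

# put LVM on top of the finished array
pvcreate /dev/md0
vgcreate vg_raid /dev/md0
lvcreate -L 100G -n lv_root vg_raid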

Is there official documentation on this relative to RAID 5?  I don't
see much online.

One thing I did learn is that if I use the --direct switch with
`hdparm' I get a much greater read speed: it goes from ~65 MB/s to ~120 MB/s.

I have no idea what --direct does.  Yes, I've read the man page.
It says that it bypasses the page cache.  Ooookay.

Alas, using dd I find that the real read speed is still around ~65 MB/s.
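
(For what it's worth, dd can make the same cached-versus-direct comparison; iflag=direct opens the device with O_DIRECT, bypassing the page cache much like hdparm's --direct option. The block size and count below are just examples:)

# sequential read through the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=2048
# the same read with O_DIRECT, bypassing the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct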

Sorry for these musings.  I'm just uncomfortable trying to diddle
with 5-10 system parameters without knowing what I'm doing.

Any help or pointers to documentation on tuning these for RAID-5 and what
the tradeoffs are would be appreciated.

Thanks.
Dean
