On Tue, Feb 19, 2008 at 01:52:21PM -0600, Jon Nelson wrote:
On Feb 19, 2008 1:41 PM, Oliver Martin [EMAIL PROTECTED] wrote:
Janek Kozicki wrote:
$ hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 148 MB in 3.01 seconds = 49.13 MB/sec
$ hdparm -t /dev/dm-0
On Fri, Feb 08, 2008 at 08:54:55AM -0500, Justin Piszcz wrote:
The promise tx4 pci works great and supports sata/300+ncq/etc $60-$70.
Wait, I have used the tx4 pci up until ~2.6.22 and AFAIK it didn't support
NCQ. Are you sure that the current driver supports NCQ? I might then revive
that card :)
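Whether NCQ was actually negotiated can be checked without reinstalling the
card; a minimal check, assuming the drive shows up as /dev/sda:

$ cat /sys/block/sda/device/queue_depth   # >1 means NCQ is active, 1 means off or unsupported
$ dmesg | grep -i ncq                     # libata logs the negotiated queue depth at probe time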
On Fri, Feb 08, 2008 at 02:24:15PM -0500, Justin Piszcz wrote:
On Fri, 8 Feb 2008, Iustin Pop wrote:
On Fri, Feb 08, 2008 at 08:54:55AM -0500, Justin Piszcz wrote:
The promise tx4 pci works great and supports sata/300+ncq/etc $60-$70.
Wait, I have used tx4 pci up until ~2.6.22
On Thu, Feb 07, 2008 at 01:31:16AM +0100, Keld Jørn Simonsen wrote:
Anyway, why does a SATA-II drive not deliver something like 300 MB/s?
Wait, are you talking about a *single* drive?
In that case, it seems you are confusing the interface speed (300MB/s)
with the mechanical read speed (80MB/s).
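Both numbers can be read off the same drive, which makes the distinction
clear; a minimal sketch, assuming the drive is /dev/sda:

$ hdparm -I /dev/sda | grep -i speed   # negotiated SATA link speed (the 1.5/3.0 Gb/s figure)
$ hdparm -t /dev/sda                   # sustained read from the platters (the ~80 MB/s figure)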
On Tue, Jan 22, 2008 at 05:34:14AM -0600, Moshe Yudkowsky wrote:
Carlos Carvalho wrote:
I use reiser3 and xfs. reiser3 is very good with many small files. A
simple test shows interactively perceptible results: removing large
files is faster with xfs, removing large directories (ex. the kernel
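The kind of interactive test being described could look roughly like the
following, run once per filesystem (file and directory names are only
examples):

$ time rm big-file.iso          # one large file
$ time rm -rf linux-2.6.24/     # a large directory tree with many small files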
On Sun, Jan 20, 2008 at 02:24:46PM -0600, Moshe Yudkowsky wrote:
Question: with the same number of physical drives, do I get better
performance with one large md-based drive, or do I get better
performance if I have several smaller md-based drives?
No expert here, but my opinion:
- md
On Wed, Dec 19, 2007 at 01:18:21PM -0500, Jon Sabo wrote:
So I was trying to copy over some Indiana Jones wav files and it
wasn't going my way. I noticed that my software raid device showed:
/dev/md1 on / type ext3 (rw,errors=remount-ro)
Is this saying that it was remounted, read only
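Note that errors=remount-ro in the mount options only says what ext3 is
configured to do when it hits an error, not that it has happened. Whether the
filesystem is currently read-only, and why, can be checked with something
like:

$ grep ' / ' /proc/mounts          # 'ro' vs 'rw' shows the current state of the root fs
$ dmesg | grep -i 'EXT3-fs error'  # any logged ext3 errors that would have triggered the remount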
On Sat, Oct 20, 2007 at 10:52:39AM -0400, John Stoffel wrote:
Michael> Well, I strongly, completely disagree. You described a
Michael> real-world situation, and that's unfortunate, BUT: for at
Michael> least raid1, there ARE cases, pretty valid ones, when one
Michael> NEEDS to mount the
On Fri, Oct 19, 2007 at 02:39:47PM -0400, John Stoffel wrote:
And if putting the superblock at the end is problematic, why is it the
default? Shouldn't version 1.1 be the default?
In my opinion, having the superblock *only* at the end (e.g. the 0.90
format) is the best option.
It allows one
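For reference, the superblock format is chosen when the array is created,
via the --metadata option; the two variants being discussed would be created
like this (device names illustrative):

$ mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ mdadm --create /dev/md0 --metadata=1.1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1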
On Mon, Sep 10, 2007 at 06:51:14PM +0200, Iustin Pop wrote:
The 'degraded' attribute is useful to quickly determine if the array is
degraded, instead of parsing 'mdadm -D' output or relying on the other
techniques (number of working devices against number of defined devices,
etc.).
The md
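With such an attribute, a monitoring script becomes a single file read
instead of output parsing; a minimal sketch (/dev/md0 illustrative, sysfs
path as introduced by this patch):

$ cat /sys/block/md0/md/degraded        # 0 when the array is fully redundant
$ mdadm -D /dev/md0 | grep 'State :'    # the current alternative: parse mdadm's output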
On Sat, Sep 22, 2007 at 10:28:44AM -0700, Mr. James W. Laferriere wrote:
Hello Bill, all,
Bill Davidsen [EMAIL PROTECTED], on Sat, 22 Sep 2007 09:41:40 -0400, wrote:
My only advice is to try and quantify the data volume and look at nbd
vs. iSCSI to provide the mirror if you go that way.
On Sat, Sep 15, 2007 at 12:28:07AM -0500, Jordan Russell wrote:
(Kernel: 2.6.18, x86_64)
Is it normal for an MD RAID1 partition with 1 active disk to perform
differently from a non-RAID partition?
md0 : active raid1 sda2[0]
8193024 blocks [2/1] [U_]
I'm building a search engine
On Sat, Sep 15, 2007 at 02:18:19PM +0200, Goswin von Brederlow wrote:
Shouldn't it be the other way around? With a barrier the filesystem
can enforce an order on the data written and can then continue writing
data to the cache. More data is queued up for write. Without barriers
the filesystem
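For what it's worth, whether the filesystem issues barriers at all is a
mount option on ext3 (off by default there, as far as I know), so the two
behaviours can be compared directly:

$ mount -o remount,barrier=1 /dev/md1 /   # enable barriers on the root fs
$ mount -o remount,barrier=0 /dev/md1 /   # disable them again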
On Sun, Sep 09, 2007 at 09:31:54PM -1000, J. David Beutel wrote:
[EMAIL PROTECTED] ~]# mdadm --grow /dev/md5 -n2
mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
mdadm - v1.6.0 - 4 June 2004
Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
Signed-off-by: Iustin Pop [EMAIL PROTECTED]
---
Note: I sent this back in January and people agreed it was a good
idea. However, it has not been picked up, so here I resend it again.
Patch is against 2.6.23-rc5
Thanks,
Iustin Pop
drivers/md/md.c |    7 +++++++
1 files changed, 7 insertions(+), 0 deletions(-)
There are many questions on the mailing list about the RAID1 read
performance profile. This patch adds a new paragraph to the RAID1
section in md.4 that details what kind of speed-up one should expect
from RAID1.
Signed-off-by: Iustin Pop [EMAIL PROTECTED]
---
this patch is against the git tree
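As I understand the read-balancing behaviour being documented, a single
sequential reader gets roughly one disk's throughput, while concurrent
readers can be spread across the mirrors. A rough illustration (device name
and offsets are only examples):

$ dd if=/dev/md0 of=/dev/null bs=1M count=1024              # single stream
$ dd if=/dev/md0 of=/dev/null bs=1M count=1024 &            # two concurrent streams,
$ dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=4096 &  # reading different regions
$ wait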
On Wed, Aug 29, 2007 at 08:25:59PM -0700, chee wrote:
Hi,
This is my Filesystem:
Filesystem Size Used Avail Use% Mounted on
/dev/md0 9.7G 6.6G 2.7G 72% /
none 189M 0 189M 0% /dev/shm
/dev/md2 103G 98G 289M 100% /home
and these are the mirror settings:
Personalities : [raid1]
md1 :
On Sun, Aug 12, 2007 at 07:03:44PM +0200, Jan Engelhardt wrote:
On Aug 12 2007 09:39, [EMAIL PROTECTED] wrote:
now, I am not an expert on either option, but there are a couple of things
that I would question about the DRBD+MD option
1. when the remote machine is down, how does MD deal
On Wed, Jun 06, 2007 at 01:31:44PM +0200, Peter Rabbitson wrote:
Peter Rabbitson wrote:
Hi,
Is there a way to list the _number_ in addition to the name of a
problematic component? The kernel trend to move all block devices into
the sdX namespace combined with the dynamic name allocation
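The number is already part of mdadm's verbose output, even if it has to be
dug out by hand; for example (array name illustrative):

$ mdadm --detail /dev/md0   # the table at the end lists Number, Major, Minor,
                            # RaidDevice and State for each component device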
On Wed, Jun 06, 2007 at 02:23:31PM +0200, Peter Rabbitson wrote:
Iustin Pop wrote:
On Wed, Jun 06, 2007 at 01:31:44PM +0200, Peter Rabbitson wrote:
Peter Rabbitson wrote:
Hi,
Is there a way to list the _number_ in addition to the name of a
problematic component? The kernel trend to move
On Thu, Apr 12, 2007 at 02:57:57PM +0200, Brice Figureau wrote:
Now, I don't know why all the UUIDs are equal (my other machines are not
affected).
I think at some point, either in sarge or in testing between sarge and
etch, a version of mdadm was included which had this bug (all
arrays had
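Whether the superblocks on disk really carry identical UUIDs is easy to
confirm (device names illustrative):

$ mdadm --detail /dev/md0 | grep UUID     # UUID of the assembled array
$ mdadm --examine /dev/sda1 | grep UUID   # UUID stored in a component's superblock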
On Mon, Apr 09, 2007 at 06:53:26AM -0400, Justin Piszcz wrote:
Using 2 threads made no difference either.
It was not until I did 3 simultaneous copies that I saw 110-130MB/s
through vmstat 1; until then, it only used one drive, even with two cp's.
How come it needs to be three or more?
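To see which member disks are actually busy while the copies run, a
per-device view is more telling than the aggregate numbers; for example:

$ vmstat 1      # aggregate throughput, as used above
$ iostat -x 1   # per-device throughput and utilisation (from the sysstat package)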
On Wed, Apr 04, 2007 at 07:11:50PM -0400, Bill Davidsen wrote:
You are correct, but I think if an optimization were to be done, some
balance between the read time, seek time, and read size could be struck.
Using more than one drive only makes sense when the read transfer time is
significantly
On Thu, Apr 05, 2007 at 04:11:35AM -0400, Justin Piszcz wrote:
On Thu, 5 Apr 2007, Iustin Pop wrote:
On Wed, Apr 04, 2007 at 07:11:50PM -0400, Bill Davidsen wrote:
You are correct, but I think if an optimization were to be done, some
balance between the read time, seek time, and read
On Sun, Mar 04, 2007 at 04:47:19AM -0800, Dan wrote:
Just about the only stat in these tests that shows a marked improvement between
one and two drives is Random Seeks (which makes sense). What doesn't make sense
is that none of the Sequential Input numbers increase. Shouldn't I be seeing
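"Sequential Input" and "Random Seeks" are bonnie++ column names, so
presumably the tests were something along these lines (mount point and size
are only examples; the size should be well above RAM to defeat caching):

$ bonnie++ -d /mnt/test -s 4g -u nobody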
On Sat, Jan 27, 2007 at 02:59:48AM +0100, Iustin Pop wrote:
From: Iustin Pop [EMAIL PROTECTED]
This patch exposes the uuid and the degraded status of an assembled
array through sysfs.
[...]
Sorry to ask, but this was my first patch and I'm not sure what the
procedure is to get it considered
On Sun, Feb 11, 2007 at 08:15:31AM +1100, Neil Brown wrote:
Resending after a suitable pause (1-2 weeks) is never a bad idea.
Ok, noted, thanks.
Exposing the UUID isn't - and if it were it should be in
md_default_attrs rather than md_redundancy_attrs.
The UUID isn't an intrinsic aspect of
From: Iustin Pop [EMAIL PROTECTED]
This patch exposes the uuid and the degraded status of an assembled
array through sysfs.
The uuid is useful in the case when multiple arrays exist on a system
and userspace needs to identify them; currently, the only portable way
that I know of is using 'mdadm
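For comparison, the current approach versus what the patch would allow
(md0 illustrative; the sysfs file name is the one proposed here, not an
existing kernel interface):

$ mdadm --detail /dev/md0 | grep UUID   # today: shell out to mdadm and parse
$ cat /sys/block/md0/md/uuid            # with this patch applied: a plain file read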