?
Regards,
JKB
On Fri, 2007-10-19 at 16:47 +0200, BERTRAND Joël wrote:
Ming Zhang wrote:
As Ross pointed out, many IO patterns only have 1 outstanding IO at any
time, so there is only one worker thread actively serving it. So it cannot
exploit the multiple cores here.
You see 100% at nullio numbers.
Regards,
JKB
On Fri, 2007-10-19 at 16:30 +0200, BERTRAND Joël wrote:
Ming Zhang wrote:
On Fri, 2007-10-19 at 09:48 +0200, BERTRAND Joël wrote:
Ross S. W. Walker wrote:
BERTRAND Joël wrote:
BERTRAND Joël wrote:
I can format (mkfs.ext3) a 1.5 TB volume several times over iSCSI
without any
On Thu, 2007-10-18 at 18:33 +0200, BERTRAND Joël wrote:
Ming Zhang wrote:
On Thu, 2007-10-18 at 11:33 -0400, Ross S. W. Walker wrote:
BERTRAND Joël wrote:
BERTRAND Joël wrote:
BERTRAND Joël wrote:
Hello,
When I try to create a raid1 volume over iscsi, process
Hi Dean
Thanks a lot for sharing this.
I don't quite understand these 2 commands. Why do we want to add a
pre-failing disk back to md4?
mdadm --zero-superblock /dev/sde1
mdadm /dev/md4 -a /dev/sde1
Ming
On Sun, 2006-04-23 at 18:40 -0700, dean gaudet wrote:
i had a disk in a raid5 which
for the experience of it.
-dean
On Thu, 22 Jun 2006, Ming Zhang wrote:
Hi Dean
Thanks a lot for sharing this.
I don't quite understand these 2 commands. Why do we want to add a
pre-failing disk back to md4?
mdadm --zero-superblock /dev/sde1
mdadm /dev/md4 -a /dev/sde1
Ming
Hi All
Read this
http://www.mail-archive.com/linux-raid@vger.kernel.org/msg01725.html and
wonder if by any chance the latest mdadm has this implemented now? Thanks!
Ming
Hi all
There are 2 small typos in md.4
Signed-off-by: Ming Zhang [EMAIL PROTECTED]
 md.4 | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- md.4.old	2006-06-19 16:35:46.0 -0400
+++ md.4	2006-06-19 16:36:30.0 -0400
@@ -100,8 +100,8 @@
.TP
RAID1
In some
On Wed, 2006-05-24 at 09:21 +0100, Gordon Henderson wrote:
I know this has come up before, but a few quick googles haven't answered my
questions - I'm after the max. array size that can be created under
bog-standard 32-bit intel Linux, and any issues re. partitioning.
I'm aiming to create a
On Fri, 2006-04-28 at 08:04 +1000, Neil Brown wrote:
On Tuesday April 25, [EMAIL PROTECTED] wrote:
Hi,
I have a setup where I want to use RAID 1 to mirror
my data over the LAN. I am exposing 2x500 GB-sized
devices via iSCSI and was wondering if I could
configure the RAID 1 to write
On Thu, 2006-04-20 at 17:22 +0200, Gabor Gombas wrote:
On Wed, Apr 19, 2006 at 02:16:10PM -0400, Ming Zhang wrote:
is this possible?
* stop RAID5
* set a mirror between the current disk X and a newly added disk Y, with X as
the primary one (which means copy X to Y to get a full sync, and before
On Wed, 2006-04-19 at 18:31 +0200, Shai wrote:
On 4/19/06, Dexter Filmore [EMAIL PROTECTED] wrote:
Let's say a disk in an array starts yielding smart errors but is still
functional.
So instead of waiting for it to fail completely and start a sync and stress
the other disks, could I clone
On Wed, 2006-04-19 at 10:41 -0700, Brendan Conoboy wrote:
Ming Zhang wrote:
Why can't you just mark that drive as failed, remove it and hotadd a
new drive to replace the failed drive?
because background rebuild is slower than disk to disk copy, since his
disk is still fully functional
On Tue, 2006-04-11 at 22:07 -0400, Guy wrote:
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Ming Zhang
} Sent: Tuesday, April 11, 2006 6:13 PM
} To: Andy Smith
} Cc: linux-raid@vger.kernel.org
} Subject: Re: mdadm + raid1 of 2
On Tue, 2006-04-11 at 20:32 +, Andy Smith wrote:
On Tue, Apr 11, 2006 at 07:25:58PM +0200, Laurent CARON wrote:
Andy Smith wrote:
On Tue, Apr 11, 2006 at 04:41:30PM +0200, Shai wrote:
I have two SCSI disks on raid1.
Since I have lots of reads from that raid, I want to add two more
* Please add a text wrap at ~80 columns in your email client.
* Once you have built the whole kernel once and you then only change md.c, you can
redo the kernel build and make will automatically rebuild only md.c.
Then you install the new kernel and check your changes.
Ming
On Mon, 2006-03-27 at 16:19 +0800, Zhikun Wang
to rebuild the array.
The short answer is your way will work, but it is not necessary.
-Paul
On 3/10/06, Ming Zhang [EMAIL PROTECTED] wrote:
Hi folks
I have a raid5 array that contains 4 disks and 1 spare disk. Now I see one
disk showing signs of impending failure in the SMART log.
So I am trying to do
On Sat, 2006-03-11 at 08:55 -0800, Mike Hardy wrote:
I can think of two things I'd do slightly differently...
Do a smartctl -t long on each disk before you do anything, to verify
that you don't have single sector errors on other drives
will this test interfere with normal disk io activity?
thanks a lot!
ming
On Sat, 2006-03-11 at 15:08 -0800, Mike Hardy wrote:
Ming Zhang wrote:
On Sat, 2006-03-11 at 08:55 -0800, Mike Hardy wrote:
I can think of two things I'd do slightly differently...
Do a smartctl -t long on each disk before you do anything, to verify
that you
On Sat, 2006-03-11 at 16:15 -0800, dean gaudet wrote:
On Sat, 11 Mar 2006, Ming Zhang wrote:
On Sat, 2006-03-11 at 06:53 -0500, Paul M. wrote:
Since it's raid5 you would be fine just pulling the disk out and
letting the raid driver rebuild the array. If you have a hot spare
yes
On Sat, 2006-03-11 at 16:47 -0800, dean gaudet wrote:
On Sat, 11 Mar 2006, Ming Zhang wrote:
On Sat, 2006-03-11 at 16:31 -0800, dean gaudet wrote:
if you fail the disk from the array, or boot without the failing disk,
then the event counter in the other superblocks will be updated
I see, thanks!
So it seems that we still have to use set_bit in a for loop to set
a particular area, right?
Ming
On Thu, 2005-11-10 at 09:29 +1100, Neil Brown wrote:
On Wednesday November 9, [EMAIL PROTECTED] wrote:
could anybody help me on this? thanks!
see if we call bitmap_zero(dst,
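A minimal sketch of the point above, in kernel-style C; the bitmap size and bit
range are made up for illustration, not taken from the thread. bitmap_zero()
operates on the whole bitmap, so setting only a sub-range still takes a set_bit()
loop.

/* hypothetical kernel-style snippet; 1024 bits and the 100..199 range
 * are illustrative only */
DECLARE_BITMAP(bits, 1024);
int i;

bitmap_zero(bits, 1024);        /* clears all 1024 bits at once */
for (i = 100; i < 200; i++)     /* a particular sub-range still needs */
        set_bit(i, bits);       /* set_bit() one bit at a time        */

Later kernels grew bitmap_set()/bitmap_clear() range helpers, but at the time of
this thread a loop over set_bit() was the way to do it.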
Hi folks
I have a raid0 on top of 2 SATA disks, sda and sdb. After I hot-unplug
sda, the raid0 still shows online and active. Running dd to write to it
fails and dmesg shows SCSI IO errors, but /proc/mdstat shows everything is
OK.
Checked 2.4.27 and 2.6.11.2; both show the same problem.
mdadm is 1.8
On Fri, 2005-09-02 at 11:09 -0700, Brad Dameron wrote:
On Thu, 2005-09-01 at 13:50 -0400, berk walker wrote:
I guess if we were all wholesalers with a nice long lead time, that
would be great, Brad. But where, and for how much might one purchase these?
b-
join the party. ;)
8 400GB SATA disks on the same Marvell 8-port PCI-X 133 card. P4 CPU.
Supermicro SCT board.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10] [faulty]
md0 : active raid0 sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
Seems I need to change the mail thread to
"why is my RAID0 write speed so slow!" :P
I also use a Marvell 8-port PCI-X card. 8 SATA disk RAID0; each single
disk can give me around 55MB/s, but the RAID0 can only give me 203MB/s.
I tried different IO schedulers; all lead to the same write speed on my side.
usage of
disks).
Regards,
Mirko
Ming Zhang wrote:
I would like to suggest you do a 4+1 raid5 configuration and see what
happens.
Ming
On Fri, 2005-08-26 at 09:51 +0200, Mirko Benz wrote:
Hello,
We have created a RAID 0 for the same environment:
Personalities : [raid0
On Thu, 2005-08-25 at 18:38 +0200, Mirko Benz wrote:
Hello,
We intend to export a lvm/md volume via iSCSI or SRP using InfiniBand to
remote clients. There is no local file system processing on the storage
platform. The clients may have a variety of file systems including ext3,
GFS.
On Wed, 2005-08-24 at 10:24 +0200, Mirko Benz wrote:
Hello,
We have recently tested Linux 2.6.12 SW RAID versus HW Raid. For SW Raid
we used Linux 2.6.12 with 8 Seagate SATA NCQ disks no spare on a Dual
Xeon platform. For HW Raid we used an Arc-1120 SATA Raid controller and a
Fibre
the request size to be larger than a stripe to take advantage
of stripe write?
Regards,
Mirko
Ming Zhang wrote:
On Wed, 2005-08-24 at 10:24 +0200, Mirko Benz wrote:
Hello,
We have recently tested Linux 2.6.12 SW RAID versus HW Raid. For SW Raid
we used Linux 2.6.12 with 8 Seagate
1048576 = 1024 * 1024 = 32 * 32768. :)
so it should be 32 stripe writes.
ming
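An aside on the arithmetic in this exchange: the thread mixes two similar-looking
numbers, 1048576 and 1048756. A standalone check (hypothetical, not part of the
original thread):

#include <stdio.h>

int main(void)
{
	/* 1048576 = 32 * 32768 exactly; 1048756 is 180 bytes past that */
	printf("1048576 %% 32768 = %d\n", 1048576 % 32768);  /* prints 0   */
	printf("1048756 %% 32768 = %d\n", 1048756 % 32768);  /* prints 180 */
	return 0;
}

So 1048576 is exactly 32 full 32KB chunks, while 1048756 is not a multiple of
32768 at all.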
On Fri, 2005-07-22 at 23:14 -0700, Tyler wrote:
By my calculations, 1048756 is *not* a multiple of 32768 (32
Kilobytes). Did I miscalculate?
Regards,
Tyler.
Ming Zhang wrote:
i created a 32KB chunk size
/dev/sda
raid-disk 0
device /dev/sdb
raid-disk 1
device /dev/sdc
raid-disk 2
Regards,
Tyler.
Ming Zhang wrote:
i created a 32KB chunk size 3 disk raid5. then write this disk with a
small
I created a 32KB chunk size 3-disk raid5, then wrote to this disk with a
small program I wrote. I found that even when I write to it in units of 1048756,
which is a multiple of the stripe size, it still shows a lot of reads
in iostat.
Any idea? Thanks!
I attached the code for reference.
[EMAIL
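The attached code is not reproduced in this archive. A hypothetical sketch of the
kind of test described above, assuming a 3-disk raid5 with 32KB chunks (so a 64KB
full data stripe) exposed as /dev/md0; the device name, write count, and buffer
handling are illustrative only:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* 1MB per write: a multiple of the 64KB full data stripe (2 data
	 * chunks x 32KB), so aligned full-stripe writes should need no
	 * parity reads */
	size_t sz = 1048576;
	void *buf;
	int i, fd;

	fd = open("/dev/md0", O_WRONLY | O_DIRECT);   /* bypass the page cache */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, 4096, sz)) {         /* O_DIRECT wants an aligned buffer */
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0, sz);
	for (i = 0; i < 64; i++) {                    /* write 64MB total */
		if (write(fd, buf, sz) != (ssize_t)sz) {
			perror("write");
			return 1;
		}
	}
	close(fd);
	free(buf);
	return 0;
}

Watching iostat while something like this runs should show essentially no reads if
the writes really land as aligned full stripes; partial-stripe writes trigger
read-modify-write and show up as reads.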
In my previous test using SATA, I got better results with 2.6 than with
2.4. :P
Ming
On Fri, 2005-07-15 at 06:01 +, Holger Kiehl wrote:
I'm trying to figure out why the last two numbers differ.
Have you checked what the performance with a 2.4.x kernel is? If I
remember correctly there
On Wed, 2005-07-13 at 23:58 -0400, Dan Christensen wrote:
David Greaves [EMAIL PROTECTED] writes:
In my setup I get
component partitions, e.g. /dev/sda7: 39MB/s
raid device /dev/md2: 31MB/s
lvm device /dev/main/media: 53MB/s
(oldish system - but note that
My problem here: this only applies to sdX, not mdX. Please ignore this.
ming
On Thu, 2005-07-14 at 08:30 -0400, Ming Zhang wrote:
Also, is there a way to disable caching of reads? Having to clear
the cache by reading 900M each time slows down testing. I guess
I could reboot with mem=100M
On Tue, 2005-07-12 at 22:52 -0400, Dan Christensen wrote:
Ming Zhang [EMAIL PROTECTED] writes:
On Mon, 2005-07-11 at 11:11 -0400, Dan Christensen wrote:
I was wondering what I should expect in terms of streaming read
performance when using (software) RAID-5 with four SATA drives. I
On Wed, 2005-07-13 at 08:48 -0400, Dan Christensen wrote:
Ming Zhang [EMAIL PROTECTED] writes:
Have you tried the parallel write?
I haven't tested it as thoroughly, as it brings lvm and the filesystem
into the mix. (The disks are in production use, and are fairly
full, so I can't do writes
On Wed, 2005-07-13 at 10:23 -0400, Dan Christensen wrote:
Ming Zhang [EMAIL PROTECTED] writes:
Testing on a production environment is too dangerous. :P
And many benchmark tools you cannot run, either.
Well, I put production in quotes because this is just a home mythtv
box. :-) So
On Wed, 2005-07-13 at 19:02 +0100, David Greaves wrote:
Dan Christensen wrote:
Ming Zhang [EMAIL PROTECTED] writes:
Testing on a production environment is too dangerous. :P
And many benchmark tools you cannot run, either.
Well, I put production in quotes because
On Wed, 2005-07-13 at 22:18 +0100, David Greaves wrote:
Ming Zhang wrote:
component partitions, e.g. /dev/sda7: 39MB/s
raid device /dev/md2: 31MB/s
lvm device /dev/main/media: 53MB/s
(oldish system - but note that lvm device is *much* faster
On Wed, 2005-07-13 at 22:50 +0100, David Greaves wrote:
Ming Zhang wrote:
On Wed, 2005-07-13 at 22:18 +0100, David Greaves wrote:
Ming Zhang wrote:
component partitions, e.g. /dev/sda7: 39MB/s
raid device /dev/md2: 31MB/s
lvm device /dev/main/media
On Thu, 2005-07-14 at 11:16 +1000, Neil Brown wrote:
On Wednesday July 13, [EMAIL PROTECTED] wrote:
On Thu, 2005-07-14 at 08:38 +1000, Neil Brown wrote:
On Wednesday July 13, [EMAIL PROTECTED] wrote:
Here's a question for people running software raid-5: do you get
significantly
On Mon, 2005-07-11 at 11:11 -0400, Dan Christensen wrote:
I was wondering what I should expect in terms of streaming read
performance when using (software) RAID-5 with four SATA drives. I
thought I would get a noticeable improvement compared to reads from a
single device, but that's not the
Thanks, that is a workaround as well. :P
I already solved this by using mkraid.
Ming
On Wed, 2005-07-06 at 19:45 +0400, Michael Tokarev wrote:
Ming Zhang wrote:
Hi folks
I am testing some HW performance with raid5 on a 2.4.x kernel.
It is really troublesome every time I create
On Tue, 2005-07-05 at 13:44 -0400, Ming Zhang wrote:
Thx.
So it looks like mkraid might do this.
I tried mkraid and it works.
Thanks a lot!
Ming
With this hint, I googled the --dangerous-no-resync option and tried to do this
# mdadm --create /dev/md1 --level=5 --assume-clean --raid-devices
On Mon, 2005-07-04 at 20:13 -0400, Mark Hahn wrote:
# ./lspci -vv -d 11ab:
02:01.0 Class 0100: 11ab:5081 (rev 03)
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop-
ParErr- Stepping- SERR- FastB2B-
Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium TAbort-