Re: mismatch_cnt != 0

2008-02-24 Thread Janek Kozicki
Justin Piszcz said: (by the date of Sun, 24 Feb 2008 04:26:39 -0500 (EST))

 Kernel 2.6.24.2 I've seen it on different occasions; this last time
 though it may have been due to a power outage that lasted more than 2 hours,
 and obviously the UPS did not hold up that long.

You should connect the UPS through RS-232 or USB, and if a power-down
event is detected, issue a hibernate or shutdown. Currently I am
issuing hibernate in this case; it works pretty well for 2.6.22 and up.
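
Something along these lines, for example - an untested sketch assuming
NUT (Network UPS Tools) with the UPS defined as "myups" (apcupsd has
equivalent hooks of its own):

  #!/bin/sh
  # poll the UPS every 30 seconds; hibernate as soon as it reports
  # "OB" (on battery), i.e. mains power is gone
  while sleep 30; do
      STATUS=$(upsc myups ups.status 2>/dev/null)
      case "$STATUS" in
          OB*)
              logger "UPS on battery - hibernating"
              echo disk > /sys/power/state
              ;;
      esac
  done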

-- 
Janek Kozicki |


Re: RAID5 to RAID6 reshape?

2008-02-18 Thread Janek Kozicki
Beolach said: (by the date of Mon, 18 Feb 2008 05:38:15 -0700)

 On Feb 17, 2008 10:26 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
  Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
 
   Well, I was reading that LVM2 had a 20%-50% performance penalty,
 http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID_mirror_and_LVM2_on_top_of_RAID.

Hold on. This might be related to RAID chunk positioning with respect
to LVM extent positioning. If they interfere, there may indeed be some
performance drop. Best to make sure those chunks are aligned.
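
For example (a rough sketch, not verified here - pad the LVM metadata
area so the data starts on a boundary that is a multiple of your chunk
or stripe size, then check where the first extent really begins):

  # 250k is the usual trick to end up with a 256 KiB data offset,
  # if I remember correctly; adjust to your stripe size
  pvcreate --metadatasize 250k /dev/md1
  vgcreate raidvg /dev/md1

  # verify the offset of the first physical extent
  pvs -o +pe_start --units k /dev/md1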

-- 
Janek Kozicki |


Re: RAID5 to RAID6 reshape?

2008-02-17 Thread Janek Kozicki
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)

 I'm also interested in hearing people's opinions about LVM / EVMS.

With LVM it is possible for you to have several raid5 and raid6 arrays,
e.g. 5 HDDs (raid6), 5 HDDs (raid6) and 4 HDDs (raid5). Here you would
have 14 HDDs, with five of them extra - for safety/redundancy
purposes.

LVM allows you to join several block devices and create one huge
partition on top of them. Without LVM you would end up with raid6 on
14 HDDs, thus having only 2 drives used for redundancy. Quite risky
IMHO.

It happens quite often that a *whole* IO controller dies and takes all 4
drives with it. So when you connect your drives, always make sure
that you stay safe if any of your IO controllers dies (taking
down 4 HDDs with it). With 5 redundant discs this may be possible to
arrange. Of course when you replace the controller the discs come up
again, and only need to resync (which is done automatically).

LVM can be grown on-line (without rebooting the computer) to join
new block devices. After that you only run `resize2fs /dev/...` and
your partition is bigger. Also, in such a configuration I suggest you
use ext3, because no other fs (XFS, JFS, whatever) has had as much
testing as the ext* filesystems.
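
A rough sketch of what I mean (device and volume names are just
examples, not tested here):

  # join two md arrays into one volume group
  pvcreate /dev/md0 /dev/md1
  vgcreate bigvg /dev/md0 /dev/md1
  lvcreate -L 2000G -n data bigvg
  mkfs.ext3 /dev/bigvg/data

  # later, growing on-line with a third array:
  pvcreate /dev/md2
  vgextend bigvg /dev/md2
  lvextend -L +500G /dev/bigvg/data
  resize2fs /dev/bigvg/data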


Question to other people here - what is the maximum partition size
that ext3 can handle? Am I correct that it is 4 TB?

And to go above 4 TB we need to use ext4dev, right?

best regards
-- 
Janek Kozicki |


Re: RAID5 to RAID6 reshape?

2008-02-17 Thread Janek Kozicki
Beolach said: (by the date of Sat, 16 Feb 2008 20:58:07 -0700)


 Or would I be better off starting w/ 4 drives in RAID6?

Oh, right - Sevrin Robstad had a good idea to solve your problem:
create a raid6 with one missing member, and add that member when you
have it, next year or so.
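
Something like this (a sketch - device names are only an example):

  # create a 4-device raid6 with one member missing
  mdadm --create /dev/md0 --level=6 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 missing

  # next year, when the fourth drive arrives:
  mdadm --add /dev/md0 /dev/sdd1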

-- 
Janek Kozicki |


Re: RAID5 to RAID6 reshape?

2008-02-17 Thread Janek Kozicki
Mark Hahn said: (by the date of Sun, 17 Feb 2008 17:40:12 -0500 (EST))

  I'm also interested in hearing people's opinions about LVM / EVMS.
 
  With LVM it will be possible for you to have several raid5 and raid6:
  eg: 5 HHDs (raid6), 5HDDs (raid6) and 4 HDDs (raid5). Here you would
  have 14 HDDs and five of them being extra - for safety/redundancy
  purposes.
 
 that's a very high price to pay.
 
  partition on top of them. Without LVM you will end up with raid6 on
  14 HDDs thus having only 2 drives used for redundancy. Quite risky
  IMHO.
 
 your risk model is quite strange - 5/14 redundancy means that either 

Yeah, sorry. I went too far.

I haven't had an IO controller failure so far. But I've read about one
on this list, where all the data was lost.

You're right, it's better to duplicate the server with a backup copy,
so it is independent of the original one.

-- 
Janek Kozicki |


Re: RAID5 to RAID6 reshape?

2008-02-17 Thread Janek Kozicki
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)

 Well, I was reading that LVM2 had a 20%-50% performance penalty,

Huh? Make a benchmark. Do you really think that anyone would be using
it if there were any penalty bigger than 1-2%? (random access, r/w)

I have no idea what the penalty is, but I'm totally sure I didn't
notice it.

-- 
Janek Kozicki |


Re: RAID5 how change chunk size from 64 to 128, 256 ? is it possible ?

2008-02-09 Thread Janek Kozicki
Justin Piszcz said: (by the date of Sat, 9 Feb 2008 04:14:51 -0500 (EST))

 When you create the array it's --chunk or -c -- I found 256 KiB to 1024 KiB
 to be optimal.

Hello Justin,

what is your typical bonnie++ invocation, to test your configuration?
Which fields are meaningful for you from this benchmark?

Do you use anything else for benchmarks?
e.g. 'zcav /dev/sda > result' ?


I'm asking because I want to make some local benchmarks to determine
the best chunk size for my HDD setup.
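
For concreteness, the kind of invocation I have in mind (sizes and
paths are only examples, untested here):

  bonnie++ -d /mnt/raidtest -s 8g -n 0 -u root
  zcav /dev/md1 > md1.zcav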

thanks in advance
-- 
Janek Kozicki |


Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-02-06 Thread Janek Kozicki
Bill Davidsen said: (by the date of Wed, 06 Feb 2008 13:16:14 -0500)

 Janek Kozicki wrote:
  Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500 
  (EST))
  writing on raid10 is supposed to be half the speed of reading. That's
  because it must write to both mirrors.
 

 ??? Are you assuming that write to mirrored copies are done sequentially 
 rather than in parallel? Unless you have enough writes to saturate 
 something the effective speed approaches the speed of a single drive. I 
 just checked raid1 and raid5, writing 100MB with an fsync at the end. 
 raid1 leveled off at 85% of a single drive after ~30MB.

Hi,

In the above context I'm talking about raid10 (not about raid1, raid0,
raid0+1, raid1+0, raid5 or raid6).

Of course writes are done in parallel. When each chunk has two
copies, raid10 reads twice as fast as it writes.

If each chunk has three copies, then writes are 1/3 the speed of reads.
If each chunk has a number of copies equal to the number of drives, then
write speed drops to that of a single drive - 1/Nth of the read speed.

But it's all just theory. I'd like to see more benchmarks :-)

-- 
Janek Kozicki |


Re: mdadm 2.6.4 : How can I check the current status of reshaping ?

2008-02-06 Thread Janek Kozicki
Andreas-Sokov said: (by the date of Wed, 6 Feb 2008 22:15:05 +0300)

 Hello, Neil.
 
 .
  Possible you have bad memory, or a bad CPU, or you are overclocking
  the CPU, or it is getting hot, or something.
 
 It seems to me that all my problems started after I did the mdadm
 update.

What is the update?

- you installed a new version of mdadm?
- you installed a new kernel?
- something else?

- what was the version before, and what version is it now?

- can you downgrade to the previous version?


best regards
-- 
Janek Kozicki |


Re: Deleting mdadm RAID arrays

2008-02-05 Thread Janek Kozicki
Marcin Krol said: (by the date of Tue, 5 Feb 2008 11:42:19 +0100)

 2. How can I delete that damn array so it doesn't hang my server up in a loop?

dd if=/dev/zero of=/dev/sdb1 bs=1M count=10

I'm not using mdadm.conf at all. Everything is stored in the
superblock of the device. So if you don't erase it, information about
the raid array will still be found automatically.
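
Alternatively, mdadm itself can wipe just the superblock (a sketch;
stop the array first):

  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sdb1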

-- 
Janek Kozicki |


Re: Auto generation of mdadm.conf (was: Deleting mdadm RAID arrays)

2008-02-05 Thread Janek Kozicki
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 16:52:18 +0300)

 Janek Kozicki wrote:
  I'm not using mdadm.conf at all. 
 
 That's wrong, as you need at least something to identify the array
 components. 

I was afraid of that ;-) So, is this a correct way to automatically
generate a correct mdadm.conf? I did it after some digging in the man pages:

  echo 'DEVICE partitions' > mdadm.conf
  mdadm --examine --scan --config=mdadm.conf >> ./mdadm.conf

Now, when I do 'cat mdadm.conf' I get:

 DEVICE partitions
 ARRAY /dev/md/0 level=raid1 metadata=1 num-devices=3 UUID=75b0f87879:539d6cee:f22092f4:7a6e6f name='backup':0
 ARRAY /dev/md/2 level=raid1 metadata=1 num-devices=3 UUID=4fd340a6c4:db01d6f7:1e03da2d:bdd574 name=backup:2
 ARRAY /dev/md/1 level=raid5 metadata=1 num-devices=3 UUID=22f22c3599:613d5231:d407a655:bdeb84 name=backup:1

Looks quite reasonable. Should I append it to /etc/mdadm/mdadm.conf?
This file currently contains (commented lines are left out):

  DEVICE partitions
  CREATE owner=root group=disk mode=0660 auto=yes
  HOMEHOST system
  MAILADDR root

This is the default content of /etc/mdadm/mdadm.conf on a fresh Debian
etch install.

best regards
-- 
Janek Kozicki


Re: Auto generation of mdadm.conf

2008-02-05 Thread Janek Kozicki
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 18:34:47 +0300)

...

 So.. probably this is the way your arrays are being assembled, since you
 do have HOMEHOST in your mdadm.conf...  Looks like it should work, after
 all... ;)  And in this case there's no need to specify additional array
 information in the config file.

Whew, that was a long read. Thanks for the detailed analysis. I hope
that your conclusion is correct, since I have no way to verify it
myself. My knowledge is not enough here :)

best regards
-- 
Janek Kozicki |


Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-02-05 Thread Janek Kozicki
Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500 (EST))

 I remember testing with bonnie++ and raid10 was about half the speed 
 (200-265 MiB/s) as RAID5 (400-420 MiB/s) for sequential output, 

writing on raid10 is supposed to be half the speed of reading. That's
because it must write to both mirrors.

IMHO raid5 could perform well here, because in a *continuous* write
operation the blocks from the other HDDs have just been written;
they stay in cache and can be used to calculate the XOR. So you could
get close to raid-0 performance here.

Randomly scattered, small write operations will kill raid5
performance, for sure, because the corresponding blocks from a few
other drives must be read to calculate the parity correctly. I'm
wondering how far raid5 performance would go down... Is there a
bonnie++ test for that, or any other benchmark software for this?
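
For example, something like this could show it (just a sketch with
fio - the parameters are only an example and I haven't run it here):

  # small random writes scattered over a 1 GiB file
  fio --name=randwrite --directory=/mnt/raidtest --rw=randwrite \
      --bs=4k --size=1g --direct=1 --numjobs=4 --group_reporting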


 but input was closer to RAID5 speeds/did not seem affected (~550MiB/s).

reading in raid5 and raid10 is supposed to be close to raid-0 speed.

-- 
Janek Kozicki |


Re: raid10 on three discs - few questions.

2008-02-03 Thread Janek Kozicki
Neil Brown said: (by the date of Mon, 4 Feb 2008 10:11:27 +1100)

wow, thanks for quick reply :)

  3. Another thing - would raid10,far=2 work when three drives are used?
 Would it increase the read performance?
 
 Yes.

Is far=2 the most I can do to squeeze every possible MB/sec of
performance out of raid10 on three discs?

-- 
Janek Kozicki |


Re: draft howto on making raids for surviving a disk crash

2008-02-02 Thread Janek Kozicki
Keld Jørn Simonsen said: (by the date of Sat, 2 Feb 2008 20:41:31 +0100)

 This is intended for the linux raid howto. Please give comments.
 It is not fully ready /keld

Very nice. Do you intend to put it on http://linux-raid.osdl.org/ ?

As a wiki, it will be much easier for our community to fix errors and
add updates.

-- 
Janek Kozicki |


Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-31 Thread Janek Kozicki
Keld Jørn Simonsen said: (by the date of Thu, 31 Jan 2008 02:55:07 +0100)

 Given that you want maximum thruput for both reading and writing, I
 think there is only one way to go, that is raid0.
 
 All the raid10's will have double time for writing, and raid5 and raid6
 will also have double or triple writing times, given that you can do
 striped writes on the raid0. 
 
 For random and sequential writing in the normal case (no faulty disks) I would
 guess that all of the raid10's, the raid1 and raid5 are about equally fast, 
 given the
 same amount of hardware.  (raid5, raid6 a little slower given the
 unactive parity chunks).
 
 For random reading, raid0, raid1, raid10 should be equally fast, with
 raid5 a little slower, due to one of the disks virtually out of
 operation, as it is used for the XOR parity chunks. raid6 should be 
 somewhat slower due to 2 non-operationable disks. raid10,f2 may have a
 slight edge due to virtually only using half the disk giving better
 average seek time, and using the faster outer disk halves.
 
 For sequential reading, raid0 and raid10,f2 should be equally fast.
 Possibly raid10,o2 comes quite close. My guess is that raid5 then is
 next, achieving striping rates, but with the loss of one parity drive,
 and then raid1 and raid10,n2 with equal performance.
 
 In degraded mode, I guess for random read/writes the difference is not
 big between any of the raid1, raid5 and raid10 layouts, while sequential
 reads will be especially bad for raid10,f2 approaching the random read
 rate, and others will enjoy the normal speed of the above filesystem
 (ext3, reiserfs, xfs etc).


Wow! Thanks for the detailed explanations.

I was thinking that maybe raid10 on 4 drives could be faster than
raid0, but now it all makes sense to me. With 4 drives and raid10,f2
I could get extra reading speed, but not extra writing speed. Makes
a lot of sense.

Perhaps it should be added to the linux-raid wiki? (and perhaps a
FAQ there - isn't a question about speed a frequent one?)

  http://linux-raid.osdl.org/index.php/Main_Page  


 Theory, theory theory. Show me some real figures.

yes... that would be great if someone could spend some time
benchmarking all possible configurations :-)

thanks for your help!
-- 
Janek Kozicki


Re: linux raid faq

2008-01-30 Thread Janek Kozicki
David Greaves said: (by the date of Wed, 30 Jan 2008 12:46:52 +)
 
 http://linux-raid.osdl.org/index.php/Main_Page

Great idea! I believe that wikis are the best way to go.
 
 I have written to faqs.org but got no reply. I'll try again...
  If I searched on google for raid faq, the first say 5-7 items did not
  mention raid10.
 
 Until people link to and use the new wiki, Google won't find it.


Everyone who has a website - link to that wiki RIGHT NOW! Then we
will have a central place for all linux-raid documentation, findable
with Google.

There should be a link to it from vger.kernel.org, or even from
kernel.org itself. Mailing list admins - can you do it?

best regards.
-- 
Janek Kozicki |


which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-30 Thread Janek Kozicki
Hello,

Yes, I know that some levels give faster reading and slower writing, etc.

I want to talk here about typical workstation usage: compiling
stuff (like the kernel), editing openoffice docs, browsing the web,
reading email (email: I have a webdir format, and in the boost mailing
list directory I have 14000 files (posts); opening this directory takes
circa 10 seconds in sylpheed). Moreover, opening .pdf files, more
compiling of C++ stuff, etc...

I have a remote backup system configured (with rsnapshot), which does
backups two times a day. So I'm not afraid of losing all my data to a
disc failure. I want absolute speed.

Currently I have Raid-0, because I was thinking that it is the
fastest. But I also don't need twice the capacity. I could use Raid-1
as well, if it were faster.

Due to the recent discussion about Raid-10,f2 I'm getting worried that
Raid-0 is not the fastest solution, and that Raid-10,f2 is faster
instead.

So how is it really: which level gives maximum overall speed?


I would like to make a benchmark, but currently, technically, I'm not
able to. I'll be able to do it next month, and then - as a result of
this discussion - I will switch to another level and post the
benchmark results here.

How does overall performance change with the number of available drives?

Perhaps Raid-0 is best for 2 drives, while Raid-10 is best for 3, 4
or more drives?


best regards
-- 
Janek Kozicki |


Re: which raid level gives maximum overall speed? (raid-10,f2 vs. raid-0)

2008-01-30 Thread Janek Kozicki
Keld Jørn Simonsen said: (by the date of Wed, 30 Jan 2008 23:00:07 +0100)

 Teoretically, raid0 and raid10,f2 should be the same for reading, given the
 same size of the md partition, etc. For writing, raid10,f2 should be half the 
 speed of
 raid0. This should go both for sequential and random read/writes.
 But I would like to have real test numbers. 

Me too. Thanks. Are there any other raid levels that may count here?
Raid-10 with some other options?

-- 
Janek Kozicki |


Re: linux raid faq

2008-01-29 Thread Janek Kozicki
Keld Jørn Simonsen said: (by the date of Tue, 29 Jan 2008 20:17:55 +0100)

 Hmm, I read the Linux raid faq on
 http://www.faqs.org/contrib/linux-raid/x37.html

I've found some information in

/usr/share/doc/mdadm/FAQ.gz

I'm wondering why this file is not advertised anywhere
(e.g. in 'man mdadm'). Does it exist only in the Debian packages, or what?
With 'man 4 md' I've found a little sparse info about raid10, but
I still don't get it.

-- 
Janek Kozicki |


Re: raid10: unfair disk load?

2007-12-22 Thread Janek Kozicki
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)

 Janek Kozicki wrote:
  what's your kernel version? I recall that recently there have been
  some works regarding load balancing.
 
 It was in my original email:
 The kernel is 2.6.23

 Strange I missed the new raid10 development you
 mentioned (I follow linux-raid quite closely).
 What change(s) you're referring to?

Oh sorry, it was a patch for raid1, not raid10:

  http://www.spinics.net/lists/raid/msg17708.html

I'm wondering if it could be adapted for raid10 ...

Konstantin Sharlaimov said: (by the date of Sat, 03 Nov 2007
20:08:42 +1000)

 This patch adds RAID1 read balancing to device mapper. A read operation
 that is close (in terms of sectors) to a previous read or write goes to 
 the same mirror.
snip

-- 
Janek Kozicki |


Re: raid5 reshape/resync - BUGREPORT

2007-12-18 Thread Janek Kozicki
 - Message from [EMAIL PROTECTED] -
Nagilum said: (by the date of Tue, 18 Dec 2007 11:09:38 +0100)

  Ok, I've recreated the problem in form of a semiautomatic testcase.
  All necessary files (plus the old xfs_repair output) are at:
 
http://www.nagilum.de/md/
 
  After running the test.sh the created xfs filesystem on the raid
  device is broken and (at last in my case) cannot be mounted anymore.
 
  I think that you should file a bugreport

 - End message from [EMAIL PROTECTED] -
 
 Where would I file this bug report? I thought this is the place?
 I could also really use a way to fix that corruption. :(

Ouch. To be honest I subscribed here just a month ago, so I'm not
sure. But I haven't seen other bug reports here so far.

I was expecting that there would be some bugzilla?

-- 
Janek Kozicki |


Re: raid5 reshape/resync

2007-12-16 Thread Janek Kozicki
Nagilum said: (by the date of Tue, 11 Dec 2007 22:56:13 +0100)

 Ok, I've recreated the problem in form of a semiautomatic testcase.
 All necessary files (plus the old xfs_repair output) are at:
   http://www.nagilum.de/md/

 After running the test.sh the created xfs filesystem on the raid  
 device is broken and (at last in my case) cannot be mounted anymore.

I think that you should file a bug report, and include in it the
explanations you have given here. An automated test case that leads
to xfs corruption is a neat snack for bug squashers ;-)

I wonder, however, where to report this - xfs or raid? Perhaps
cross-report to both places and note in the bug report that you are
not sure on which side the bug is.

best regards
-- 
Janek Kozicki |


mailing list configuration (was: raid6 check/repair)

2007-12-03 Thread Janek Kozicki
Thiemo Nagel said: (by the date of Mon, 03 Dec 2007 20:59:21 +0100)

 Dear Michael,
 
 Michael Schmitt wrote:
  Hi folks,
 
 Probably erroneously, you have sent this mail only to me, not to the list...

I have a similar problem all the time on this list. It would be
really nice to reconfigure the mailing list server so that replies
do not go to the sender but to the mailing list.

Moreover, in sylpheed I have two reply options: reply to sender and
reply to mailing list, and both use the *sender* address!
I doubt that sylpheed is broken - it works on nearly 20 other lists,
so I conclude that the server is seriously misconfigured.

Apologies for my stance. Can anyone comment on this?

-- 
Janek Kozicki |


Re: Spontaneous rebuild

2007-12-02 Thread Janek Kozicki
 Justin Piszcz wrote:
 
  Naturally, when it is reset, the device is disconnected and then
  re-appears, when MD see's this it rebuilds the array.

The least you can do is add an internal bitmap to your array; this
will make the rebuilds faster :-/
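
A sketch of how to add one to an existing array (assuming a reasonably
recent mdadm and kernel):

  mdadm --grow /dev/md0 --bitmap=internal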

-- 
Janek Kozicki |


Re: Kernel 2.6.23.9 / P35 Chipset + WD 750GB Drives (reset port)

2007-12-01 Thread Janek Kozicki
Justin Piszcz said: (by the date of Sat, 1 Dec 2007 07:23:41 -0500 (EST))

  dd if=/dev/zero of=/dev/sdc

 The purpose is with any new disk its good to write to all the blocks and 
 let the drive to all of the re-mapping before you put 'real' data on it. 
 Let it crap out or fail before I put my data on it.

Better to use badblocks. It writes the data, then reads it back
afterwards. In this example the data is semi-random (quicker than
/dev/urandom ;)

badblocks -c 10240 -s -w -t random -v /dev/sdc

-- 
Janek Kozicki |


Re: telling mdadm to use spare drive.

2007-11-09 Thread Janek Kozicki
Richard Scobie said: (by the date of Fri, 09 Nov 2007 10:32:08 +1300)

 This was the bug I was thinking of:
 
 http://marc.info/?l=linux-raidm=116003247912732w=2

That bug report says it only happens with mdadm 1.x:

   If a drive is added to a raid1 using older tools
(mdadm-1.x or raidtools) then it will be included
in the array without any resync happening.

But I have here:

# mdadm --version
mdadm - v2.5.6 - 9 November 2006

maybe I stumbled on another bug?

-- 
Janek Kozicki |


Re: telling mdadm to use spare drive.

2007-11-08 Thread Janek Kozicki
Richard Scobie said: (by the date of Thu, 08 Nov 2007 08:13:19 +1300)

 What kernel and RAID level is this?
 
 If it's RAID 1, I seem to recall there was a relatively recently fixed 
 bug for this.

Debian etch, stock install:
Linux 2.6.18-5-k7 #1 SMP i686 GNU/Linux

The problem was with RAID 5.

But I also have RAID 1 there, and after --add those drives resynced
automatically.

-- 
Janek Kozicki |


Re: telling mdadm to use spare drive.

2007-11-07 Thread Janek Kozicki
Goswin von Brederlow said: (by the date of Wed, 07 Nov 2007 10:17:51 +0100)

 Strange. That is exactly how I always do it and it always just worked.
 mdadm should start syncing on any spare as soon as a disk fails or you
 add the spare to a degraded array afaik. No special start now
 interaction needed.

Thanks for your confirmation. I cannot explain this behaviour - I
just started using mdadm. If anybody here wants, I can remove the
drive and add it again, to see if I can reproduce this bug (?).
If so, tell me what debug information you need and I will
give it to you.

Anyway, it seems that this command

  mdadm --assemble --update=resync /dev/md1 /dev/hda3 /dev/sda3 /dev/hdc3

worked, because `mdadm -D /dev/md1` says that the array is in
State : active (not degraded).
best regards
-- 
Janek Kozicki |


man mdadm - suggested correction.

2007-11-05 Thread Janek Kozicki
Hello, 

I did read 'man mdadm' from top to bottom, but I totally forgot to
look into /usr/share/doc/mdadm !

And there is much more - FAQs, recipes, etc!

Can you please add to the manual under 'SEE ALSO' a reference
to /usr/share/doc/mdadm ?

thanks :-)
-- 
Janek Kozicki |


Re: man mdadm - suggested correction.

2007-11-05 Thread Janek Kozicki
Janek Kozicki said: (by the date of Mon, 5 Nov 2007 11:58:15 +0100)

 I did read 'man mdadm' from top to bottom, but I totally forgot to
 look into /usr/share/doc/mdadm !

PS: this is why I asked so many questions on this list ;-)

-- 
Janek Kozicki |


telling mdadm to use spare drive.

2007-11-04 Thread Janek Kozicki
Hi,

I finished copying all the data from the old disc hdc to my shiny new
RAID5 array (/dev/hda3 /dev/sda3 missing). The next step is to create
a partition on hdc and add it to the array. And so I did this:

# mdadm --add /dev/md1 /dev/hdc3

But then I had a problem - /dev/hdc3 became a spare and didn't
resync automatically:

# mdadm -D /dev/md1
[]
Number   Major   Minor   RaidDevice   State
   0       3       3       0          active sync   /dev/hda3
   1       8       3       1          active sync   /dev/sda3
   2       0       0       2          removed

   3      22       3       -          spare         /dev/hdc3


I wanted to tell mdadm to use the spare device, and I wasn't sure how
to do this, so I tried following:

# mdadm --stop /dev/md1
# mdadm --assemble --update=resync /dev/md1 /dev/hda3 /dev/sda3 /dev/hdc3

Now, 'mdadm -D /dev/md1' says:
[...]
Number   Major   Minor   RaidDevice   State
   0       3       3       0          active sync        /dev/hda3
   1       8       3       1          active sync        /dev/sda3
   3      22       3       2          spare rebuilding   /dev/hdc3


I'm writing here just because I want to be sure that I added this new
device correctly, I don't want to make any stupid mistake here...

# cat /proc/mdstat

md1 : active raid5 hda3[0] hdc3[3] sda3[1]
  966807296 blocks super 1.1 level 5, 128k chunk, algorithm 2 [3/2] [UU_]
  [=>...................]  recovery =  6.2% (30068096/483403648) finish=254.9min speed=29639K/sec
  bitmap: 8/8 pages [32KB], 32768KB chunk

Was there a better way to do this, is it OK?

-- 
Janek Kozicki |


does mdadm try to use the fastest HDD ?

2007-11-02 Thread Janek Kozicki
Hello,

My three HDDs have the following speeds:

  hda - speed 70 MB/sec
  hdc - speed 27 MB/sec
  sda - speed 60 MB/sec

They make up a raid1 (/dev/md0) and a raid5 (/dev/md1) array. I wanted
to ask whether md tries to pick the fastest HDD during operation?

Maybe I can tell it which HDD is preferred?

This came to my mind when I saw this:

  # mdadm --query --detail /dev/md1 | grep Prefer
 
  Preferred Minor : 1

And also in the manual:

  -W, --write-mostly [...] can be useful if mirroring over a slow link.
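
For reference, this is how I understand --write-mostly would be applied
(a sketch only - device names are examples and I have not tried it on
my setup); reads should then prefer the members not marked write-mostly:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/hda1 --write-mostly /dev/hdc1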


many thanks for all your help!
-- 
Janek Kozicki |


stride / stripe alignment on LVM ?

2007-11-01 Thread Janek Kozicki
Hello,

I have a raid5 /dev/md1, --chunk=128 --metadata=1.1. On it I have
created an LVM volume group called 'raid5', and finally a logical
volume 'backup'.

Then I formatted it with the command:

   mkfs.ext3 -b 4096 -E stride=32,resize=550292480 /dev/raid5/backup

And because LVM puts its own metadata at the start of /dev/md1, the
ext3 partition is shifted by some (unknown to me) number of bytes from
the beginning of /dev/md1.

I was wondering how big the shift is, and whether it would hurt
performance/safety if the ext3 stride=32 didn't align perfectly
with the physical stripes on the HDDs.
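
One way to see how big the shift actually is (a sketch, assuming a
reasonably recent LVM2):

  # pe_start is the offset of the first physical extent on the PV
  pvs -o +pe_start --units k /dev/md1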

PS: the resize option is to make sure that I can grow this fs
in the future.

PPS: I looked in the archives but didn't find this question asked
before. I'm sorry if it really was asked.

-- 
Janek Kozicki |


Re: switching root fs '/' to boot from RAID1 with grub

2007-11-01 Thread Janek Kozicki
Doug Ledford said: (by the date of Thu, 01 Nov 2007 14:30:58 -0400)

 So, what I said is true, the MBR will search on the disk it is being run
 from for the files it needs: 0x80.

My motherboard allows me to pick a boot device if I press F11 during
boot. Do you mean that no matter which HDD I choose, it will get the
0x80 number?

-- 
Janek Kozicki |


Re: xosview + RAID (was: switching root fs '/'...)

2007-10-31 Thread Janek Kozicki
Doug Ledford said: (by the date of Wed, 31 Oct 2007 13:38:08 -0400)


 Now that grub's installed, you won't have to do anything manual again.
 The only time you might have to repeat that grub install procedure is if
 you loose a drive and need to add a new one back in, then the new one
 will need it.

great! many thanks again.

Another thing...

I'm using xosview to monitor my system activity
(others prefer gkrellm, or something else ;-). To see RAID I can run
xosview like this:

  xosview -xrm xosview*RAID:true -xrm xosview*RAIDdevicecount:2

but I have three devices (md0, md1, md2), so I should use
RAIDdevicecount:3, but that gives the following error:

  terminate called after throwing an instance of 'std::bad_alloc'
    what():  St9bad_alloc
  Aborted

Is anybody else here using xosview?

-- 
Janek Kozicki |


switching root fs '/' to boot from RAID1 with grub

2007-10-30 Thread Janek Kozicki
Hello,

I have an old HDD and two new HDDs:

- hda1 - my current root filesystem '/'
- sda1 - part of raid1 /dev/md0 [U_U]
- hdc1 - part of raid1 /dev/md0 [U_U]

I want all of hda1, sda1, hdc1 to be a raid1. I remounted hda1
read-only, then did 'dd if=/dev/hda1 of=/dev/md0'. I carefully checked
that the partition sizes match exactly, so now md0 contains the same
thing as hda1.

But hda1 is still outside of the array. I want to add it to the array,
but before I do that I think I should boot from /dev/md0, otherwise I
might hose this system. I tried `grub-install /dev/sda1` (assuming
that grub would have no problem reading the raid1 partition and
booting from it, until md detects the array). I tried
`grub-install /dev/sda` as well, and on /dev/hdc and /dev/hdc1.
I turned off the 'active' flag for partition hda1 and turned it on for
hdc1 and sda1. But grub still boots from hda1.

I created the array with metadata version 1.1:

mdadm --create --verbose /dev/md0 --chunk=64 --level=raid1 \
  --metadata=1.1  --bitmap=internal --raid-devices=3 /dev/sda1 \
  missing /dev/hdc1

I'm NOT using LVM here.

Can someone tell me how I should switch grub to boot from /dev/md0?

After the boot I will add hda1 to the array, and all three partitions
should become a raid1.
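
For the record, the recipe I have seen most often for grub (legacy) on
raid1 is to install the boot blocks on each member disk from the grub
shell - a sketch, with the disk mapping only as an example:

  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/hdc
  grub> root (hd0,0)
  grub> setup (hd0)

Note that grub reads the filesystem directly, so presumably this only
works when the raid1 superblock sits at the end of the partition
(metadata 0.90 or 1.0), not at the beginning as with 1.1 - I may be
wrong here.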

-- 
Janek Kozicki |


Re: switching root fs '/' to boot from RAID1 with grub

2007-10-30 Thread Janek Kozicki
Janek Kozicki said: (by the date of Tue, 30 Oct 2007 21:07:21 +0100)

 then I did 'dd if=/dev/hda1 of=/dev/md0'. I carefully checked that
 the partition sizes match exactly. So now md0 contains the same thing
 as hda1. 

in fact, to check the size I was using 'fdisk -l' because it gives
size in bytes (not in blocks), like this:

backup:~# fdisk -l /dev/md0

Disk /dev/md0: 1003 MB, 1003356160 bytes

And the same for /dev/hda1

But that's a detail, just so you know that I dd'ed my root partition
correctly and I can mount /dev/md0 without problems.

-- 
Janek Kozicki |


Re: Test 2

2007-10-26 Thread Janek Kozicki
Daniel L. Miller said: (by the date of Thu, 25 Oct 2007 16:32:31 -0700)

 Thanks for the test responses - I have re-subscribed...if I see this 
 myself...I'm back!

I know that Gmail doesn't let you see your own posts on mailing lists,
only posts from other people. Maybe you have a similar problem?


-- 
Janek Kozicki |


deleting mdadm array?

2007-10-25 Thread Janek Kozicki
Hello,

I just created a new array /dev/md1 like this:

mdadm --create --verbose /dev/md1 --chunk=64 --level=raid5 \
   --metadata=1.1  --bitmap=internal \
   --raid-devices=3 /dev/hdc2 /dev/sda2 missing


But later I changed my mind and wanted to use a chunk size of 128. Do
I need to delete this array somehow first, or can I just create the
array again (overwriting the current one)?

-- 
Janek Kozicki |


Re: Partitionable raid array... How to create devices ?

2007-10-16 Thread Janek Kozicki
BERTRAND Joël said: (by the date of Tue, 16 Oct 2007 10:22:46 +0200)


 
 Root gershwin:[/dev] > ls -l md*
 brw-rw---- 1 root disk  9,   0 Oct 15 10:29 md0
 brw-rw---- 1 root disk  9,   1 Oct 15 10:29 md1
 brw-rw---- 1 root disk  9, 127 Oct 16 09:59 md127
 brw-rw---- 1 root disk  9,   2 Oct 15 10:29 md2
 brw-rw---- 1 root disk  9,   3 Oct 15 10:29 md3
 brw-rw---- 1 root disk  9,   4 Oct 15 10:29 md4
 brw-rw---- 1 root disk  9,   5 Oct 15 10:29 md5
 brw-rw---- 1 root disk  9,   6 Oct 15 10:29 md6
 brw-rw---- 1 root disk  9,   7 Oct 15 10:29 md7
 brw-rw---- 1 root disk  9,   8 Oct 15 10:29 md8
 crw-rw---- 1 root root 10,  63 Oct 15 10:29 mdesc
 brw-rw---- 1 root disk  9, 127 Oct 16 10:03 mdp0



... crazy. Much better to create just /dev/md0 and use LVM

http://tldp.org/HOWTO/Software-RAID-HOWTO-11.html

-- 
Janek Kozicki |


Re: very degraded RAID5, or increasing capacity by adding discs

2007-10-09 Thread Janek Kozicki
Michael Tokarev said: (by the date of Tue, 09 Oct 2007 02:52:06 +0400)

 Janek Kozicki wrote:
  Hello,
  
  Recently I started to use mdadm and I'm very impressed by its
  capabilities. 
  
  I have raid0 (250+250 GB) on my workstation. And I want to have
  raid5 (4*500 = 1500 GB) on my backup machine.
 
 Hmm.  Are you sure you need that much space on the backup, to
 start with?  Maybe better backup strategy will help to avoid
 hardware costs?  Such as using rsync for backups as discussed
 on this mailinglist about a month back (rsync is able to keep
 many ready to use copies of your filesystems but only store
 files that actually changed since the last backup, thus
 requiring much less space than many full backups).

Yes, exactly. I am using rsnapshot, which is based on rsync and
hardlinks. It works exceptionally well - to my knowledge it's the
best backup solution I have ever seen. With plugin scripts I even
mount an LVM snapshot of the drive being backed up.
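
The snapshot step looks roughly like this (names and sizes are only
examples):

  lvcreate --snapshot --size 1G --name backupsnap /dev/vg0/root
  mount -o ro /dev/vg0/backupsnap /mnt/snap
  # ... rsnapshot/rsync reads from /mnt/snap ...
  umount /mnt/snap
  lvremove -f /dev/vg0/backupsnap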

From the command 'rsnapshot du' I can see how much space is used (each
directory tree is a full backup, made with hardlinks):

278G    /backup/.sync
454M    /backup/hourly.0/
515M    /backup/hourly.1/
527M    /backup/daily.0/
30G     /backup/daily.1/
21G     /backup/daily.2/
561M    /backup/daily.3/
1.6G    /backup/daily.4/
3.0G    /backup/daily.5/
594M    /backup/daily.6/
1.4G    /backup/weekly.0/
11G     /backup/weekly.1/
9.3G    /backup/weekly.2/
23G     /backup/weekly.3/
33G     /backup/monthly.0/
3.7G    /backup/monthly.1/
415G    total


 It's definitely not possible with raid5.  Only option is to create a
 raid5 array consisting of less drives than it should contain at the
 end, and reshape it when you get more drives, as others noted in this
 thread.  But do note the following points:

..snip..

Yes, I am aware of all the problems you listed. The data I'm
talking about is already a backup, while the real data is on my
workstation (a different linux box - albeit only the newest version
of my data). Only losing both of them simultaneously would be
catastrophic for me.

So I am inclined to do some experiments with the backup drives'
configuration, while still doing my best not to lose the data. An
exercise, you know :)

  is it just a pipe dream?
 
 I'd say it is... ;)

oh well. But I learnt a lot from your answers, thanks a lot!


PS: I'm receiving some mailing list posts twice, does anybody know why?
I'm used to Mailman, but it looks like majordomo is configured in a
different way - I cannot find a configuration page. (I just subscribed.)

-- 
Janek Kozicki |


Re: very degraded RAID5, or increasing capacity by adding discs

2007-10-09 Thread Janek Kozicki
Neil Brown said: (by the date of Tue, 9 Oct 2007 13:32:09 +1000)

 On Tuesday October 9, [EMAIL PROTECTED] wrote:
  
  Problems at step 4.: 'man mdadm' doesn't tell if it's possible to
  grow an array to a degraded array (non existant disc). Is it possible?
 
 Why not experiment with loop devices on files and find out?
 
 But yes:  you can grow to a degraded array providing you specify a
 --backup-file.

Thanks! I'll test this on loopback devices :)
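
My rough plan for that experiment (a sketch - sizes and paths are just
examples):

  for i in 1 2 3; do dd if=/dev/zero of=/tmp/d$i bs=1M count=100; done
  losetup /dev/loop1 /tmp/d1
  losetup /dev/loop2 /tmp/d2
  mdadm --create /dev/md9 --level=5 --raid-devices=2 /dev/loop1 /dev/loop2

  # grow to 3 devices while the third is still missing:
  mdadm --grow /dev/md9 --raid-devices=3 --backup-file=/tmp/md9-backup

  # later: losetup /dev/loop3 /tmp/d3 ; mdadm --add /dev/md9 /dev/loop3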


-- 
Janek Kozicki |


very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Janek Kozicki
Hello,

Recently I started to use mdadm and I'm very impressed by its
capabilities. 

I have raid0 (250+250 GB) on my workstation. And I want to have
raid5 (4*500 = 1500 GB) on my backup machine.

The backup machine currently doesn't have raid, just a single 500 GB
drive. I plan to buy more HDDs to have bigger space for my backups,
but since I cannot afford all the HDDs at once I face the problem of
expanding an array. I'm able to add one 500 GB drive every few
months until I have all 4 drives.

But I cannot make a backup of a backup... so reformatting/copying all
the data each time I add a new disc to the array is not possible for me.

Is it possible anyhow to create a very degraded raid array - one
that consists of 4 drives, but has only TWO?

This would involve some very tricky *hole* management on the block
device... One that places holes in stripes on the block device
until more discs are added to fill the holes. When the holes are
filled, the block device grows bigger, and with lvm I just increase
the filesystem size. This would perhaps be coupled with some
unstriping that moves/reorganizes blocks around to fill/defragment
the holes.

Is it just a pipe dream?

best regards


PS: yes it's simple to make a degraded array of 3 drives, but I
cannot afford two discs at once...

-- 
Janek Kozicki |


Re: very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Janek Kozicki
Richard Scobie said: (by the date of Tue, 09 Oct 2007 08:26:35 +1300)

 No, but you can make a degraded 3 drive array, containing 2 drives and 
 then add the next drive to complete it.
 
 The array can then be grown (man mdadm, GROW section), to add the fourth.

Oh, good. Thanks, I must've been blind to have missed this.
This completely solves my problem.

-- 
Janek Kozicki |


Re: very degraded RAID5, or increasing capacity by adding discs

2007-10-08 Thread Janek Kozicki
Janek Kozicki said: (by the date of Tue, 9 Oct 2007 00:25:50 +0200)

 Richard Scobie said: (by the date of Tue, 09 Oct 2007 08:26:35 +1300)
 
  No, but you can make a degraded 3 drive array, containing 2 drives and 
  then add the next drive to complete it.
  
  The array can then be grown (man mdadm, GROW section), to add the fourth.
 
 Oh, good. Thanks, I must've been blind that I missed this.
 This completely solves my problem.

Uh, actually not :)

My 1st 500 GB drive is full now. When I buy a 2nd one I want to
create a 3-disc degraded array using just 2 discs, one of which
contains data that cannot be backed up.

Steps:
1. create a degraded two-disc RAID5 on the 1 new disc
2. copy the data from the old disc to the new one
3. rebuild the array with the old and new discs (now I have 500 GB on 2 discs)
4. GROW this array to a degraded 3-disc RAID5 (so I have 1000 GB on 2 discs)
...
5. when I buy a 3rd drive I either grow the array, or just rebuild and
wait with growing until I buy a 4th drive.

Problem at step 4: 'man mdadm' doesn't say whether it's possible to
grow an array into a degraded array (with a non-existent disc). Is it possible?


PS: the fact that a degraded array is unsafe for the data is an
intended motivating factor for buying the next drive ;)

-- 
Janek Kozicki |