Re: LINEAR RAID, little help

2007-04-10 Thread Henrik Holst
On Sat, 2007-04-07 at 21:34 +0100, Gavin McCullagh wrote:
 Hi,
 
 On Sat, 07 Apr 2007, Rich wrote:
 
  Er, I went with Linear as, reading around, people seemed to recommend this 
  for odd sized drives (my old drives are 80s, a 120 and 320s); also I read 
  somewhere that data on the other drives is more recoverable than with most 
  of the other RAIDs.
 
 You just want to make a large filesystem out of odd sized disks.  I guess
 that's fair enough.  The only reservation I'd have is that with N disks
 your likelihood of failure is multiplied by N as any disk failure takes the
 array down.  Personally, I'd be more inclined to try and put a separate
 filesystem on each disk and use symlinks to pull them together into one
 tree.
 
 But now I know why linear raid can be more useful than raid-0, thanks.
 

(This is perhaps somewhat off topic, since this list is about raid
rather than filesystems, virtual or otherwise.)

Maybe you just want the appearance of everything being one volume? In
that case I would recommend looking into unionfs:

http://www.filesystems.org/project-unionfs.html

It's actually quite useful. You can keep a logical division of the
data, say TV stuff on one disk and movies on another, and the unionfs
overlay presents everything together as one virtual big disk. It's also
very simple: no superblocks or headers are needed.
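
A rough sketch of what the mount could look like (the mount points here
are made up, and the exact option syntax depends on the unionfs version
you install):

  mount -t unionfs -o dirs=/mnt/disk1=rw:/mnt/disk2=ro unionfs /srv/media
  ls /srv/media    # files from both disks appear merged in one tree

New writes go to the branch marked rw, so you keep control over which
physical disk actually receives the data.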

-- 
Henrik Holst [EMAIL PROTECTED]


Re: raid1 does not seem faster

2007-04-01 Thread Henrik Holst
On Sun, 2007-04-01 at 14:19 +0200, Jan Engelhardt wrote:
 Hello list,
 
 
 normally, I'd think that combining drives into a raid1 array would give 
 me at least a little improvement in read speed. In my setup however, 
 this does not seem to be the case.
 
 14:16 opteron:/var/log # hdparm -t /dev/sda
  Timing buffered disk reads:  170 MB in  3.01 seconds =  56.52 MB/sec
 14:17 opteron:/var/log # hdparm -t /dev/md3
  Timing buffered disk reads:  170 MB in  3.01 seconds =  56.45 MB/sec
 (and dd_rescue shows the same numbers)
 
 The raid array was created using
 # mdadm -C /dev/md3 -b internal -e 1.0 -l 1 -n 2 /dev/sd[ab]3
 
 
 Jan

From section 9.5 of the [FAQ]:

To check out speed and performance of your RAID systems, do NOT use
 hdparm. It won't do real benchmarking of the arrays. [snip]

I would recommend bonnie++; I think I have seen benchmarks made with it
accepted on this list.
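
Something along these lines, run on a filesystem mounted from the array
(the mount point is only an example):

  bonnie++ -d /mnt/md3 -u nobody
  # -d gives the directory to test in (it must live on the md device)
  # -u gives the user to run as; required when you start it as root

Run the same command on a plain single-disk filesystem and compare the
sequential and random read figures; that says more than hdparm will.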

[FAQ] http://tldp.org/HOWTO/html_single/Software-RAID-HOWTO/#s9

-- 
Henrik Holst [EMAIL PROTECTED]


Re: Checksums wrong on one disk of mirror

2006-11-13 Thread Henrik Holst
David wrote:

[snip]

 mdadm is version 1.12.  Looking at the most recently available version
 this seems incredibly out of date, but seems to be the default installed
 in Ubuntu.  Even Debian stable seems to have 1.9.  I can bug this with
 them for an update if necessary.

It's already on its way: upgrade to the coming Debian release, Etch
(due to become stable in December 2006, if I remember correctly). In
Etch the mdadm version is v2.5.3 (7 August 2006).

Henrik Holst



Re: why partition arrays?

2006-10-21 Thread Henrik Holst
Bodo Thiesen wrote:
 Ken Walker [EMAIL PROTECTED] wrote:
 
 Is LVM stable, or can it cause more problems than separate raids on a array.

[description of street smart raid setup]

(The same function could probably be achieved with logical partitions
and ordinary software raid levels.)

 So, now decide for yourself if you consider LVM stable - I would ;)
 
 Regards, Bodo

Have you lost any disks (i.e. physical volumes) since February? Or lost
the metadata?

I would not recommend LVM to anyone who is not an expert on Linux
systems. Setting up an LVM system is easy; administering and salvaging
it was much more work. (I used it ~3 years ago.)

/Henrik Holst


Re: access *existing* array from knoppix

2006-09-12 Thread Henrik Holst
Dexter Filmore wrote:
 Am Dienstag, 12. September 2006 17:47 schrieb Justin Piszcz:
 Strange, what knoppix are you using? I recall doing it to fix an XFS bug
 with 4.x and 5.x.
 
 5.0.1. Maybe they've gone udev now and didn't handle md devices.
 I'll see if I can do something in /lib/udev/devices.

Try this:

mdadm -Av /dev/md0 --auto=yes /dev/hda1 /dev/hdb1

Note the --auto=yes flag. I have used this with success on many udev
systems; it creates /dev/md0 automatically.
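
If the mdadm on the Knoppix CD should turn out to be too old to know
about --auto, creating the node by hand ought to work as well (the md
driver uses block major 9, minor N for /dev/mdN):

  mknod /dev/md0 b 9 0
  mdadm -Av /dev/md0 /dev/hda1 /dev/hdb1

On a udev system a node dropped into /lib/udev/devices is recreated at
every boot, which is probably what you were after.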

Henrik Holst


Re: Superblock checksum problems

2006-09-04 Thread Henrik Holst
Josh Litherland wrote:
 Feh, disregard that.  I've beaten it up some more, and occasional errors
 are cropping up.  Bad card.  Nothing more to see here, move along.

It would be good to have an analogue of memtest, but for PATA and SATA
ports. Has anyone seen something like that out on the web?

Henrik Holst


Re: Making bootable SATA RAID1 array in Mandriva 2006

2006-08-14 Thread Henrik Holst
Luca Berra wrote:
 On Sun, Aug 13, 2006 at 07:51:42PM -0400, andy liebman wrote:
 -- I copied the contents of /dev/sda1 (/ partition) and /dev/sda6
 (/home partition) to /dev/sdb1 and /dev/sdb6 using rsync.
 this is not really important, but you should have used the raid devices
 as a target.

Just a question: did you copy the file system or the block device? I am
not that familiar with rsync, but I imagine it is a problem to sync at
the file system level if you want the block devices to end up 1:1 with
each other.
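
To illustrate the distinction I am after (mount points made up; rsync
copies files, dd copies the raw device):

  rsync -a /mnt/sda1/ /mnt/sdb1/      # file-level copy: same contents, different block layout
  dd if=/dev/sda1 of=/dev/sdb1 bs=1M  # block-level copy: bit-for-bit identical partitions

Luca's point, as I read it, is that the copy should go onto the
assembled md device rather than onto /dev/sdb1 directly, so the raid
superblock and the data stay consistent.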

Henrik Holst


Re: Resize on dirty array?

2006-08-09 Thread Henrik Holst
James Peverill wrote:

 I'll try the force assemble but it sounds like I'm screwed.  It 
 sounds like what happened was that two of my drives developed bad 
 sectors in different places that weren't found until I accessed 
 certain areas (in the case of the first failure) and did the drive 
 rebuild (for the second failure).

The file /sys/block/mdX/md/sync_action can be used to trigger a recheck
of the data. Read Documentation/md.txt in the kernel source for details
on the exact procedure. My advice (if you still want to keep using
software raid) is to run such a check before any add/grow or other
action in the future. Also, if the raid has been idle for a long while
it might be a good idea to recheck the data.
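
For md0 the procedure is roughly this (assuming a kernel recent enough
to know the check action; md.txt has the details):

  echo check > /sys/block/md0/md/sync_action
  cat /proc/mdstat                        # shows the progress of the check
  cat /sys/block/md0/md/mismatch_cnt      # non-zero means inconsistencies were found

Unreadable sectors hit during the check are rewritten from the
redundant copy, which is exactly what you want to have happened before
you start pulling drives.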

[snip]

I feel your pain. Massive data loss is the worst. I have had my share
of crashes: once due to a bad disk and no redundancy, the other time
due to good old stupidity.

Henrik Holst


Re: only 4 spares and no access to my data

2006-07-10 Thread Henrik Holst
Karl Voit wrote:
[snip]
 Well this is because of the false(?) superblocks of sda-sdd in comparison to
 sda1 to sdd1.

I don't understand this. Do you have more than a single partition on
sda? Is sda1 occupying the entire disk? Since the superblock is in the
/last/ 128 kB (I'm assuming 128*1024 bytes) of the device, the
superblocks should be one and the same.
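
One way to check (mdadm -E only reads and prints the superblock, so it
is harmless on a live array):

  mdadm -E /dev/sda
  mdadm -E /dev/sda1

If sda1 really runs to the end of the disk the two should report the
same UUID and event count; if they differ, one of them is a stale
superblock left over from an earlier setup.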

Henrik Holst



Re: Is shrinking raid5 possible?

2006-06-23 Thread Henrik Holst
Neil Brown wrote:
 In short, reducing a raid5 to a particular size isn't something that
 really makes sense to me.  Reducing the amount of each device that is
 used does - though I would much more expect people to want to increase
 that size.

Think about the poor people! :-) Those who can't afford to buy a new
disk after a failure but can give up some free space. I don't actually
think that scenario is /highly unlikely/ to occur.

And also for the sake of symmetry: if growing is allowed, why should
shrinking not be just as valid?

Neil Brown wrote:
 If Paul really has a reason to reduce the array to a particular size
 then fine.  I'm mildly curious, but it's his business and I'm happy
 for mdadm to support it, though indirectly.  But I strongly suspect
 that most people who want to resize their array will be thinking in
 terms of the amount of each device that is used, so that is how mdadm
 works.

I agree with you here: keep the parameters low level. That way the
administrator (users use a GUI) has more control over the operation at
hand. (kmdadm, anyone? :-))
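
For the record, shrinking in those terms would then look something like
this (--size is given in KiB per component device; the number here is
pure example, and the filesystem obviously has to be shrunk to fit
first):

  mdadm --grow /dev/md0 --size=195000000

Getting the arithmetic between filesystem size, per-device size and
number of data disks right is the part that is easy to get wrong, which
I suppose is exactly why Neil keeps the interface low level.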

--
Henrik Holst



Re: And then there was Bryce...

2006-06-08 Thread Henrik Holst
Bryce wrote:

 Gosh, where to start,..

 Ok general setup

 I'm using  kernel version 2.6.17-rc5 and  Raid 5 over 5 500Gb SATA disks

You have just upgraded to udev, haven't you? :-)

[snip!]


 Hum ho,.. I survived the horror but umm, well, I'll leave the above as
 a story to frighten young sysadmins with.

The same happened to me with eth0-2. I _could_ not for the life of me
understand why I couldn't get the internet connection to work. But then
I realised that eth0 and eth1 had been swapped after I upgraded to
udev. Please consult your distribution's udev documentation on how to
lock down SCSI and network cards to specific kernel names.
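
As an illustration, a rule along these lines in /etc/udev/rules.d/ pins
an interface to its MAC address (the address is made up, and the key is
spelled SYSFS{} or ATTR{} depending on udev version):

  KERNEL=="eth*", SYSFS{address}=="00:11:22:33:44:55", NAME="eth0"

For the disks it is often simpler to stop caring about sdX names
altogether and refer to the persistent /dev/disk/by-id/ links in
mdadm.conf and fstab instead.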

Regards,
Henrik Holst


Re: [PATCH 000 of 7] md: Introduction - raid5 reshape mark-2

2006-02-07 Thread Henrik Holst
Hello linux world!

Excuse me for being so ignorant, but /exactly how/ do I go about
finding out which files to download from kernel.org so that these
patches will apply?

[snip from N. Brown's initial post]


 [PATCH 001 of 7] md: Split disks array out of raid5 conf structure so it is 
 easier to grow.
 [PATCH 002 of 7] md: Allow stripes to be expanded in preparation for 
 expanding an array.
 [PATCH 003 of 7] md: Infrastructure to allow normal IO to continue while 
 array is expanding.
 [PATCH 004 of 7] md: Core of raid5 resize process
 [PATCH 005 of 7] md: Final stages of raid5 expand code.
 [PATCH 006 of 7] md: Checkpoint and allow restart of raid5 reshape
 [PATCH 007 of 7] md: Only checkpoint expansion progress occasionally.


I only get lots of Hunk FAILED messages when I run the patch command on
my src.tar.gz kernels. :-(
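
In case it helps someone equally lost, the usual routine is something
like the following; the part I am unsure about is the base version,
since series like this are normally generated against the -mm or latest
-rc tree named in the announcement, not against a distribution kernel
(file names below are only examples):

  tar xzf linux-2.6.16.tar.gz && cd linux-2.6.16
  patch -p1 --dry-run < 001-md-split-disks.patch
  patch -p1 < 001-md-split-disks.patch    # apply for real once the dry run is clean

--dry-run reports whether every hunk applies without touching anything,
which beats finding out afterwards from a pile of .rej files.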

Thanks for advice,

Henrik Holst. Certified kernel patch noob.

