Re: confusion about partitionable md arrays

2006-12-30 Thread Bill Davidsen

Michael Schmitt wrote:

Hi Bill,

this 'thing' works in another computer, and when I moved it to a different one I
could not get it working. So the problem is... somewhere in between :)
As I had other non-md-related problems in this computer (insufficient power
supply; the system could not boot from hda anymore when the SATA disks were
connected to the PCI controller, so I had to unplug the SATA cables until GRUB
had loaded; a BIOS upgrade did not help), I moved the array back into the
original one, and it worked without any change.

But I noticed a different behavior there: "Generating udev events for MD
arrays" showed up during boot, listing the partitions md0_d0p1 to md0_d0p4,
and of course /dev/md0_d0pN showed up as well. Then I thought about all of
that a bit and came to some possible conclusions. Maybe the naming scheme for
the arrays needs to reflect somehow whether it is a partitionable array or
not, whether there are partitions, and whether udev needs to add the
appropriate devices... or not. Then I remembered some of the strange things
which happened when I set up the array in the first place. I tried to name
the array md0, but /dev/md_0 was generated too. /dev/md0 was somehow dead and
my array was accessible as /dev/md0_d0. A bit later, when I tried to build an
array with IDE discs in the troublesome computer, I noticed similar effects.
I tried to name the array simply /dev/array1, but /dev/md127_d0 (iirc) was
generated too. /dev/array1 was dead and the raid was accessible at
/dev/md127_d0. So, what's all that? Even though I read somewhere that the
name of the device can be anything, it may be that this is not really true.

So the question remains the same, and the confusion is as persistent as the
superblocks in my arrays :) Any input is appreciated.

I have already read some mails on this list but did not get the point, and
found no real explanation for my question. I will read on; maybe I will reach
enlightenment.
  
I just reread the man page, and there is a paragraph on partitionable 
arrays following the -a option. Having warned you that I'm not 
experienced in this, I wonder if there is some interaction between 
device names created by -amdp and rules for udev, and how much of this 
partition info is carried in the superblock. There's a lot of discussion 
of names in that paragraph.


I also wonder whether using -amdp vs. just -ap will behave differently. 
Hopefully Neil will have words of wisdom if we continue to thrash about 
without a good understanding.
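
As a rough illustration of the sort of invocation that man page paragraph is 
talking about (device and component names below are made up, and I haven't 
tested this exact command):

  # create a partitionable array; --auto=mdp is the long form of -amdp and
  # asks mdadm to create the whole-array device node itself
  mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 /dev/sda /dev/sdb

  # partition the array device like a plain disk; the kernel (and udev, if
  # its rules cooperate) should then expose /dev/md_d0p1, /dev/md_d0p2, ...
  fdisk /dev/md_d0
  cat /proc/partitions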

greetings
Michael

On Thursday, 2006-12-28, at 22:20 -0500, Bill Davidsen wrote:
  

Michael Schmitt wrote:


Hi list,

since recent releases it is obviously possible to build an array and then
partition it, instead of building an array out of partitions. This was
somewhat confusing, but at first it worked. Now I have moved the array from
one machine to another and... things get somewhat strange. I have nothing in
/dev/md/, only /dev/md0. If I do fdisk -l it lists the partitions /dev/md0p1
to /dev/md0p4, set to type 83, but there are no devices under /dev/ or
entries in /proc/partitions for them. I've read the archives and googled
around, but there was no real solution, just differing opinions on how it
should be and how such things come about. I'd really appreciate a definitive
answer on how this should work with partitionable arrays and, in the best
case, what my problem may be here :)

At the end of this mail are the mdstat and mdadm outputs for reference

  
  
I'm not sure you have a problem, if this whole thing works correctly. 
However, there has been discussion about the implications of using whole 
drives instead of partitions to build your array. Having avoided that 
particular path I'm not going to rehash something I only marginally 
understand, but reading the posts from the last few months may shed 
some light.
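
In the meantime, a few harmless things to check (the device names below are 
only examples, not taken from your setup):

  cat /proc/partitions          # does the kernel itself list md0p1..md0p4?
  mdadm --detail /dev/md0       # what mdadm thinks the assembled array is
  mdadm --examine /dev/sda      # what the superblock on a component device says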





--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979




Re: Can I abuse md like this?

2006-12-30 Thread Bill Davidsen

Bill Davidsen wrote:

Neil Brown wrote:

On Saturday December 23, [EMAIL PROTECTED] wrote:
 
I hope I can use the md code to solve a problem, although in a way 
probably not envisioned by the author(s).


I have a disk image, a physical dump of every sector from start to 
finish, including the partition table. What I hope I can do is 
create a one-drive partitionable RAID-1 array and then access it 
with fdisk or similar. The partitions are not nice types such as 
FAT, VFAT, ext2, etc.; this is an odd disk, and I saved it by 
saving everything. Now I'd like to start dismembering the 
information and putting it into useful pieces. I even dare to hope 
that I could get the original software running on a virtual machine 
at some point.


The other alternative is to loopback-mount it, but I'm somewhat 
reluctant to do that if I can avoid it.


Yes, the partition table is standard in format if not in content.



Maybe...
Is this image in a file?
md only works with block devices, so you would need to use the 'loop'
driver to create a block device, /dev/loopX.
  

I was thinking nbd, actually.

But as loop devices cannot be partitioned, you could then
  mdadm -Bf /dev/md/d9 -amdp8 -l1 -f -n1 /dev/loopX
  and then look at the partitions in /dev/md/d9_*

Should work.

Sounds worth a try. Will be a learning experience if nothing else.

Rather than set up nbd, I tried a loop mount, and the whole process 
worked flawlessly. I was able to look at partitions, read the partition 
table, and generally do anything I could have done with a real device. It 
worked so well I backed it up as an image, just in case I ever want to do 
something else with it.
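
For anyone wanting to repeat this, the sequence was essentially the following 
(the image path and loop device are placeholders; the mdadm line is Neil's 
suggestion, quoted above):

  losetup /dev/loop0 disk.img                         # expose the image as a block device
  mdadm -Bf /dev/md/d9 -amdp8 -l1 -f -n1 /dev/loop0   # build a 1-drive partitionable RAID-1 over it
  fdisk -l /dev/md/d9                                 # the image's own partition table shows up here
  ls -l /dev/md/                                      # see which partition nodes were created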


Many thanks.

--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: A few questions before assembling Linux 7.5TB RAID 5 array

2006-12-30 Thread Bill Davidsen

Yeechang Lee wrote:

[Also posted to 
comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]

I'm shortly going to be setting up a Linux software RAID 5 array using
16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
controller (i.e., the controller will be used for its 16 SATA ports,
not its hardware "fakeraid"). The array will be used to store and
serve, locally and via gigabit Ethernet, large, mostly high-definition
video recordings (up to six or eight files being written to and/or
read from simultaneously, as I envision it). The smallest files will
be 175MB-700MB, the largest will be 25GB+, and most files will be from
4GB to 12GB with a median of about 7.5GB. I plan on using JFS as the
filesystem, without LVM.

A few performance-related questions:

* What chunk size should I use? In previous RAID 5 arrays I've built
  for similar purposes I've used 512K. For the setup I'm describing,
  should I go bigger? Smaller?
  


I am doing some tests on this right now (this weekend), because I don't 
have an answer. If I get data I trust I'll share it. See the previous 
thread on poor RAID-5 performance; use a BIG stripe buffer and/or wait 
for a better answer on chunk size.
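
For concreteness, the knobs involved look roughly like this (the chunk size 
and cache value are only examples, not the answer to your question, and the 
device names are placeholders):

  # build the array with an explicit chunk size (in KB)
  mdadm --create /dev/md0 --level=5 --raid-devices=16 --chunk=512 /dev/sd[b-q]

  # the "BIG stripe buffer": size of the RAID-5/6 stripe cache, in entries
  echo 8192 > /sys/block/md0/md/stripe_cache_size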

* Should I stick with the default of 0.4% of the array as given over
  to the JFS journal? If I can safely go smaller without a
  rebuilding-performance penalty, I'd like to. Conversely, if a larger
  journal is recommended, I can do that.
  
I do know something about that, having run AIX for a long time. If you 
have a high rate of metadata events, like file creates or deletes, a large 
journal is a must; I kept the journal on another array with a small stripe 
size to spread the head motion, otherwise the log drive became a bottleneck. 
If you are going to write a lot of data to this array, mount it 
noatime to avoid beating the journal and slowing your access.
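
Roughly, with JFS that means something like the following (the size and mount 
point are only examples; check mkfs.jfs(8) on your version for the exact 
option semantics):

  mkfs.jfs -s 128 /dev/md0            # -s sets the journal (log) size in megabytes
  mount -o noatime /dev/md0 /data     # noatime, as suggested above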


Be sure you tune the readahead on each drive after looking at the 
actual load data. More is better there, but too much is worse.
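
The per-device knob is the blockdev readahead setting, e.g. (the values below 
are purely illustrative, not recommendations):

  blockdev --getra /dev/sda           # current readahead, in 512-byte sectors
  blockdev --setra 4096 /dev/sda      # 2MB readahead on one component drive
  blockdev --setra 16384 /dev/md0     # the md device has its own setting as well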

* I'm wondering whether I should have ordered two RocketRAID 2220
  (each with eight SATA ports) instead of the 2240. Would two cards,
  each in a PCI-X slot, perform better? I'll be using the Supermicro
  X7DVL-E
  
(URL:http://www.supermicro.com/products/motherboard/Xeon1333/5000V/X7DVL-E.cfm)
  as the motherboard.

  
My guess is that unless your motherboard has dual PCI buses (it might), and 
you have 2- or 4-way memory interleave (my Supermicro boards did the last 
time I used one), you are going to be able to swamp the bus and/or 
memory with a single controller.


Now, in terms of "perform better", I'm not sure you would be able to 
measure it, and unless you have some $tate-of-the-art network, you will 
run out of bandwidth to the outside world long before you run out of 
disk performance.


--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: A few questions before assembling Linux 7.5TB RAID 5 array

2006-12-30 Thread Gordon Henderson


  Yeechang Lee wrote:
[Also posted to 
comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]


I'm shortly going to be setting up a Linux software RAID 5 array using
16 500GB SATA drives [...]


I'm of the opinion that more drives means more chance of failure, but 
maybe it's just me. I got bitten by a 2-drive failure once in an 8-drive 
RAID-5 set a couple of years ago. Fortunately, with the aid of mdadm, etc., 
and having direct access to the drives rather than having them hidden away 
behind some hardware device, I was able to recover the data that time, and 
now SMART is getting cleverer ... However ...


Did you consider RAID-6?

I've been using it for some time now (over a year?)

But maybe drives are becoming more reliable - I haven't lost a drive in 
the past year!
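
For comparison, the RAID-6 version of such an array would be created with 
something like this (device names are placeholders):

  # two drives' worth of space go to parity, so any two drives can fail
  mdadm --create /dev/md0 --level=6 --raid-devices=16 /dev/sd[b-q]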


Gordon


unknown ioctl32 cmd

2006-12-30 Thread Jan Engelhardt
Hi,



this line in mdadm-2.5.4, Detail.c:

185:  ioctl(fd, GET_BITMAP_FILE, &bmf) == 0 

causes a dmesg warning when running `mdadm -D /dev/md0`:

ioctl32(mdadm:2946): Unknown cmd fd(7) cmd(5915){10} arg(ff2905d0) 
on /dev/md0

on Aurora Linux corona_2.90 with 2.6.18-1.2798.al3.1smp (sparc64). The 
raid array was created using `mdadm -C /dev/md0 -l 1 -n 2 missing 
/dev/sdb2 -e 1.0`. Given that the GET_BITMAP_FILE case is handled in 
(2.6.18.5), I wonder what exactly is causing this.
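
For what it's worth, the warning can be tied to the ioctl like this (just a 
suggestion, not part of the original report; device name is an example):

  strace -e trace=ioctl mdadm -D /dev/md0    # shows the ioctl requests mdadm issues
  dmesg | tail                               # the ioctl32 "Unknown cmd" line appears right after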


Keep me on Cc, but you always do that. Thanks :)

-`J'