SX8 docs (was: The SX4 challenge)

2008-01-17 Thread James Andrewartha
On Wed, 2008-01-16 at 19:55 -0500, Jeff Garzik wrote:
 Promise just gave permission to post the docs for their PDC20621 (i.e. 
 SX4) hardware:
 http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-1.2.pdf.bz2
 
 joining the existing PDC20621 DIMM and PLL docs:
 http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-dimm-1.6.pdf.bz2
 http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-pll-ata-timing-1.2.pdf.bz2
 
 
 So, the SX4 is now open.  Yay :)  I am hoping to talk Mikael into 
 becoming the sata_sx4 maintainer, and finally integrating my 'new-eh' 
 conversion in libata-dev.git.
 
 But now is a good time to remind people how lame the sata_sx4 driver 
 software really is -- and I should know, I wrote it.

Hi Jeff,

What are the chances of the SX8 docs being opened? The vendor GPL driver
has bitrotted and the kernel driver offers poor performance or data
corruption. IIRC it's not part of libata but its own block device,
although you were considering porting it to get ATAPI support?

Thanks,

James Andrewartha
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Raid over 48 disks ... for real now

2008-01-17 Thread Norman Elton
I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.

Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
controllers, each with eight 1TB drives, for a total raw storage of
48TB. I must admit, it's quite impressive. And loud. More information
about the hardware is available online...

http://www.sun.com/servers/x64/x4500/arch-wp.pdf

It came loaded with Solaris, configured with ZFS. Things seemed to
work fine. I did not do any benchmarks, but I can revert to that
configuration if necessary.

Now I've loaded RHEL onto the box. As a first shot, I've created one
RAID-5 array (+ 1 spare) on each of the controllers, then used LVM to
create a VolGroup across the arrays.
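A setup along those lines might be sketched as follows. The device names
and array numbering are illustrative, not taken from the actual box; the
chunk size and volume names are assumptions:

```shell
# Sketch: one RAID-5 array (7 active members + 1 hot spare) per controller.
# Assumes the first controller's drives appear as /dev/sdb../dev/sdi.
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 \
      /dev/sd[b-i]

# Repeat for md1..md5 on the remaining controllers, then pool them with LVM:
pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
vgcreate bigvg /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5

# Carve out a logical volume to test a filesystem on:
lvcreate -L 5T -n data bigvg
```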

So now I'm trying to figure out what to do with this space. So far,
I've tested mke2fs on a 1TB and a 5TB LogVol.

I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
Am I better off sticking with relatively small partitions (2-5 TB), or
should I crank up the block size and go for one big partition?
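For reference, ext3's block size tops out at 4 KiB on x86, which is what
bounds the maximum filesystem size, so "cranking up the block size" only
goes so far. A hedged sketch of tuning mke2fs to the RAID geometry (the
device name, chunk size, and resulting stride are assumptions):

```shell
# Sketch: 4 KiB blocks are the x86 maximum for ext3.  -E stride aligns
# allocation to the RAID chunk size: with an assumed 64 KiB chunk,
# 64 KiB / 4 KiB = stride of 16.  Device name is illustrative.
mke2fs -j -b 4096 -E stride=16 /dev/bigvg/data
```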

Thoughts?

Norman Elton


Re: Raid over 48 disks ... for real now

2008-01-17 Thread Norman Elton
 Hi, sounds like a monster server. I am interested in how you will make
 the space useful to remote machines - iSCSI? This is what I am
 currently researching.

Yes, it's a honker of a box. It will be collecting data from various
collector servers. The plan right now is to collect the data into
binary files using a daemon (already running on a smaller box), then
make the last 30/60/90/?? days available in a database that is
populated from these files. If we need to gather older data, the
individual files must be consulted locally.

So, in production, I would probably set up the database partition on
its own set of 6 disks, then dedicate the rest to handling/archiving
the raw binary files. These files are small (a few MB each), as they
get rotated every five minutes.

Hope this makes sense, and provides a little background info on what
we're trying to do.

Norman


Incompatibility between Linux and Windows

2008-01-17 Thread Leopoldo Ghielmetti
Hi,

I have some questions about Linux software RAID. I have assembled a
RAID-1 array from 2 external disks, and what we want is to mount those
disks in Linux (using RAID), in Windows (a read-only mount of just one
disk), and in a Linux without RAID support (also as a read-only
mount).

I remember this was possible when I used the raid tools, but today it
seems to be impossible: Windows does not recognize the partition, and
Linux answers that it can't mount a linux_raid_member partition when I
try to mount only one of the devices (in read-only mode).

Previously it was possible to mount the disks independently or through
the raidtools.

How can I perform these operations?

Is it a problem with the superblock? I remember that the superblock is
normally at the end of the partition, so it doesn't interfere with use
of the filesystem by an OS that doesn't know about Linux RAID arrays.
But the error from Linux itself and the non-recognition by Windows
make me suspect that mdadm now creates the superblock at the beginning
of the filesystem. Is there a way to move the superblock to the end
and keep the system compatible with both OSes?
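For what it's worth, the placement is governed by mdadm's metadata
version: formats 0.90 and 1.0 put the superblock at the end of each
member device, while 1.1 and 1.2 put it at or near the beginning. A
sketch of creating an array whose members stay readable by a non-RAID
OS (device names are illustrative):

```shell
# Sketch: metadata 0.90 places the superblock at the end of each member,
# so the filesystem starts at offset 0 and one member can be read
# directly by an OS that knows nothing about Linux RAID.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/sdb1 /dev/sdc1

# On a system without RAID support, a single member can then be mounted
# alone -- read-only, so the mirror members don't diverge:
mount -o ro /dev/sdb1 /mnt
```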

Is there a way to access just one disk from a Linux system that
doesn't know about RAID arrays?

Why was the implementation (which worked very well) changed in a way
that blocks the easy use of Linux RAID partitions (which was the real
improvement over all other RAID solutions)?

We are seriously considering removing the mdadm tool and recompiling
the old raidtools, which worked better for us than mdadm, but I don't
know whether the new kernels still support them.

Thank you.

Leopoldo Ghielmetti



Re: do_md_run returned -22 [Was: 2.6.24-rc8-mm1]

2008-01-17 Thread Neil Brown
On Thursday January 17, [EMAIL PROTECTED] wrote:
 On Thu, 17 Jan 2008 16:23:30 +0100 Jiri Slaby [EMAIL PROTECTED] wrote:
 
  On 01/17/2008 11:35 AM, Andrew Morton wrote:
   ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.24-rc8/2.6.24-rc8-mm1/
  
  still the same md issue (do_md_run returns -22=EINVAL) as in -rc6-mm1 
  reported 
  by Thorsten here:
  http://lkml.org/lkml/2007/12/27/45
 
 hm, I must have been asleep when that was reported.  Neil, did you see it?

No, even though it was Cc:ed to me - sorry.
Maybe a revised subject line would have helped... maybe not.

 
  Is there around any fix for this?
 
 Well, we could bitbucket md-allow-devices-to-be-shared-between-md-arrays.patch

Yeah, do that.  I'll send you something new.
I'll move that chunk into a different patch and add the extra bits
needed to make that test correct in *all* cases rather than just the
ones I was thinking about at the time.
My test suite does try in-kernel autodetect (the problem case) but it
didn't catch this bug due to another bug.  I'll fix that too.

Thanks,
NeilBrown