ok thanks, I will look into this.
Professor Samuel Aparicio BM BCh PhD FRCPath
Nan and Lorraine Robertson Chair UBC/BC Cancer Agency
675 West 10th, Vancouver V5Z 1L3, Canada.
office: +1 604 675 8200, cell: +1 604 762 5178, lab website:
http://molonc.bccrc.ca
On 2011-01-21, at 14:50, Samuel Aparicio wrote:
> modinfo reports as follows; it seems to be the ext4 modules.
> The odd thing is that the format works when the disk array is already
> presented as a RAID set, rather than when the RAID set is made with mdadm on the OSS.
>
>
> filename:
> /lib
e2fsprogs-1.41.10.sun2-0redhat.rhel5.x86_64
mkfs.lustre --ost --fsname=lustre --reformat --mgsnode=11.1.254.3@tcp0 /dev/md2

mdadm -v --create /dev/md2 --chunk=256 --level=raid10 --raid-devices=16 \
  --spare-devices=1 --assume-clean --layout=n2 \
  /dev/etherd/e5.9 /dev/etherd/e5.10 /dev/etherd/e5.11
modinfo reports as follows; it seems to be the ext4 modules.
The odd thing is that the format works when the disk array is already
presented as a RAID set, rather than when the RAID set is made with mdadm on the OSS.
filename:
/lib/modules/2.6.18-194.3.1.el5_lustre.1.8.4/updates/kernel/fs/lus
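Not something from the thread, but a quick checklist one might run on the OSS to see which backing-fs stack mkfs.lustre is actually using (the commands are standard; the exact output will vary per install):

```
rpm -q e2fsprogs    # ldiskfs formatting needs the patched build (e.g. the *.sun2* one above)
modinfo ldiskfs     # the patched-ext4 module Lustre formats against
cat /proc/mdstat    # confirm /dev/md2 is assembled and not resyncing before formatting
```

Comparing the modinfo output for ldiskfs between the working (pre-presented RAID) and failing (mdadm-built) cases would show whether the same module is being loaded in both.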
I'm not an expert on Lustre, I'm just beginning with it :) but:
What is your version of e2fsprogs?
What is your command line to format your raid?
Regards.
From: lustre-discuss-boun...@lists.lustre.org
[mailto:lustre-discuss-boun...@lists.lustre.org] On behalf of Samuel
Aparicio
Sent: Fri
On 2011-01-21, at 13:36, Samuel Aparicio wrote:
> Trying to create an ext4 Lustre filesystem attached to an OSS.
> The disks being used are exported from an external disk enclosure.
> I create a RAID10 set with mdadm from 16 2 TB disks; this part seems fine.
> I am able to format such an array with
Our lustre 1.8.4 system sits primarily on subnet A. However, we also
have a small number of clients that sit on subnet B. In setting up the
subnet B clients, we provided lnet router machines that have addresses
on both subnet A and on subnet B, the MGS machine has addresses on both
subnet A a
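A routed setup like the one described is usually expressed through lnet module options on the clients and the router nodes. A hypothetical sketch (NIDs, interface names, and network labels are assumptions, not taken from this thread):

```
# /etc/modprobe.d/lustre.conf on a subnet-B client (addresses assumed)
options lnet networks="tcp1(eth0)" routes="tcp0 192.168.2.1@tcp1"

# on the router machine, which has interfaces on both subnets
options lnet networks="tcp0(eth0),tcp1(eth1)" forwarding="enabled"
```

The `routes` entry tells subnet-B clients to reach tcp0 via the router's tcp1 NID; whether the MGS should also advertise NIDs on both networks is exactly the kind of detail this poster is asking about.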
I am having the following issue:
Trying to create an ext4 Lustre filesystem attached to an OSS.
The disks being used are exported from an external disk enclosure.
I create a RAID10 set with mdadm from 16 2 TB disks; this part seems fine.
I am able to format such an array with normal ext4, mount a f
On Fri, Jan 21, 2011 at 3:43 AM, Thomas Roth wrote:
> Hi all,
>
> we have gotten new MDS hardware, and I've got two questions:
>
> What are the recommendations for the RAID configuration and formatting
> options?
> I was following the recent discussion about these aspects on an OST:
> chunk size,
It does look like exactly what I need. Thanks Aurélien.
From Bugzilla, the patch has been checked in to 1.8. Could someone please
point me to the source location? Is it in 1.8.5? I don't believe so,
but I thought I would check. If not, will it be in a later release of 1.8?
thanks all,
Haisong
On 2011-01-21, at 06:55, Ben Evans wrote:
> In our lab, we've never had a problem with simply having 1 MGS per
> filesystem. Mountpoints will be unique for all of them, but functionally it
> works just fine.
While this "runs", it is definitely not correct. The problem is that the
client will
In our lab, we've never had a problem with simply having 1 MGS per filesystem.
Mountpoints will be unique for all of them, but functionally it works just fine.
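For concreteness, a hypothetical client-side view of what Ben describes: several filesystems registered with a single MGS, each mounted at its own mount point (NIDs and filesystem names here are invented for illustration):

```
mount -t lustre mgsnode@tcp0:/fsone /mnt/fsone
mount -t lustre mgsnode@tcp0:/fstwo /mnt/fstwo
```

This is the configuration that "runs" in the sense above; the reply that follows argues it is nonetheless not correct, so treat this as a sketch of the setup under discussion rather than a recommendation.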
-Ben Evans
-----Original Message-----
From: lustre-discuss-boun...@lists.lustre.org on behalf of Thomas Roth
Sent: Fri 1/21/2011 6:43
Hi all,
we have gotten new MDS hardware, and I've got two questions:
What are the recommendations for the RAID configuration and formatting
options?
I was following the recent discussion about these aspects on an OST:
chunk size, strip size, stride-size, stripe-width etc. in the light of
the 1
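A back-of-the-envelope sketch relating these parameters, using the values from the mdadm line earlier in the thread as assumptions (16-disk RAID10 with a 256 KiB chunk and 4 KiB filesystem blocks):

```shell
# stride  = RAID chunk size in filesystem blocks
# stripe-width = stride * number of data-bearing members
chunk_kb=256
block_kb=4
data_disks=8                      # RAID10 (layout n2) over 16 disks -> 8 data-bearing members
stride=$((chunk_kb / block_kb))   # blocks per chunk
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe-width=$stripe_width"
```

If these assumed values match the actual array, they would be passed through something like `mkfs.lustre --mkfsoptions='-E stride=64,stripe-width=512'`; verify the numbers against your own chunk size and disk count before using them.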