Re: [Lustre-discuss] md and mdadm in 1.8.7-wc1

2012-03-20 Thread Samuel Aparicio
Hello, thanks for this - it's a 16-disk RAID10 (with one spare), so 24 TB. I previously tried 1.0 and 1.2 metadata, to no effect. We are using a 256k chunk size; I haven't tried reverting to 64k but will do so. This looks to me like something different - there was an md patch for 2.6.18 kernels
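(For reference, a hedged sketch of how an array like the one described above might be created with mdadm; the device names and the exact drive layout are assumptions based only on the description in this thread, not the poster's actual configuration:)

    # Hypothetical 16-disk RAID10 with one hot spare, 1.2 metadata, 256k chunk
    mdadm --create /dev/md0 --level=10 --raid-devices=16 --spare-devices=1 \
          --metadata=1.2 --chunk=256 /dev/sd[b-r]
    # Reverting to a 64k chunk, as discussed, would mean recreating with --chunk=64
    cat /proc/mdstat   # confirm the array assembled and watch the initial resync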

Re: [Lustre-discuss] md and mdadm in 1.8.7-wc1

2012-03-20 Thread Joe Landman
On 03/20/2012 12:18 PM, Samuel Aparicio wrote: Hello, thanks for this - it's a 16-disk RAID10 (with one spare), so 24 TB. I previously tried 1.0 and 1.2 metadata, to no effect. We are using a 256k chunk size; I haven't tried reverting to 64k but will do so. This looks to me like something

Re: [Lustre-discuss] md and mdadm in 1.8.7-wc1

2012-03-20 Thread Samuel Aparicio
Thanks for this. We have zero experience patching Lustre 1.8.7 against newer kernels; are there any known pitfalls we should avoid? Professor Samuel Aparicio BM BCh PhD FRCPath, Nan and Lorraine Robertson Chair, UBC/BC Cancer Agency, 675 West 10th, Vancouver V5Z 1L3, Canada. office: +1 604 675
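(In case it helps, a rough outline of the quilt-based kernel patching step for a Lustre 1.8 server build; the source paths and the series file name are illustrative assumptions - the correct series depends on the target kernel:)

    # In the target kernel source tree; paths and series file are assumptions
    cd /usr/src/kernels/linux-2.6.18
    ln -s /usr/src/lustre-1.8.7-wc1/lustre/kernel_patches/series/2.6-rhel5.series series
    ln -s /usr/src/lustre-1.8.7-wc1/lustre/kernel_patches/patches patches
    quilt push -av   # apply the whole series; rejects usually mean the kernel is too new
    # then build and boot the patched kernel, and build Lustre against it with e.g.:
    # ./configure --with-linux=/usr/src/kernels/linux-2.6.18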

Re: [Lustre-discuss] md and mdadm in 1.8.7-wc1

2012-03-20 Thread Samuel Aparicio
I appreciate your taking the time to do that. If it looks like it basically works, we'll give it a try and see if the md issues go away. We have a pair of OSS servers we can try this on with a 200 TB filesystem without risking anything important, so I think what we may do is compare the 1.8.7

[Lustre-discuss] md and mdadm in 1.8.7-wc1

2012-03-19 Thread Samuel Aparicio
I am wondering if anyone has experienced issues with md / mdadm in the 1.8.7-wc1 patched server kernels? We have historically used software RAID on our OSS machines because, in our hands, it provided a 20-30% throughput improvement over RAID provided by our storage arrays (Coraid ATA over Ethernet
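(For context, in a setup like this the OSTs sit directly on the md devices; a minimal hedged sketch, where the fsname, MGS NID, index and mount point are made-up placeholders rather than anything from this thread:)

    # Hypothetical OST formatted on top of a software RAID device
    mkfs.lustre --ost --fsname=testfs --mgsnode=192.168.1.1@tcp0 --index=0 /dev/md0
    mount -t lustre /dev/md0 /mnt/ost0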