Thanks for the quick reply, Bernd,

I have not put any valuable data on the filesystem yet, so no worries about data loss, just 1.2 TB of zeros. How can I disable extents and mballoc? Is it a ./configure option? I am away from the machines right now or I would check myself.
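Or is it something set per target at format/tune time? If so, I would guess at something like the following (untested, and the device name is just an example):

    # untested guess: replace the default OST mount options so that
    # extents and mballoc are no longer enabled; /dev/sdb1 is an example
    tunefs.lustre --mountfsoptions="errors=remount-ro" /dev/sdb1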

As for my speed question, I apologize for not adding any info. I have the following setup:
6 Supermicro servers, each with:
   2x SATA2 320 GB drives with 16 MB cache
   2x Intel(R) Xeon(R) CPU 3060 @ 2.40GHz
   2x Intel e1000 cards with TSO turned on
plus a gigabit switch. I am thinking of bonding the interfaces if that would help me with performance (a sketch of what I have in mind is below).

I can also raise the MTU to 9000 on my switch.
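For the bonding idea, this is roughly the Debian-style config I had in mind (untested; addresses and interface names are made up, and it assumes the ifenslave package and the bonding module are available):

    # /etc/network/interfaces -- untested sketch; addresses are examples
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        mtu 9000                          # jumbo frames, switch permitting
        up ifenslave bond0 eth0 eth1      # enslave both e1000 ports
        down ifenslave -d bond0 eth0 eth1

    # /etc/modprobe.d/bonding -- example module options
    options bonding mode=balance-rr miimon=100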

-Joel Robison



On Jul 5, 2007, at 11:30 AM, Bernd Schubert wrote:

Joel Robison wrote:

Apologies for replying to this fairly dated thread; however, I would
like to report that I have successfully built 1.6.0.1 on Feisty 7.04
(amd64) using 2.6.18-vanilla, modifying only lustre/obdfilter/filter_io_26.c
as described here:
https://mail.clusterfs.com/pipermail/lustre-discuss/2006-October/002263.html

Lustre works well, but there are numerous errors in dmesg on the
OSTs and the MGS/MDT that I can only speculate about.

Be careful: your filesystem will probably suffer from silent data
corruption. Just copy the whole Linux source tree onto your Lustre
filesystem, then umount and e2fsck the OSTs.
If you want to use 1.6.0.1 as it is, you will at least need to disable
extents and mballoc.
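Roughly like this (a sketch only; paths and the device name are examples):

    # on a client: put lots of files on the filesystem and verify them
    cp -a /usr/src/linux /mnt/lustre/linux-copy      # example paths
    diff -r /usr/src/linux /mnt/lustre/linux-copy    # catches silent corruption
    umount /mnt/lustre

    # on each OSS: stop the OST and check the underlying ldiskfs device
    umount /mnt/ost0                                  # example mount point
    e2fsck -f /dev/sdb1                               # example OST device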


I do have a question regarding throughput, in terms of what I should
expect from a simple dd copy from /dev/zero to a file striped across
4 OSTs.

That strongly depends on the speed of your OSTs ;)

I am new to working with Lustre and I am very excited about
what has been accomplished by this project so far.
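Note that with a single gigabit link per client you will not see much
more than roughly 110 MB/s from one client anyway, whatever the OSTs can
do. To get a baseline, compare the raw disk speed on an OSS with a
striped write from a client, something like this (a sketch; paths are
examples, and the lfs syntax is from memory, so check 'lfs help setstripe'):

    # on an OSS: raw read speed of one OST disk (example device)
    dd if=/dev/sdb of=/dev/null bs=1M count=1024

    # on a client: create a file striped over 4 OSTs and write to it
    # (1.6 syntax: <file> <stripe_size> <start_ost> <stripe_count>;
    #  0 = default stripe size, -1 = any starting OST)
    lfs setstripe /mnt/lustre/ddtest 0 -1 4
    dd if=/dev/zero of=/mnt/lustre/ddtest bs=1M count=4096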

Would it be possible to get some Debian packages, if anyone has made
any? I would like to compare them with my own build to learn a bit more.

I have amd64 debs for the utils, but not for the kernel modules.

Cheers,
Bernd


_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
