Hi everyone,

I'm testing Lustre 2.0 in a pre-production environment and have run into some problems.

My test environment consists of:
1 MDS/MGS (not combined) with 48 GB of RAM and 16 Xeon E5520 cores
1 OSS with the same specifications
1 client
Both the MDS and the OSS are connected to iSCSI shared storage, using separate volumes.

I also have a production environment running Lustre 1.8.2 with two filesystems, served by:
2 MDS/MGS, where each server is active for one filesystem and passive for the other
2 OSS, configured the same way as the MDS
50 clients running the Apache web server

I performed an 8-hour stress test in both environments, replaying Apache logs to reproduce real traffic. These are the results:

Lustre 1.8.2
Avg load: 0.8 on the MDS, 0.2 on the OSS; network load on the MDS (on the gigabit NIC dedicated to Lustre traffic) about 50,000 Kb/s.

Lustre 2.0.0
Avg load: 2.8 on the MDS, 0.1 on the OSS; network load on the MDS (on the gigabit NIC dedicated to Lustre traffic) about 180,000 Kb/s.

On Lustre 2.0 the network traffic is much higher than on Lustre 1.8.2. Considering that the Lustre 1.8.2 servers were also handling production load in addition to the stress test, I think there is a problem.

With tcpdump I saw that when I run multiple "ls -l" commands on a client connected to Lustre 1.8.2, the client only occasionally sends packets to the MDS; a client connected to Lustre 2.0 sends packets to the MDS every time.

I think the 2.0 client does not cache the files' attributes.
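To quantify this, one could count the MDS-bound packets generated by repeated "ls -l" runs. A rough sketch, where MDS_IP, IFACE, and /mnt/lustre are placeholders for your own MDS address, client NIC, and mount point:

```shell
#!/bin/sh
# Capture traffic to/from the MDS while running "ls -l" in a loop,
# then report how many packets were exchanged. The addresses and
# interface below are examples, not values from the original post.
MDS_IP=192.168.1.10
IFACE=eth1

tcpdump -i "$IFACE" -w /tmp/mds.pcap host "$MDS_IP" 2>/dev/null &
TCPDUMP_PID=$!
sleep 1   # give tcpdump time to start capturing

for i in $(seq 1 100); do
    ls -l /mnt/lustre > /dev/null
done

kill "$TCPDUMP_PID"
wait "$TCPDUMP_PID" 2>/dev/null
# A 1.8.2 client serving stats from cache should show far fewer
# packets here than a client issuing an RPC per stat() call.
tcpdump -r /tmp/mds.pcap 2>/dev/null | wc -l
```

If your client version supports it, `lctl get_param mdc.*.stats` on the client before and after the loop may also show whether getattr RPC counters increase on every run.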


Could someone help me?
Thanks in advance, and have a very nice day. ;)

P.S.
Sorry for my imperfect English.

--
Gianluca Tresoldi
***SysAdmin***
***Demon's Trainer***
Tuttogratis Italia Spa
E-mail: [email protected]
http://www.tuttogratis.it
Tel Centralino 02-57313101
Tel Diretto 02-57313136
Be open...
*********** Confidentiality Notice & Disclaimer *************
This message, together with any attachments, is for the confidential
and exclusive use of the addressee(s). If you receive it in error,
please delete the message and its attachments from your system
immediately and notify us by return e-mail.
Do not disclose, copy, circulate or use any information contained in
this e-mail.
_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
