Tony Kew wrote:
Dear Phil,
That fixed it - I successfully ran a 64-node PBS job with a PVFS filesystem
built on the fly, and ran a 64-node iozone job against that filesystem.
Great- glad that fixed your problem.
The test dir was set up with a 16K strip on each node, for a 1024K stripe
across the 64 nodes. iozone was set up to write in 1024K blocks. The job
took about an hour and five minutes to run...
Initial write   668,321.05 KB/sec
Rewrite         677,999.64 KB/sec
Read            300,937.30 KB/sec
Re-read         313,756.56 KB/sec
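(For reference, an iozone run of that shape is normally driven in throughput
mode; the command below is only an illustrative sketch, not the exact
invocation or client list used:

    iozone -i 0 -i 1 -t 64 -r 1024k -s 1g -+m client_list

-i 0 and -i 1 select the write/rewrite and read/re-read tests, -r sets the
record size, -s the per-client file size, -t the number of clients, and -+m
names the file listing the client nodes.)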
You realize that you can create this same setup with the default simple
stripe distribution too if you use a 16K strip size, right?
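If I remember the generated config correctly, that amounts to a distribution
stanza along these lines in fs.conf (strip_size in bytes; treat the exact
syntax as approximate, it can differ a little between releases):

    <Distribution>
        Name simple_stripe
        Param strip_size
        Value 16384
    </Distribution>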
3 of the 64 nodes showed some timeouts in their server logs from the job,
but I believe these are non-fatal, e.g.:
[E 10/24 17:25] job_time_mgr_expire: job time out: cancelling flow operation, job_id: 3679067.
These aren't fatal, but if you get them frequently in your workload then
it is probably going to have a negative impact on your performance.
I haven't yet tried setting <StorageHints> TroveMethod alt-aio
or increasing "ServerJobFlowTimeoutSecs" as you suggested to Brian.
In the latter case, the pvfs2-genconfig option --server-job-timeout 300
sets both "ServerJobFlowTimeoutSecs" and "ServerJobBMITimeoutSecs" to 300;
presumably this is what you want?
Yes, that should work.
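For the record, running pvfs2-genconfig with --server-job-timeout 300 should
leave both directives in the generated fs.conf looking roughly like this (in
the Defaults section, if memory serves):

    ServerJobFlowTimeoutSecs 300
    ServerJobBMITimeoutSecs 300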
Incidentally, what does the "alt-aio" option do?
It switches the pvfs2 server from using the standard glibc aio
implementation to using its own alternative implementation. The latter
works better for PVFS workloads because it eliminates some thread
synchronization and imposes fewer ordering constraints on the I/O.
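To turn it on you add the hint you quoted above to the filesystem's storage
hints in the config file, something like this (placement inside the
<FileSystem> section is from memory):

    <StorageHints>
        TroveMethod alt-aio
    </StorageHints>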
-Phil
Thanks Much,
Tony
Tony Kew
SAN Administrator
The Center for Computational Research
New York State Center of Excellence
in Bioinformatics & Life Sciences
701 Ellicott Street, Buffalo, NY 14203
CoE Office: (716) 881-8930 Fax: (716) 849-6656
CSE Office: (716) 645-3797 x2174
Cell: (716) 560-0910 Home: (716) 874-2126
"I love deadlines, I love the whooshing noise they make as they go by."
Douglas Adams
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users