Hi,
I've been hacking around with PVFS for a bit now and have a two-node PVFS
cluster set up to support some internal testing systems. Everything is
working, but I'm seeing really dismal read performance from the IO nodes (I
haven't really tested writes yet). Using a compressed 225MB file as a test
case, it generally takes between 35 and 40 seconds to pull the file off of
the PVFS volume (roughly 6 MB/s).
Here are some details of the environment and what I've checked so far:
- IO nodes are 2-proc Xeons running CentOS 4.4 x86_64 current
- Clients are 2-proc Xeons running FC6-xen current
- I've also tested from a CentOS 4.4 x86_64 client with similar results
- Both the client and server binaries were compiled with defaults (64-bit)
- IO nodes and client(s) are GigE connected to the same switch
-- link speed and duplex settings have been confirmed
- Copying the same file via scp takes ~5-6 sec (roughly 40 MB/s)
Copy tests were done by simply timing 'cp' from the mounted PVFS volume to
local /dev/shm (to keep local disk writes out of the measurement).
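For reference, the test itself was nothing fancier than this (the mount point
and file name are placeholders, not my actual paths):
time cp /mnt/pvfs2/test-file.tar.gz /dev/shm/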
When I strace the pvfs server process, I see the server writing to the TCP
socket in chunks no larger than ~4k:
writev(11, [{"\277\312\0\0\4\0\0\0\213^\0\0\0\0\0\0(\20\0\0\0\0\0\0", 24},
{"\274\v\0\0\2\0\0\0!\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0\0>"..., 4136}], 2) =
4160
I've tried tuning kernel-level TCP settings and
TCPBufferSend/TCPBufferReceive in the pvfs config, but I never see a change
in the size of the writes to the TCP socket. Is this normal? Is there a
tuning parameter that I'm missing that will influence the size of the
writes?
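For reference, the kind of tuning I've been experimenting with looks roughly
like this (the values below are illustrative rather than my exact settings,
and the config section placement is from memory):
# kernel-level TCP buffer limits, applied on both the client and IO nodes
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"
# and in the Defaults section of the PVFS server config
TCPBufferSend 1048576
TCPBufferReceive 1048576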
Thanks in advance,
---
Chris Halstead
SourceLabs - http://www.sourcelabs.com
Dependable Open Source Systems