Thanks for the fast reply, Kyle, Sam, and Kevin. These results are on
a single server and client, by the way. I'm trying to get that
performance up before moving to more servers.
On Jul 23, 2009, at 2:24 PM, Kyle Schochenmaier wrote:
I would start by examining the potential for network issues here.
-Try netpipe or something?
I've used iperf. It reports 1.95 Gigabit/second between the nodes.
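For reference, the measurement was along these lines (the hostname is
a placeholder):

    # on the server node
    iperf -s

    # on the client node
    iperf -c pvfs-server -t 30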
-About your first test, are you copying FROM your home directory?
That could be an obvious source of slowness.
The home directory is not network mounted, if that's the concern.
Just in case, though, I've tried copying from the / root directory
instead with no change.
-Iozone will almost always give bad numbers for pvfs2.
Because it's a single process on a single node sending one write at a
time? I understand that our speed in PVFS2 comes from scalability
rather than single-node performance. But I'd like to get my per-server
performance higher than 5 MB/s, something closer to the local
performance (which is 60-70 MB/s at the moment).
As a general rule of thumb, in PVFS2 on a single node, if the storage
gets X MB/s locally, what's the expected bandwidth to that node from a
client on the same high-speed network accessing through pvfs2fuse?
0.8 * X MB/s? 0.5 * X MB/s?
- any reason for the 4 MB strip size for a single-server test? It
shouldn't technically affect performance here.
No reason--it's just what I plan to use for the full deployment across
multiple nodes.
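For completeness, I believe the 4 MB strip ends up in the config as a
simple_stripe distribution entry roughly like this (syntax from
memory, so correct me if I've mangled it):

    <Distribution>
        Name simple_stripe
        Param strip_size
        Value 4194304
    </Distribution>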
- if you are just doing single-server tests, what does a bone-stock
pvfs config file produce?
Using the output of pvfs2-genconfig with default answers (except for
hostname and storage location, obviously), I get 7 MB/s for a pvfs2-cp
operation and 4.7 MB/s for dd through pvfs2fuse.
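For concreteness, those two tests look roughly like this (the mount
point and file names are placeholders, and if I remember the flag
right, -t makes pvfs2-cp print timing):

    # copy a 4 GB test file into PVFS2 and print timing
    pvfs2-cp -t /testfile /mnt/pvfs2/testfile

    # write 4 GB through the pvfs2fuse mount, 1 MB at a time
    dd if=/dev/zero of=/mnt/pvfs2/ddtest bs=1M count=4096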
- I'm still leaning towards network issues; anything in the server
logs?
Nothing other than the server starting and stopping, but I could turn
on more Gossip. Suggestions?
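If it helps, I believe the knob is the EventLogging line in the server
config; something like this should chase flow and network activity
(the mask names are from memory, so please correct me):

    # in the Defaults section of the server config
    EventLogging flow,network,server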
On Jul 23, 2009, at 2:45 PM, Sam Lang wrote:
Doing a component test of the device with a local file system is a
good idea, but you should see what you get with 1 MB records, since
that's what you will be doing with PVFS (using a 1 MB flow buffer
size).
With both -c and -e on and 1 MB records, I see around 70 MB/s write
speeds locally. The iozone runs use a 4 GB file (twice the amount of
physical memory in the machine). sgdd is a neat idea, though. I'll
read up on that.
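For the record, the local iozone run was along these lines (the file
path is a placeholder):

    # -c: include close() in timing, -e: include fsync() in timing
    # -r 1m: 1 MB records, -s 4g: 4 GB file, -i 0 -i 1: write/read tests
    iozone -c -e -r 1m -s 4g -i 0 -i 1 -f /mnt/raid/iozone.tmp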
On Jul 23, 2009, at 2:45 PM, Kevin Harms wrote:
Does your RAID hardware tell you what I/O request size is best for
it? Then set FlowBufferSizeBytes to that as well as max_sectors_kb
in the directory listed below.
It's an interesting idea. I set the chunk size to 128k when I made the
array, and might be getting odd interactions with parity. Perhaps I'll
try experiments with RAID-0 until I sort out the network issues.
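If I go down that road, I assume the tuning would look something like
this (sda is a placeholder for the RAID device; 128 KB matches my
chunk size, and FlowBufferSizeBytes goes in the server config):

    # cap block-layer requests at the RAID chunk size (value is in KB)
    echo 128 > /sys/block/sda/queue/max_sectors_kb

    # in the server config: 128 KB flow buffers to match
    FlowBufferSizeBytes 131072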
~Milo
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users