Dean,
Here is the snippet from the output.
Command line used: /opt/iozone/bin/iozone -r 128k -w -f file1 -s 4g -w -i 1

                                                      random  random    bkwd  record  stride
      KB  reclen   write  rewrite    read   reread      read   write    read  rewrite   read  fwrite  frewrite  fread  freread
 4194304     128                    63600    63100
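For what it's worth, those two numbers are iozone's read/reread figures in KB/s; a throwaway parser (column layout assumed from the -i 1 run above, since only the read test was selected) converts them to MB/s:

```python
# Parse size, record length, and read/reread throughput (KB/s) from an
# iozone -i 1 summary row like the one pasted above, converting to MB/s.
def parse_iozone_row(row: str):
    fields = row.split()
    kb, reclen = int(fields[0]), int(fields[1])
    read_kbps, reread_kbps = int(fields[2]), int(fields[3])
    return kb, reclen, read_kbps / 1024.0, reread_kbps / 1024.0

size_kb, reclen_kb, read_mbps, reread_mbps = parse_iozone_row(
    "4194304 128 63600 63100")
print(f"{reclen_kb}k records: read {read_mbps:.1f} MB/s, "
      f"reread {reread_mbps:.1f} MB/s")
```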
-Praveen
Dean Hildebrand wrote:
65MB/s for 128K is really surprising. My experience with pvfs2 is that
performance is worse for smaller block sizes (although the graph is
sometimes wavy, with an upward trend). I wonder if iozone is behaving
properly. What is your iozone command line? Instead of
iozone -aec -i 0 -i 1 -+n -f /path/to/pvfs2/file
have you tried specific file and record sizes?
iozone -ec -i 0 -i 1 -+n -r 128K -s 100M -f /path/to/pvfs2/file
iozone -ec -i 0 -i 1 -+n -r 1M -s 100M -f /path/to/pvfs2/file
iozone -ec -i 0 -i 1 -+n -r 4M -s 100M -f /path/to/pvfs2/file
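The three invocations above differ only in -r, so a small loop makes the sweep easier to extend (the path is a placeholder; the script only prints the commands, drop the echo to actually run them):

```shell
#!/bin/sh
# Print an iozone record-size sweep over a fixed 100M file:
# -i 0 -i 1 selects write and read tests, -e/-c include fsync/close
# in the timings, -+n skips the retest phases.
FILE=/path/to/pvfs2/file   # placeholder: a file on the PVFS2 mount
for RS in 128K 256K 512K 1M 2M 4M; do
    echo iozone -ec -i 0 -i 1 -+n -r "$RS" -s 100M -f "$FILE"
done
```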
Dean
On Tue, 6 Dec 2005, Praveen KJ wrote:
Rob/Murali/Dean,
The read throughput seems to vary with the iozone record size.
The experimental setup was the same as described before. Here is the
variation of record size and throughput for a sample run:
-------------------------------------------------------------------
record_size   throughput   comments
-------------------------------------------------------------------
128k          65 MB/s      (steady increase in throughput up to 128k,
                            then steady decrease up to 1M)
1M             6 MB/s      (steady increase from here)
4M            16 MB/s
-------------------------------------------------------------------
A graph plot would look wavy.
Iozone is not our final application :) We just
wanted to explore the performance space and make sure the installation
was done right. So your inputs were definitely helpful.
Thanks,
Praveen
Murali Vilayannur wrote:
Thanks Dean!
I will take a look at the VFS code paths/performance analysis.
Murali
On Wed, 30 Nov 2005, Dean Hildebrand wrote:
I've seen this through the VFS. Here are some graphs (ignore the pNFS lines)
with 4MB block size and the standard read/write interfaces.
Overview:
http://www.citi.umich.edu/projects/asci/pnfs/darkstar-exp/
Single file:
http://www.citi.umich.edu/projects/asci/pnfs/darkstar-exp/experiments-5_13095_image001.gif
Separate files:
http://www.citi.umich.edu/projects/asci/pnfs/darkstar-exp/experiments-5_15651_image001.gif
As the # of data servers increases, aggregate read performance increases
but individual client read performance suffers a little.
Dean
On Wed, 30 Nov 2005, Murali Vilayannur wrote:
Hi Praveen,
Sorry for the late response. Unfortunately, we have not seen this kind of
behavior on many of our setups through the MPI-IO/ADIO-PVFS2 interfaces.
It is conceivable that performance through the VFS does not scale with the
number of servers.
Could you tell us how the file is being read (i.e., using what interfaces,
what block sizes, etc.)?
There still aren't that many tweakable settings just yet, but we are
working on some I/O performance tuning and will hopefully have a better
handle on which settings affect performance.
Thanks for the reports,
Murali
On Mon, 21 Nov 2005, Praveen KJ wrote:
Hi,
We have a pvfs2 client reading a 5 GB file sequentially from
multiple pvfs2 data servers.
There is just one metadata server. All the nodes are connected via
GigE network.
The problem we are seeing is that the single-client read throughput
decreases as the number of servers increases.
With 24 servers we see around 5 MB/s read compared to 50 MB/s read
with just 2 servers.
The setup is exactly the same between experiments, except for the
number of servers.
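A back-of-envelope sketch of one possible cause (the 64 KB strip size below is my assumption, not taken from our config): with round-robin striping, a fixed-size client read is split across more servers as the cluster grows, so each server sees smaller and smaller requests per read, which may be too little to keep a GigE link busy.

```python
# Back-of-envelope: bytes each data server supplies per client read,
# assuming round-robin striping. STRIP is an assumed strip size, not a
# value taken from the pvfs2-fs.conf below.
STRIP = 64 * 1024            # assumed strip size (64 KB)
READ = 4 * 1024 * 1024       # one 4 MB client read

for servers in (2, 8, 24):
    strips = READ // STRIP                    # strips touched by one read
    per_server = READ // min(servers, strips)
    print(f"{servers:2d} servers -> ~{per_server // 1024} KB per server per read")
```

On these assumptions, one 4 MB read asks each of 24 servers for only ~170 KB, versus ~2 MB each with 2 servers.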
We have tried a couple of pvfs2 versions, including the latest 1.3.1.
Is there any setting we have to tweak? We have taken only pvfs2
defaults so far.
Some of the relevant options are pasted below.
Thanks,
Praveen
------------------------------------------
pvfs2-fs.conf (except hostnames)
------------------------------------------
<Defaults>
UnexpectedRequests 50
LogFile /tmp/pvfs2-server.log
EventLogging none
LogStamp usec
BMIModules bmi_tcp
FlowModules flowproto_multiqueue
PerfUpdateInterval 1000
ServerJobBMITimeoutSecs 30
ServerJobFlowTimeoutSecs 30
ClientJobBMITimeoutSecs 300
ClientJobFlowTimeoutSecs 300
ClientRetryLimit 5
ClientRetryDelayMilliSecs 2000
</Defaults>
<Filesystem>
Name pvfs2-fs
ID 1175641402
RootHandle 1048576
<StorageHints>
TroveSyncMeta yes
TroveSyncData no
AttrCacheKeywords datafile_handles,metafile_dist
AttrCacheKeywords dir_ent,symlink_target
AttrCacheSize 4093
AttrCacheMaxNumElems 32768
</StorageHints>
</Filesystem>
_______________________________________________
PVFS2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users