FYI, more information from Praveen.

Rob

-------- Original Message --------
Subject: Re: [PVFS2-users] pvfs2 scaling with addition of data servers
Date: Tue, 22 Nov 2005 14:24:17 -0800
From: Praveen KJ <[EMAIL PROTECTED]>
To: Rob Ross <[EMAIL PROTECTED]>
References: <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>

Rob,
       Thanks for the prompt response. I am using iozone on a single
client to read the file, and the throughput is reported by iozone.
Nothing else is happening on the system. The whole system, including
client and servers, is completely homogeneous and runs Rocks 4.1.

       The iozone command line looks like:  'iozone -r 1m -w -s 5g -i 1'
Let me know if I need to tweak something in the system, like the stripe
size, etc.
We can experiment quickly to try out different options.
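Since record size is one of the easiest knobs to sweep here, a small script like the following could generate an iozone run per record size against the PVFS2 mount. This is only an illustrative sketch: the gen_cmds helper, the /mnt/pvfs2 mount point, and the testfile name are assumptions, not anything from the original setup.

```shell
#!/bin/sh
# Sketch: emit one iozone sequential-read command per record size,
# matching the flags used above (-r record size, -w keep the file,
# -s file size, -i 1 = read/re-read test, -f target file).
# The mount point passed as $1 is an assumption for illustration.
gen_cmds() {
    for rec in 64k 256k 1m 4m; do
        echo "iozone -r $rec -w -s 5g -i 1 -f $1/testfile"
    done
}

# Print the commands; pipe to sh to actually run them on a real mount.
gen_cmds /mnt/pvfs2
```

Running the printed commands one record size at a time would show whether the single-client throughput drop tracks the request size or is independent of it.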

       Also, when I start the system, *df -h* reports 9%-14% of the pvfs
partition space (depending on the number of servers) as filled.
This is with zero files to begin with. Is this the usual overhead?


Thanks,
Praveen

Rob Ross wrote:

Hi,

Can you tell us more about what you're doing? How are you reading the file? How are you measuring the performance? Is anything else happening on the system?

Thanks,

Rob

Praveen KJ wrote:

Hi,
We have a pvfs2 client reading a 5 GB file sequentially from multiple pvfs2 data servers. There is just one metadata server. All nodes are connected via a GigE network.

The problem we are seeing is that single-client read throughput decreases as the number of servers increases: with 24 servers we see around 5 MB/s read, compared to 50 MB/s with just 2 servers. The setup is exactly the same between experiments, except for the number of servers.

    We have tried a couple of pvfs2 versions, including the latest, 1.3.1.
Is there any setting we need to tweak? We have used only pvfs2 defaults so far.
    Some of the relevant options are pasted below.

Thanks,
Praveen

------------------------------------------
pvfs2-fs.conf (except hostnames)
------------------------------------------
<Defaults>
      UnexpectedRequests 50
      LogFile /tmp/pvfs2-server.log
      EventLogging none
      LogStamp usec
      BMIModules bmi_tcp
      FlowModules flowproto_multiqueue
      PerfUpdateInterval 1000
      ServerJobBMITimeoutSecs 30
      ServerJobFlowTimeoutSecs 30
      ClientJobBMITimeoutSecs 300
      ClientJobFlowTimeoutSecs 300
      ClientRetryLimit 5
      ClientRetryDelayMilliSecs 2000
</Defaults>

<Filesystem>
      Name pvfs2-fs
      ID 1175641402
      RootHandle 1048576
      <StorageHints>
              TroveSyncMeta yes
              TroveSyncData no
              AttrCacheKeywords datafile_handles,metafile_dist
              AttrCacheKeywords dir_ent,symlink_target
              AttrCacheSize 4093
              AttrCacheMaxNumElems 32768
      </StorageHints>
</Filesystem>
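One thing worth noting about the config above: no distribution parameters appear in it, so the stripe size is whatever pvfs2 defaults to. If I recall the pvfs2 config format correctly, the strip size of the default simple_stripe distribution can be set with a Distribution block like the following; the exact value (128 KB here) is purely an illustrative assumption, not a recommendation from this thread.

```
<Distribution>
        Name simple_stripe
        Param strip_size
        Value 131072
</Distribution>
```

Sweeping this value across experiments (alongside the server count) would help separate striping effects from server-scaling effects.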
_______________________________________________
PVFS2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users

