Hi Sam, we start the PVFS2 servers on different machines from the
compute nodes (picking the nodes from the list provided by the batch
system). Was that your question?

I should also have said: all the measurements are done with ROMIO's
collective I/O.
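
In case a concrete picture helps, here is a minimal sketch of the kind
of timed collective write we mean (not the actual BTIO code; the path,
buffer size, and contiguous layout are assumptions -- BTIO's real file
view is non-contiguous):

/* Minimal sketch of a timed ROMIO collective write (illustrative,
 * not BTIO itself). Compile with mpicc, launch with mpiexec. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const int count = 1 << 20;     /* 1 Mi doubles per process (assumed) */
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(count * sizeof(double));
    for (int i = 0; i < count; i++)
        buf[i] = (double)rank;

    /* The "pvfs2:" prefix selects ROMIO's PVFS2 (ADIO) driver; the
     * mount path is made up. */
    MPI_File_open(MPI_COMM_WORLD, "pvfs2:/pvfs/btio.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Rank-ordered contiguous offsets; all processes write at once. */
    MPI_Offset off = (MPI_Offset)rank * count * sizeof(double);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_File_write_at_all(fh, off, buf, count, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    double t1 = MPI_Wtime();

    MPI_File_close(&fh);
    if (rank == 0)
        printf("collective write time: %.3f s\n", t1 - t0);
    free(buf);
    MPI_Finalize();
    return 0;
}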

On 7/17/07, Sam Lang <[EMAIL PROTECTED]> wrote:

Ah, I read your email wrong.  Hmm...so writes really tank.  Are you
using the storage nodes as servers, or other compute nodes?

-sam

On Jul 17, 2007, at 11:15 AM, Sam Lang wrote:

>
> Hi Florin,
>
> Just one clarification question...are those bandwidth numbers, not
> seconds as the plot label suggests?
>
> -sam
>
> On Jul 17, 2007, at 11:03 AM, Florin Isaila wrote:
>
>> Hi everybody,
>>
>> I have a question about the PVFS2 write performance.
>>
>> We did some measurements with BTIO over PVFS2 on Lonestar at TACC
>> (http://www.tacc.utexas.edu/services/userguides/lonestar/)
>>
>> and we get pretty bad write results with classes B and C:
>>
>> http://www.arcos.inf.uc3m.es/~florin/btio.htm
>>
>> We used 16 I/O servers, the default configuration parameters, and up
>> to 100 processes. We realized that all I/O servers were also acting
>> as metadata servers, even though BTIO uses just one file.
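>>
>> (For context, as we understand the stock PVFS2 fs.conf, a server acts
>> as a metadata server exactly when it has an entry under
>> MetaHandleRanges; a sketch of restricting metadata to one server --
>> the aliases and handle ranges below are made up, not our actual
>> values:)
>>
>> <FileSystem>
>>     <MetaHandleRanges>
>>         Range ios01 4-2147483650          # only ios01 serves metadata
>>     </MetaHandleRanges>
>>     <DataHandleRanges>
>>         Range ios01 2147483651-4294967296
>>         Range ios02 4294967297-6442450942
>>         # ...one Range line per I/O server
>>     </DataHandleRanges>
>> </FileSystem>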
>>
>> The times are in seconds, include only I/O time (no compute time),
>> and are aggregated over each BTIO run (BTIO performs several writes).
>>
>> TroveSyncMeta was set to yes (the default). Could this cause the I/O
>> to be serialized? It looks as if there is some serialization.
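>>
>> (For reference, TroveSyncMeta lives in the StorageHints section of
>> fs.conf; a sketch assuming the stock layout, with the companion
>> TroveSyncData option shown at its usual default:)
>>
>> <StorageHints>
>>     TroveSyncMeta yes    # sync metadata to disk on every operation
>>     TroveSyncData no     # data syncing left to the OS
>> </StorageHints>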
>>
>> Or could the fact that all nodes were also launched as metadata
>> managers affect the performance?
>>
>> Any clue why this happens?
>>
>> Many thanks
>> Florin
>


_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
