Sam,
> In the <StorageHints> context:
>
> TroveMethod alt-aio
Ah! thanks!
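(For anyone digging through the archives later: that hint goes in the
per-filesystem <StorageHints> block of the pvfs2 server config file. A
minimal sketch; the surrounding FileSystem settings and the sync values
here are illustrative, not a recommendation:)

```
<FileSystem>
    Name pvfs2-fs
    <StorageHints>
        TroveSyncMeta yes
        TroveSyncData no
        TroveMethod alt-aio
    </StorageHints>
</FileSystem>
```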
>
> Unless the data he's just written is sitting in the kernel buffers, I
> would expect reads to have the same problem as writes if aio is the
> cause. What makes you suspect AIO libraries for his platform?
Oh, I didn't realize his reads were better. I just jumped to a conclusion
because Florin gave me access to his cluster machine, and when I set it up
I had to run configure with --disable-aio-threaded-callbacks; without that,
his pvfs2 setup just sat there and did nothing :)
No I/Os were being completed.
If it is the same cluster that he is talking about here, then I assumed
the write problem was most likely due to that.
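(For the record, the rebuild I did on his machine was roughly the
following; the prefix is illustrative, and you should double-check the
flag name against ./configure --help on your own tree:)

```
./configure --prefix=/usr/local/pvfs2 --disable-aio-threaded-callbacks
make && make install
```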
>
>
> Do you mean configure is disabling threaded callbacks for his build,
> or that we should ask it to? AIO results we've seen without threaded
> callbacks are worse than with them.
Without disabling them, his setup does not even work. It is a ppc64-based
cluster with a fairly ancient glibc, if I am not mistaken.
Yeah, that's what I thought too.
> :-) I hear you. Are you running over ext3 Murali? I've seen
> results that suggest xfs might be better for large IOs and multiple
> threads.
On my home machine, yes.
On my laptop, no. There I effectively run over NTFS, since my "virtual
disk files" are hosted off NTFS ;)
XFS rocks for such workloads indeed.
thanks,
Murali
>
> -sam
>
> > Thanks,
> > Murali
> >
> > On 7/17/07, Florin Isaila <[EMAIL PROTECTED]> wrote:
> >> Hi Sam, we start the pvfs2 servers on different machines than the
> >> compute nodes (picking the nodes from the list provided by the batch
> >> system). Was that your question?
> >>
> >> And I should have said, all the measurements are done with
> >> collective I/O of ROMIO.
> >>
> >> On 7/17/07, Sam Lang <[EMAIL PROTECTED]> wrote:
> >> >
> >> > Ah, I read your email wrong. Hmm... so writes really tank. Are you
> >> > using the storage nodes as servers, or other compute nodes?
> >> >
> >> > -sam
> >> >
> >> > On Jul 17, 2007, at 11:15 AM, Sam Lang wrote:
> >> >
> >> > >
> >> > > Hi Florin,
> >> > >
> >> > > Just one clarification question... are those bandwidth numbers,
> >> > > not seconds as the plot label suggests?
> >> > >
> >> > > -sam
> >> > >
> >> > > On Jul 17, 2007, at 11:03 AM, Florin Isaila wrote:
> >> > >
> >> > >> Hi everybody,
> >> > >>
> >> > >> I have a question about the PVFS2 write performance.
> >> > >>
> >> > >> We did some measurements with BTIO over PVFS2 on lonestar at TACC
> >> > >> (http://www.tacc.utexas.edu/services/userguides/lonestar/)
> >> > >>
> >> > >> and we get pretty bad write results with classes B and C:
> >> > >>
> >> > >> http://www.arcos.inf.uc3m.es/~florin/btio.htm
> >> > >>
> >> > >> We used 16 I/O servers, the default configuration parameters,
> >> > >> and up to 100 processes. We realized that all I/O servers were
> >> > >> also used as metadata servers, but BTIO uses just one file.
> >> > >>
> >> > >> The times are in seconds, contain only I/O time (no compute
> >> > >> time), and are aggregated over each BTIO run (BTIO performs
> >> > >> several writes).
> >> > >>
> >> > >> TroveSyncMeta was set to yes (by default). Could this cause the
> >> > >> I/O to be serialized? It looks as if there were some
> >> > >> serialization.
> >> > >>
> >> > >> Or could the fact that all nodes were also launched as metadata
> >> > >> managers affect the performance?
> >> > >>
> >> > >> Any clue why this happens?
> >> > >>
> >> > >> Many thanks
> >> > >> Florin
> >> > >> _______________________________________________
> >> > >> Pvfs2-users mailing list
> >> > >> [email protected]
> >> > >> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
> >> > >>
> >> > >
> >> >
> >> >
> >>
> >
>
>