On Wed, Nov 29, 2006 at 04:05:35PM -0600, Sam Lang wrote:
> I had a note that we should change the default aio data-sync code to
> only sync at the end of an IO request, instead of for each trove
> operation (in FlowBufferSize chunks). Doing this at the end of an
> io.sm seemed a little messy.
On Nov 29, 2006, at 3:44 PM, Rob Ross wrote:
That's what I was thinking -- that we could ask the I/O thread to
do the syncing rather than stalling out other progress.
Wanna try it and see if it helps :)?
Rob
Phil Carns wrote:
No. Both alt aio and the normal dbpf method sync as a separate step
after the aio list operation completes.
This is technically possible with alt aio, though - you would just need
to pass a flag through to tell the I/O thread to sync after the
pwrite(). That would probably be pretty helpful.
This is similar to using O_DIRECT, which has also shown benefits.
Rob Ross wrote:
With alt aio, do we sync in the context of the I/O thread?
Thanks,
Rob
Phil Carns wrote:
One thing that we noticed while testing for storage challenge was that
(and everyone correct me if I'm wrong here) enabling the data-sync
causes a flush/sync to occur after every sizeof(FlowBuffer) bytes had
been written. I can imagine how this would help a SAN, but I'm
perplexed how it helps.
Phil Carns wrote:
We recently ran some tests trying different sync settings in PVFS2. We
ran into one pleasant surprise, although probably it is already obvious
to others. Here is the setup:
12 clients
4 servers
read/write test application, 100 MB operations, large files
fibre channel SAN storage
The test app