We recently ran some tests that we thought would be interesting to
share. We used the following setup:
- single client
- 16 servers
- gigabit ethernet
- read/write tests, with 40 GB files
- using reads and writes of 100 MB each in size
- varying number of processes running concurrently on the
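The test program itself isn't shown here, but in rough outline each
process streams 100 MB operations through a large file. A simplified
sketch in plain POSIX I/O (the path, sizes, and error handling are
made up for illustration; this is not the actual test code):

    /* simplified sketch of the access pattern: sequential 100 MB
     * writes to a large file.  Illustrative only, not the actual
     * test program. */
    #define _FILE_OFFSET_BITS 64
    #define _XOPEN_SOURCE 500
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define XFER_SIZE (100UL * 1024 * 1024)   /* 100 MB per op */
    #define FILE_SIZE (40ULL << 30)           /* 40 GB file    */

    int main(void)
    {
        char *buf = malloc(XFER_SIZE);
        int fd = open("/mnt/pvfs2/testfile", O_WRONLY | O_CREAT, 0644);
        off_t off;

        if (!buf || fd < 0) {
            perror("setup");
            return 1;
        }
        for (off = 0; off < (off_t)FILE_SIZE; off += XFER_SIZE) {
            if (pwrite(fd, buf, XFER_SIZE, off) != (ssize_t)XFER_SIZE) {
                perror("pwrite");
                return 1;
            }
        }
        close(fd);
        free(buf);
        return 0;
    }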
We recently ran some tests trying different sync settings in PVFS2. We
ran into one pleasant surprise, although it is probably already obvious
to others. Here is the setup:
12 clients
4 servers
read/write test application, 100 MB operations, large files
fibre channel SAN storage
The test
Phil Carns wrote:
We recently ran some tests trying different sync settings in PVFS2.
We ran into one pleasant surprise, although it is probably already
obvious to others. Here is the setup:
12 clients
4 servers
read/write test application, 100 MB operations, large files
fibre channel SAN
This is similar to using O_DIRECT, which has also shown benefits.
With alt aio, do we sync in the context of the I/O thread?
Thanks,
Rob
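To make the O_DIRECT comparison concrete, here is a minimal sketch
(not anything from the PVFS2 tree; the 512-byte alignment is an
assumption, and the real requirement depends on the device and
filesystem):

    /* direct I/O bypasses the page cache, so a completed write is
     * already on its way to storage without a separate sync step */
    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int write_direct(const char *path, size_t len)
    {
        void *buf;
        ssize_t n;
        int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);

        if (fd < 0)
            return -1;
        /* O_DIRECT requires aligned buffer, offset, and length */
        if (posix_memalign(&buf, 512, len) != 0) {
            close(fd);
            return -1;
        }
        n = write(fd, buf, len);
        free(buf);
        close(fd);
        return (n == (ssize_t)len) ? 0 : -1;
    }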
Phil Carns wrote:
One thing that we noticed while testing for the storage challenge was
that (and everyone correct me if I'm wrong here) enabling the
On Nov 28, 2006, at 12:34 PM, Phil Carns wrote:
The main issue here is that the trove initialization doesn't know
which method to use, so there needs to be a global default.
Each collection also gets its own method, as you've noticed, so
you can specify that on a per-collection basis.
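Concretely, that looks something like this in the server config (an
abridged sketch from memory; section and option names should be
checked against the fs.conf documentation):

    # global default, used at trove initialize time before any
    # collection is known
    <Defaults>
        TroveMethod alt-aio
    </Defaults>

    # per-collection override (other required filesystem options
    # omitted here)
    <FileSystem>
        Name pvfs2-fs
        <StorageHints>
            TroveMethod alt-aio
        </StorageHints>
    </FileSystem>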
I ran into a problem today with the 2.6.0 release. This happened to
show up in the read04 LTP test, but not reliably. I have attached a
test program that I think does trigger it reliably, though.
When run on ext3:
/home/pcarns ./testme /tmp/foo.txt
read returned: 7,
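The attached program isn't preserved in this archive, but a minimal
testcase in the same spirit (hypothetical, not necessarily Phil's
actual one) writes a short string, seeks back, and checks what read()
returns:

    /* NOT the attached testcase; a minimal program in the same
     * spirit: write 7 bytes, seek back, and report the read() result */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *msg = "foobar\n";   /* 7 bytes */
        char buf[64];
        int fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        write(fd, msg, strlen(msg));
        lseek(fd, 0, SEEK_SET);
        printf("read returned: %d\n", (int)read(fd, buf, sizeof(buf)));
        close(fd);
        return 0;
    }

On ext3 a program like this prints "read returned: 7", matching the
output shown above.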
No. Both alt aio and the normal dbpf method sync as a separate step
after the aio list operation completes.
This is technically possible with alt aio, though - you would just need
to pass a flag through to tell the I/O thread to sync after the
pwrite(). That would probably be pretty helpful,
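In code terms the idea is roughly the following (a hand sketch with
made-up names, not the actual alt-aio code):

    /* sketch of the idea only: the I/O thread performs the write
     * and then, if the caller asked for it, syncs in the same
     * context rather than leaving a separate blocking sync step
     * to stall other progress */
    #define _XOPEN_SOURCE 600
    #include <unistd.h>
    #include <sys/types.h>

    struct io_op {
        int     fd;
        void   *buf;
        size_t  len;
        off_t   off;
        int     do_sync;    /* hypothetical flag passed through */
        ssize_t result;
    };

    static void service_write(struct io_op *op)
    {
        op->result = pwrite(op->fd, op->buf, op->len, op->off);
        if (op->result >= 0 && op->do_sync)
            fdatasync(op->fd);  /* sync in the I/O thread's context */
    }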
That's what I was thinking -- that we could ask the I/O thread to do the
syncing rather than stalling out other progress.
Wanna try it and see if it helps :)?
Rob
Phil Carns wrote:
No. Both alt aio and the normal dbpf method sync as a separate step
after the aio list operation completes.
These are great results Phil. It's nice to have you guys doing this
testing. Did you get a chance to run any of your tests with the
threaded version of pvfs2-client? I added a -threaded option, which
runs pvfs2-client-core-threaded instead of pvfs2-client-core. For
the case where you're
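(Assuming the flag is taken by the pvfs2-client wrapper, the
invocation is just:)

    # start the client with the threaded core instead of the default
    pvfs2-client -threaded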
On Nov 29, 2006, at 3:44 PM, Rob Ross wrote:
That's what I was thinking -- that we could ask the I/O thread to
do the syncing rather than stalling out other progress.
Wanna try it and see if it helps :)?
Rob
Phil Carns wrote:
No. Both alt aio and the normal dbpf method sync as a separate
Hi guys,
I am really sorry about this. I am surprised we did not catch this
earlier. This was basically introduced by the file.c/bufmap.c cleanups
that I had done a while back.
The attached patch should fix this error.
thanks for the testcase, Phil!
Murali
On 11/29/06, Phil Carns [EMAIL PROTECTED]
Hi Phil,
Thanks for running these tests.
I think this buffer size will be dependent on the machine configuration, right?
If we work out a simple formula for the buffer size based on, say,
memory b/w (and/or latency) and network b/w (and/or latency), we could
plug that in as a sane default (bandwidth .
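For example, one reading of that (my interpretation, not something
settled in this thread) is to default the buffer size to the network
bandwidth-delay product; with gigabit ethernet at ~125 MB/s and an
assumed 1 ms round trip, that works out to a bit over 120 KB:

    /* bandwidth-delay product as a default buffer size; both
     * inputs below are assumptions for illustration */
    #include <stdio.h>

    int main(void)
    {
        double net_bw = 125e6;  /* bytes/sec: ~1 gigabit ethernet */
        double rtt    = 1e-3;   /* seconds: assumed round trip    */
        double bufsz  = net_bw * rtt;

        printf("suggested buffer size: %.0f bytes (~%.0f KB)\n",
               bufsz, bufsz / 1024);
        return 0;
    }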
Hi Phil,
Attached patch fixes the read buffer bug that you had mentioned and
also implements the variable sized buffer counts and lengths that we
can pass as command line options to pvfs2-client-core.
I did not implement module load-time options for buffer size settings
since that is fairly
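Usage would presumably look something like this (the flag names below
are placeholders; the real option names are whatever the patch
defines):

    # hypothetical option names for buffer count and size
    pvfs2-client-core --buffer-count=16 --buffer-size=4194304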