On Feb 22, 2006, at 7:46 PM, Avery Ching wrote:

Hi guys.  I'm sending you a link to a somewhat complex I/O
benchmark I wrote for doing all kinds of noncontiguous I/O through
MPI-IO.  It's called HPIO and runs all sorts of unusual tests.  It is
really overkill for this small I/O bug, but the nice thing is that it
has a verify mode which allows me to check the values written and
read against the actual buffer.

Okay, there is a README in the top-level directory, but basically you
just need to change the Makefile to point to your MPICH2 install
(compiled against PVFS2), compile, and then run a very simple mem=contig
to file=contig test (512 bytes).

Basically, you can run something like

mpiexec -n 1 ./hpio-debug -t 10 -m 1 -n 1000  -v 1 -c 1 -d
pvfs2:/mnt/pvfs2


I get the following when I run the above command:

#################### Experimental Settings #################
procs                            = 1
dir                              = pvfs2:/home/slang/pvfs2.mnt/
default: region count            = 1
default: region size             = 512
default: region spacing          = 16
reps                             = 1
rep maximum time (secs)          = 600.000000
average method                   = 0
test: region_count               = 0
test: region_size                = 1
test: region_spacing             = 0
(M) Contig      | (F) Contig     = 1
(M) NonContig   | (F) Contig     = 0
(M) Contig      | (F) NonContig  = 0
(M) NonContig   | (F) NonContig  = 0
posix I/O                        = 0
data sieving I/O                 = 0
two phase I/O                    = 0
list I/O                         = 0
datatype I/O                     = 1
rw: WRITE                        = 1
rw: READ                         = 0
verify                           = 1
minimum parameter value          = 0
enable fsync                     = 0
fsync method                     = posix
enable cache                     = 0
cache size (MBytes)              = 2048
settle time (secs between runs)  = 0
calculate space                  = 0
total data accessed              = 512 bytes - 0.000488 MBytes
total data extent                = 528 bytes - 0.000504 MBytes
############################################################

write | region_size | c-c | datatype
----------------time (seconds)--------------|-bandwidth (MB/s)|--- test type---
  open |   io   | sync  | close | total |  IO   | IOsyn | region_size
  0.073 |  0.002 |  0.000 |  0.000 |  0.075 |  0.211 |  0.210 | 512
  0.038 |  0.002 |  0.000 |  0.000 |  0.040 |  0.409 |  0.408 | 1024
  0.045 |  0.002 |  0.000 |  0.000 |  0.048 |  0.809 |  0.806 | 2048

--

I assume this means it completed successfully?  How many pvfs2-servers do you have running?

-sam

And it will error with

count=0,elements=0
region_count=1,region_size=512

which means that the status reported count=0, elements=0 from the write
operation.  If we look at the ad_pvfs2_write.c code, we see that this is
set in the line

MPIR_Status_set_bytes(status, datatype, (int)resp_io.total_completed);

And when I look further at the resp struct, total_completed=0
(instead of the 512 it should be).

This seems to happen only in the small I/O case.  For normal
I/O, total_completed seems to be set correctly.  To further verify
this, if you run the test without the -d option (using UFS instead of
PVFS2)

mpiexec -n 1 ./hpio-debug -t 10 -m 1 -n 1000  -v 1 -c 1

it will complete (and verify the buffer) since the status is set
correctly when using a UFS.

Anyway, grab the benchmark from

http://www.ece.northwestern.edu/~aching/hpio.0.9.tar.gz

Let me know how else I can help.

Avery

On Wed, 2006-02-22 at 17:14 -0600, Rob Ross wrote:
nevermind - i'm not reading carefully. -- rob

Robert Latham wrote:
On Wed, Feb 22, 2006 at 03:22:35PM -0600, Avery Ching wrote:
Sure.  I actually have a single client just doing a small contiguous write of 50 bytes, but I think it occurs for pretty much any small I/O operation I do.  It seems to happen with any number of PVFS2 servers.  Both the memory request and file request structures are contig, I think.
The total_completed isn't filled in during the write.  I think it
happens during the read as well (although I'm not quite sure on that).
I'm using the HEAD CVS version (I just did an update this morning).

Avery,

Can you send me your benchmark?  One of our nightly tests is a
noncontiguous benchmark that writes small 2k pieces out to a file,
exercising {noncontig,contig} in {memory,file}, and that test has been
passing for a while now (since we fixed the immediate completion for
the zero-length I/O case).

If you've found a bug, we'll have to beef up our noncontiguous and
small I/O test cases.

==rob



_______________________________________________
Pvfs2-developers mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers
