Hello Dr. Murali,
[cc: pvfs2-users]

[meta: removed irrelevant history]
[meta: requesting new topic: "PVFS2 performance analysis: how-to"]

Thank you for your feedback from about five days ago.

First, I wish you and all pvfs2-users a happy new year, and my sincere
best wishes for the future.

In your reply, you wrote:

> There are tons of benchmark programs, each of which measures different
> things. PVFS is really good at parallel I/O, so it would be best if you
> could run MPI I/O programs.


I understand that benchmark programs should support MPI-IO to measure
PVFS2 properly. I have found the following tools specific to I/O
throughput measurement in parallel environments:
- b_eff_io
- IOR
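
For concreteness, here is a minimal MPI-IO sketch of the access pattern
such tools exercise: every rank writes a disjoint block of one shared
file. The file path, the ROMIO-style "pvfs2:" prefix, and the block size
are my own assumptions, not taken from IOR or b_eff_io:

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int blk = 65536;              /* 64 KiB per process (assumed) */
    char *buf = malloc(blk);
    memset(buf, 'a', blk);

    /* The "pvfs2:" prefix asks ROMIO to use its PVFS2 driver directly;
     * a plain path through the mounted file system should also work. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: each rank covers its own disjoint region. */
    MPI_Offset off = (MPI_Offset)rank * blk;
    MPI_File_write_at_all(fh, off, buf, blk, MPI_BYTE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}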

> (Multi-threaded apps running on the same CPU using the kernel/POSIX
> interfaces are not a good workload for us, because we haven't spent time
> figuring out where performance loss occurs and how to fix it.)


Do I understand correctly that I/O performance of POSIX pthreads-based
programs has not yet been tested on PVFS2?

> There is a program called iox.c under pvfs/test/posix/. You can run
> that MPI program with the -m 0 and -s 1 options to get an idea of the
> aggregate read/write bandwidth to a single file.


I tried to compile the program (mpicc -o a.out iox.c), but I got this
error:
iox.c:37: error: 'O_LARGEFILE' undeclared here (not in a function)
(I have Open MPI installed, but not MPICH2.)
I had to use the following command instead: mpicc -D_GNU_SOURCE -o a.out iox.c
Please see appendix [PS1-iox.c] for the results of mpirun on a single machine.
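
By the way, the cause of the error seems to be that O_LARGEFILE is a GNU
extension, so glibc's <fcntl.h> only defines it when _GNU_SOURCE is set;
defining the macro at the top of the source file should be equivalent to
passing -D_GNU_SOURCE:

/* Equivalent in-source fix: the feature-test macro must appear
 * before the first #include so <fcntl.h> exposes O_LARGEFILE. */
#define _GNU_SOURCE
#include <fcntl.h>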
I have not yet gone through the code carefully, so I would highly
appreciate a brief primer on how iox.c works.
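
In the meantime, my working assumption (not verified against the iox.c
source) is that it times the I/O phase on every rank, reduces to the
slowest rank's time, and reports total bytes divided by that maximum; the
figures in appendix [PS1-iox.c] are consistent with this
(131072 B / 0.001436 s ≈ 91.28 Mbytes/sec). A self-contained sketch of
that pattern, with a sleep standing in for the real I/O:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long blk_sz = 65536;          /* bytes moved per rank (assumed) */

    double t0 = MPI_Wtime();
    usleep(1000);                       /* stand-in for the timed read/write */
    double t = MPI_Wtime() - t0;

    /* Aggregate bandwidth is limited by the slowest rank. */
    double max_t;
    MPI_Reduce(&t, &max_t, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Write bandwidth = %g Mbytes/sec\n",
               (double)blk_sz * nprocs / max_t / 1.0e6);

    MPI_Finalize();
    return 0;
}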

> It would be best if you could tell us what your goals are, what your
> eventual workload looks like, etc., to save you the trouble of running
> all these benchmarks before determining whether it is appropriate
> for you.


Since I am new to this topic (parallel file systems), I am not sure
exactly what I should research. My general goal is to analyze the I/O
throughput of PVFS2, and optionally to compare it with Apache's Hadoop
and other parallel file systems. Ideally, I would run a set of operations
(e.g. cp, ls, rm) on a large data set distributed across a 4-PC cluster
and measure the I/O throughput of each set of read/write operations.

This is my very basic theoretical approach to the problem; any
suggestions are very welcome. A rough sketch of one such measurement
appears below.
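
To make the plan concrete, here is a hypothetical sketch of one such
measurement: timing a bulk sequential write to a PVFS2-mounted path
through the kernel interface. The mount point, file size, and block size
are placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const size_t blk = 1 << 20;         /* 1 MiB buffer (assumed) */
    const size_t total = 256 * blk;     /* 256 MiB test file (assumed) */
    char *buf = malloc(blk);
    memset(buf, 'x', blk);

    int fd = open("/mnt/pvfs2/testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < total; done += blk)
        if (write(fd, buf, blk) != (ssize_t)blk) { perror("write"); return 1; }
    close(fd);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("wrote %zu bytes in %.3f s (%.2f Mbytes/sec)\n",
           total, sec, total / sec / 1e6);
    free(buf);
    return 0;
}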

> (...) So if you need some
> sorting capabilities, you could write one yourself or modify the ones
> mentioned here to use MPI and run on Linux.
> http://research.microsoft.com/barc/SortBenchmark/


Thanks. I shall then work on porting such a sort benchmark to Linux.
Results will be posted when available.


> It should be possible for Lustre and PVFS2 to sit on the same kernel,
> since we take great pains to ensure that PVFS2's kernel module runs on
> most distro/vanilla kernels.
> That said, I don't know if that will impact performance for either due
> to interference/memory pressure, who knows? :)


I see. I will then postpone the Lustre installation until after the first
tests with PVFS2. In any case, I can still install Lustre on a single
machine. Results will be posted when available.


Thank you very much for your patience,

K. Honsali

---------------------------------------------------------------------------------------------------------------------------
[PS1-iox.c]

   mpirun -np 2 iox.out -m 0 -s 1
# Using read/write mode
# nr_procs = 2, nr_iter = 1, blk_sz = 65536, stream_ct = 1
# total_size = 131072
# Write:  min_t = 0.001239, max_t = 0.001436, mean_t = 0.001338, var_t = 0.000000
# Read:  min_t = 0.000159, max_t = 0.000163, mean_t = 0.000161, var_t = 0.000000
Write bandwidth = 91.2761 Mbytes/sec
Read bandwidth = 804.913 Mbytes/sec

   mpirun -np 20 iox.out -m 0 -s 1
# Using read/write mode
# nr_procs = 20, nr_iter = 1, blk_sz = 65536, stream_ct = 1
# total_size = 1310720
# Write:  min_t = 0.000764, max_t = 0.009601, mean_t = 0.003343, var_t = 0.000006
# Read:  min_t = 0.000157, max_t = 0.000295, mean_t = 0.000193, var_t = 0.000000
Write bandwidth = 136.517 Mbytes/sec
Read bandwidth = 4444.27 Mbytes/sec

   mpirun -np 200 iox.out -m 0 -s 1
# Using read/write mode
# nr_procs = 200, nr_iter = 1, blk_sz = 65536, stream_ct = 1
# total_size = 13107200
# Write:  min_t = 1.341870, max_t = 14.611615, mean_t = 6.855819, var_t = 4.484518
# Read:  min_t = 0.000152, max_t = 7.557576, mean_t = 0.725230, var_t = 2.805418
Write bandwidth = 0.89704 Mbytes/sec
Read bandwidth = 1.73431 Mbytes/sec
