Hi Leigh,

I've found that the overhead from writing to the trace files usually
isn't noticeable unless you hit a pathological case where there are
many read/write operations with small amounts of data. For instance,
if you intend to do 1MB writes but they get broken down into 4KB
writes, you end up with 256 times as many calls, and the per-call
trace overhead becomes significant.
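
To make that arithmetic concrete, here's a minimal sketch of why
splitting writes multiplies the trace volume (traced_write is a
hypothetical wrapper, just standing in for what a tracing library
does on every call):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Hypothetical tracing wrapper: each write() also appends one
     * record to a trace file, so the tracing cost scales with the
     * number of calls, not the number of bytes moved. */
    static FILE *trace;

    static ssize_t traced_write(int fd, const void *buf, size_t count)
    {
        fprintf(trace, "write fd=%d count=%zu\n", fd, count);
        return write(fd, buf, count);
    }

    int main(void)
    {
        const size_t total = 1 << 20;  /* the 1MB you intended */
        const size_t chunk = 4 << 10;  /* the 4KB you actually get */
        char *buf = calloc(1, total);
        int fd = open("data.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        trace = fopen("trace.txt", "w");

        /* 1MB / 4KB = 256 calls, hence 256 trace records instead of
         * the single record one contiguous 1MB write would emit. */
        for (size_t off = 0; off < total; off += chunk)
            traced_write(fd, buf + off, chunk);

        fclose(trace);
        close(fd);
        free(buf);
        return 0;
    }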

There could also be some overhead associated with opening 30K files,
but this should occur during MPI_Init, so you can easily exclude it
from any timings you are doing by starting your timer after MPI_Init
(which you would have to do anyway if you are using MPI_Wtime).
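
For example, a minimal timing pattern along those lines (do_io() is
just a placeholder for whatever I/O phase you actually measure):

    #include <stdio.h>
    #include <mpi.h>

    /* Placeholder for the I/O phase you want to time. */
    static void do_io(void) { /* ... your HDF5/MPI-IO calls ... */ }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);    /* trace-file opens land here */

        double t0 = MPI_Wtime();   /* timer starts after MPI_Init */
        do_io();
        double t1 = MPI_Wtime();

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("I/O phase: %f seconds\n", t1 - t0);

        MPI_Finalize();
        return 0;
    }

That way the cost of opening the per-rank trace files stays outside
the measured interval.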

Mark

On Wed, Apr 6, 2011 at 6:49 PM, Leigh Orf <[email protected]> wrote:
> On Wed, Apr 6, 2011 at 2:39 PM, Rob Latham <[email protected]> wrote:
>>
>> On Wed, Apr 06, 2011 at 02:01:17PM -0600, Leigh Orf wrote:
>> > > You have to relink your application against the libipm.a that this
>> > > produces (or you can enable the shared library and do an LD_PRELOAD).
>> > > After your application runs, you'll have a text file for each MPI rank
>> > > with the POSIX calls and their arguments.
>> >
>> > Is it possible that having 30,000 text files being written could
>> > actually affect timings when trying to ascertain what's going on
>> > with I/O? If so, is there any way around this?
>>
>> It's the classic tradeoff: you can have a lightweight tracing approach
>> that generates summaries of the behavior or you can record every
>> operation (and potentially perturb the results).
>
> I was hoping that perhaps the writes were buffered, and since the files
> are small, performance might not be impacted beyond opening each file
> and flushing at the end. As far as I know, there is no way to profile
> the profiling software with the profiling software!
>
>>
>> the Argonne 'darshan' project might give enough of a big picture
>> summary, but it was designed foremost to be lightweight, not
>> exhaustive:
>>
>> http://press.mcs.anl.gov/darshan/
>
> Thank you, I will check it out.
>
> Leigh
>
>>
>> --
>> Rob Latham
>> Mathematics and Computer Science Division
>> Argonne National Lab, IL USA
>
>
>
> --
> Leigh Orf
> Associate Professor of Atmospheric Science
> Department of Geology and Meteorology
> Central Michigan University
> Currently on sabbatical at the National Center for Atmospheric Research
> in Boulder, CO
> NCAR office phone: (303) 497-8200

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
