[OMPI users] freeing attributes on communicators

2009-03-12 Thread Robert Latham
Hello all. I'm using openmpi-1.3 in this example, linux, gcc-4.3.2, configured with nothing special. If I run the following simple C code under valgrind, single process, I get some errors about reading and writing already-freed memory: --- #include #include int delete_
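For context, a minimal, self-contained sketch of the communicator-attribute pattern this thread is about: a keyval with a delete callback, an attribute set on a duplicated communicator, and the free path that valgrind was complaining about. The payload and values are illustrative, not the poster's original (truncated) program.

#include <stdlib.h>
#include <mpi.h>

/* Delete callback: runs when the attribute is deleted or the
 * communicator carrying it is freed; it releases the heap memory
 * that was attached. */
static int delete_fn(MPI_Comm comm, int keyval, void *attr_val, void *extra)
{
    free(attr_val);
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    int keyval;
    int *payload;
    MPI_Comm dup_comm;

    MPI_Init(&argc, &argv);

    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, delete_fn, &keyval, NULL);

    payload = malloc(sizeof(int));
    *payload = 42;

    MPI_Comm_dup(MPI_COMM_WORLD, &dup_comm);
    MPI_Comm_set_attr(dup_comm, keyval, payload);

    MPI_Comm_free(&dup_comm);        /* delete_fn fires here */
    MPI_Comm_free_keyval(&keyval);

    MPI_Finalize();
    return 0;
}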

Re: [OMPI users] MPI_File_write_ordered does not truncate files

2009-02-18 Thread Robert Latham
On Wed, Feb 18, 2009 at 02:44:09PM -0800, Brian Austin wrote: > I don't know whether this is the correct behavior, but it is the > correct origin of my confusion. > I suspected this would be attributed to the standard, but it is > contrary to what I'm used to with C's fopen: > I expected MPI_File_

Re: [OMPI users] MPI_File_write_ordered does not truncate files

2009-02-18 Thread Robert Latham
On Wed, Feb 18, 2009 at 02:24:03PM -0700, Ralph Castain wrote: > Hi Rob > > Guess I'll display my own ignorance here: > >>> MPI_File_open( MPI_COMM_WORLD, "foo.txt", >>>MPI_MODE_CREATE | MPI_MODE_WRONLY, >>>MPI_INFO_NULL, &fh ); > > > Since the file was opened with MPI_MODE
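The behaviour being discussed: MPI_MODE_CREATE creates the file if it is missing but, unlike fopen(path, "w"), never truncates an existing one. A sketch of the usual workaround, truncating explicitly after the collective open; the file name is just the one quoted above.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;

    MPI_Init(&argc, &argv);

    /* Creates the file if needed, but leaves old contents in place. */
    MPI_File_open(MPI_COMM_WORLD, "foo.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Collective truncation to zero bytes gives the fopen("w")
     * semantics the original poster expected. */
    MPI_File_set_size(fh, 0);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}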

Re: [OMPI users] MPI_File_write_ordered does not truncate files

2009-02-18 Thread Robert Latham
On Wed, Feb 18, 2009 at 11:10:51AM -0800, Brian Austin wrote: > >> Can you confirm - are you -really- using 1.1.2??? > >> > >> You might consider updating to something more recent, like 1.3.0 or > >>at least 1.2.8. It would be interesting to know if you see the same > >> problem. > > > Also, if yo

Re: [OMPI users] MPI_Type_create_darray causes MPI_File_set_view to crash when ndims=2, array_of_gsizes[0]>array_of_gsizes[1]

2008-11-12 Thread Robert Latham
On Fri, Oct 31, 2008 at 11:19:39AM -0400, Antonio Molins wrote: > Hi again, > > The problem in a nutshell: it looks like, when I use > MPI_Type_create_darray with an argument array_of_gsizes where > array_of_gsizes[0]>array_of_gsizes[1], the datatype returned goes > through MPI_Type_commit()
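For readers following along, a sketch of the call pattern being reported: a 2-D block-distributed darray with array_of_gsizes[0] > array_of_gsizes[1], committed and handed to MPI_File_set_view. The global sizes and the 1 x nprocs process grid below are illustrative, not the poster's actual values.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_File fh;
    MPI_Datatype filetype;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Global 2-D array, deliberately taller than it is wide. */
    int gsizes[2]   = { 200, 100 };
    int distribs[2] = { MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK };
    int dargs[2]    = { MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG };
    int psizes[2]   = { 1, nprocs };

    MPI_Type_create_darray(nprocs, rank, 2, gsizes, distribs, dargs,
                           psizes, MPI_ORDER_C, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "darray.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* The crash reported in this thread happened at the set_view call. */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}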

Re: [OMPI users] ADIOI_GEN_DELETE

2008-11-12 Thread Robert Latham
On Thu, Oct 23, 2008 at 12:41:45AM -0200, Davi Vercillo C. Garcia (ダヴィ) wrote: > Hi, > > I'm trying to run a code using OpenMPI and I'm getting this error: > > ADIOI_GEN_DELETE (line 22): **io No such file or directory > > I don't know why this occurs, I only know this happens when I use more >

Re: [OMPI users] bug in MPI_File_get_position_shared ?

2008-09-15 Thread Robert Latham
On Sat, Aug 16, 2008 at 08:05:14AM -0400, Jeff Squyres wrote: > On Aug 13, 2008, at 7:06 PM, Yvan Fournier wrote: > >> I seem to have encountered a bug in MPI-IO, in which >> MPI_File_get_position_shared hangs when called by multiple processes >> in >> a communicator. It can be illustrated by the

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-24 Thread Robert Latham
On Wed, Jul 23, 2008 at 09:47:56AM -0400, Robert Kubrick wrote: > HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I > think the API is easier than direct MPI-I/O, maybe even easier than raw > read/writes given its support for hierarchical objects and metadata. In addition to t

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-24 Thread Robert Latham
On Wed, Jul 23, 2008 at 01:28:53PM +0100, Neil Storer wrote: > Unless you have a parallel filesystem, such as GPFS, which is > well-defined and does support file-locking, I would suggest writing to > different files, or doing I/O via a single MPI task, or via MPI-IO. I concur that NFS for a parall

Re: [OMPI users] Parallel I/O with MPI-1

2008-07-24 Thread Robert Latham
On Wed, Jul 23, 2008 at 02:24:03PM +0200, Gabriele Fatigati wrote: > >You could always effect your own parallel IO (e.g., use MPI sends and > receives to coordinate parallel reads and writes), but >why? It's already > done in the MPI-IO implementation. > > Just a moment: you're saying that i can

Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Robert Latham
On Thu, May 29, 2008 at 04:48:49PM -0300, Davi Vercillo C. Garcia wrote: > > Oh, I see you want to use ordered i/o in your application. PVFS > > doesn't support that mode. However, since you know how much data each > > process wants to write, a combination of MPI_Scan (to compute each > > process
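The replacement for ordered-mode I/O suggested above looks roughly like the following sketch: an MPI_Scan over the per-process byte counts yields each rank's file offset, and MPI_File_write_at_all does the collective write. Buffer contents and file name are invented for illustration.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;
    char buf[64];
    long long count, scan_sum;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank produces a different amount of data. */
    snprintf(buf, sizeof(buf), "rank %d says hello\n", rank);
    count = (long long)strlen(buf);

    /* Inclusive prefix sum of the byte counts; subtracting our own
     * count gives the exclusive prefix, i.e. where this rank starts,
     * reproducing the rank ordering of MPI_File_write_ordered. */
    MPI_Scan(&count, &scan_sum, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);
    offset = (MPI_Offset)(scan_sum - count);

    MPI_File_open(MPI_COMM_WORLD, "out.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, offset, buf, (int)count, MPI_CHAR,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}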

Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Robert Latham
On Thu, May 29, 2008 at 04:24:18PM -0300, Davi Vercillo C. Garcia wrote: > Hi, > > I'm trying to run my program in my environment and some problems are > happening. My environment is based on PVFS2 over NFS (PVFS is mounted > over NFS partition), OpenMPI and Ubuntu. My program uses MPI-IO and > BZ

Re: [OMPI users] MPI-IO problems

2008-01-31 Thread Robert Latham
On Mon, Jan 28, 2008 at 03:26:14PM -0800, R C wrote: > Hi, > I compiled a molecular dynamics program DLPOLY3.09 on an AMD64 cluster > running > openmpi 1.2.4 with Portland group compilers. The program seems to run alright, > however, each processor outputs: > > ADIOI_GEN_DELETE (line 22): **io N

Re: [OMPI users] problems with flash

2008-01-31 Thread Robert Latham
On Tue, Jan 22, 2008 at 11:25:25AM -0500, Brock Palen wrote: > Has anyone had trouble using flash with openmpi? We get segfaults > when flash tries to write checkpoints. segfaults are good if you also get core files. do the backtraces from those core files look at all interesting? ==rob --

Re: [OMPI users] ADIOI_Set_lock failure

2008-01-31 Thread Robert Latham
On Fri, Jan 18, 2008 at 07:44:12PM -0500, Jeff Squyres wrote: > FWIW, you might want to ask the ROMIO maintainers if this is a known > problem. I unfortunately have no idea. :-\ Sorry, we're not much more help either... I know hdf5+pvfs+openMPI works. What if you run the test programs in the

Re: [OMPI users] MPI_Request and attributes

2007-11-05 Thread Robert Latham
On Fri, Nov 02, 2007 at 12:18:54PM +0100, Oleg Morajko wrote: > Is there any standard way of attaching/retrieving attributes to MPI_Request > object? > > Eg. Typically there are dynamic user data created when starting the > asynchronous operation and freed when it completes. It would be convenient
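Since the standard offers no attribute caching on MPI_Request, the usual workaround is to wrap the request and the per-operation data in a small struct of your own; the names below (tracked_req, isend_tracked, wait_tracked) are invented for this sketch and are not part of any MPI API.

#include <stdlib.h>
#include <mpi.h>

/* Wrapper that plays the role of a "request attribute": it carries
 * the request plus whatever state belongs to this operation, and the
 * state is freed by hand once the operation completes. */
struct tracked_req {
    MPI_Request req;
    void       *user_data;
};

static struct tracked_req *isend_tracked(void *buf, int count,
                                         MPI_Datatype type, int dest,
                                         int tag, MPI_Comm comm,
                                         void *user_data)
{
    struct tracked_req *t = malloc(sizeof(*t));
    t->user_data = user_data;
    MPI_Isend(buf, count, type, dest, tag, comm, &t->req);
    return t;
}

static void wait_tracked(struct tracked_req *t)
{
    MPI_Wait(&t->req, MPI_STATUS_IGNORE);
    free(t->user_data);   /* the "delete callback", done manually */
    free(t);
}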

Re: [OMPI users] mpiio romio etc

2007-09-14 Thread Robert Latham
On Fri, Sep 14, 2007 at 02:31:51PM -0400, Jeff Squyres wrote: > Ok. Maybe we'll just make a hard-coded string somewhere "ROMIO from > MPICH2 vABC, on AA/BB/" or somesuch. That'll at least give some > indication of what version you've got. That sort-of reminds me: ROMIO (well, all of MPI

Re: [OMPI users] mpiio romio etc

2007-09-14 Thread Robert Latham
On Fri, Sep 14, 2007 at 02:16:46PM -0400, Jeff Squyres wrote: > Rob -- is there a public constant/symbol somewhere where we can > access some form of ROMIO's version number? If so, we can also make > that query-able via ompi_info. There really isn't. We used to have a VERSION variable in con

Re: [OMPI users] mpiio romio etc

2007-09-14 Thread Robert Latham
On Fri, Sep 07, 2007 at 10:18:55AM -0400, Brock Palen wrote: > Is there a way to find out which ADIO options romio was built with? not easily. You can use 'nm' and look at the symbols :> > Also does OpenMPI's romio come with pvfs2 support included? What > about Luster or GPFS. OpenMPI has ship

Re: [OMPI users] buildsystem / adio-lustre-mpich2-v02.patch

2007-08-30 Thread Robert Latham
On Sun, Aug 26, 2007 at 06:44:18PM +0200, Bernd Schubert wrote: > I'm presently trying to add lustre support to open-mpi's romio using this > patch http://ft.ornl.gov/projects/io/src/adio-lustre-mpich2-v02.patch. > > It basically applies, only a few C files have been renamed in open-mpi, but > the

Re: [OMPI users] DataTypes with "holes" for writing files

2007-07-18 Thread Robert Latham
On Tue, Jul 10, 2007 at 04:36:01PM +, jody wrote: > I think there is still some problem. > I create different datatypes by resizing MPI_SHORT with > different negative lower bounds (depending on the rank) > and the same extent (only depending on the number of processes). > > However, I get an
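One way to express the interleaving jody describes, sketched here with illustrative sizes: instead of a negative lower bound per rank, this version keeps lb = 0, stretches the extent to nprocs shorts, and shifts each rank's start through the displacement argument of MPI_File_set_view. Either construction needs resized-type support in the underlying ROMIO.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Datatype filetype;
    MPI_File fh;
    short vals[10];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int i = 0; i < 10; i++)
        vals[i] = (short)(rank * 100 + i);

    /* Rank r owns every nprocs-th short in the file. */
    MPI_Type_create_resized(MPI_SHORT, 0,
                            (MPI_Aint)(nprocs * sizeof(short)), &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "interleaved.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, (MPI_Offset)(rank * sizeof(short)),
                      MPI_SHORT, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, vals, 10, MPI_SHORT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}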

Re: [OMPI users] DataTypes with "holes" for writing files

2007-07-13 Thread Robert Latham
On Tue, Jul 10, 2007 at 04:36:01PM +, jody wrote: > Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks > [aim-nano_02:9] MPI_ABORT invoked on rank 0 in communicator > MPI_COMM_WORLD with errorcode 1 Hi Jody: OpenMPI uses an old version of ROMIO. You get this error becaus

Re: [OMPI users] nfs romio

2007-07-05 Thread Robert Latham
On Mon, Jul 02, 2007 at 12:49:27PM -0500, Adams, Samuel D Contr AFRL/HEDR wrote: > Anyway, so if anyone can tell me how I should configure my NFS server, > or OpenMPI to make ROMIO work properly, I would appreciate it. Well, as Jeff said, the only safe way to run NFS servers for ROMIO is by dis

Re: [OMPI users] MPI_Type_create_subarray fails!

2007-02-02 Thread Robert Latham
On Tue, Jan 30, 2007 at 04:55:09PM -0500, Ivan de Jesus Deras Tabora wrote: > Then I find all the references to the MPI_Type_create_subarray and > create a little program just to test that part of the code, the code I > created is: ... > After running this little program using mpirun, it raises the
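For reference, a minimal, self-contained use of MPI_Type_create_subarray; the array sizes are illustrative, since the poster's values do not survive in this preview.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Datatype subarray;

    MPI_Init(&argc, &argv);

    /* A 2x3 patch starting at (1,2) of a 10x10 global array of ints. */
    int sizes[2]    = { 10, 10 };
    int subsizes[2] = { 2, 3 };
    int starts[2]   = { 1, 2 };

    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_INT, &subarray);
    MPI_Type_commit(&subarray);

    /* ... use the type with MPI_File_set_view or point-to-point ... */

    MPI_Type_free(&subarray);
    MPI_Finalize();
    return 0;
}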

Re: [OMPI users] external32 i/o not implemented?

2007-01-09 Thread Robert Latham
On Tue, Jan 09, 2007 at 02:53:24PM -0700, Tom Lund wrote: > Rob, >Thank you for your informative reply. I had no luck finding the > external32 data representation in any of several mpi implementations and > thus I do need to devise an alternative strategy. Do you know of a good > reference
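One possible alternative strategy, assuming the implementation provides MPI_Pack_external (in-memory external32 packing is separate from the unimplemented file data representation), is to pack into the portable layout yourself and write the result as plain bytes. A hedged sketch:

#include <mpi.h>

/* Pack one double into external32 (big-endian IEEE) layout in memory
 * so it can be written portably as MPI_BYTE data. */
static MPI_Aint pack_double_external32(double value, void *outbuf,
                                       MPI_Aint outsize)
{
    MPI_Aint position = 0;
    MPI_Pack_external("external32", &value, 1, MPI_DOUBLE,
                      outbuf, outsize, &position);
    return position;   /* number of bytes actually used */
}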

Re: [OMPI users] external32 i/o not implemented?

2007-01-09 Thread Robert Latham
On Mon, Jan 08, 2007 at 02:32:14PM -0700, Tom Lund wrote: > Rainer, >Thank you for taking time to reply to my query. Do I understand > correctly that external32 data representation for i/o is not > implemented? I am puzzled since the MPI-2 standard clearly indicates > the existence of ext

Re: [OMPI users] pvfs2 and romio

2006-08-31 Thread Robert Latham
On Mon, Aug 14, 2006 at 10:57:34AM -0400, Brock Palen wrote: > We will be evaluating pvfs2 (www.pvfs.org) in the future. Are there > any special considerations to take to get romio support with openmpi > with pvfs2 ? Hi Since I wrote the ad_pvfs2 driver for ROMIO, and spend a lot of time on P

Re: [OMPI users] MPI_Info for MPI_Open_port

2006-07-18 Thread Robert Latham
On Tue, Jul 11, 2006 at 12:14:51PM -0400, Abhishek Agarwal wrote: > Hello, > > Is there a way of providing a specific port number in MPI_Info when using a > MPI_Open_port command so that clients know which port number to connect. The other replies have covered this pretty well but if you are dea

Re: [OMPI users] comm_join and singleton init

2006-05-03 Thread Robert Latham
On Tue, Mar 14, 2006 at 12:37:52PM -0600, Edgar Gabriel wrote: > I think I know what goes wrong. Since they are in different 'universes', > they will have exactly the same 'Open MPI name', and therefore the > algorithm in intercomm_merge can not determine which process should be > first and whic

Re: [OMPI users] pnetcdf & Open MPI

2006-05-03 Thread Robert Latham
On Tue, May 02, 2006 at 10:32:56PM +0200, Dries Kimpe wrote: > It looks as if the problem is not really due to Open MPI, but to the > included ROM-IO: > > All tests fail with the same error message: > > For example, test/test_double/test_write shows: > > Testing write ... Error: Unsupported data

Re: [OMPI users] MPI_Comm_connect and singleton init

2006-03-14 Thread Robert Latham
On Tue, Mar 14, 2006 at 12:00:57PM -0600, Edgar Gabriel wrote: > you are touching here a difficult area in Open MPI: I don't doubt it. I haven't found an MPI implementation yet that does this without any quirks or oddities :> > - name publishing across independent jobs does unfortunately not work

[OMPI users] MPI_Comm_connect and singleton init

2006-03-14 Thread Robert Latham
Hello In playing around with process management routines, I found another issue. This one might very well be operator error, or something implementation specific. I've got two processes (a and b), linked with openmpi, but started independently (no mpiexec). - A starts up and calls MPI_Init - A
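A sketch of the pattern being exercised in this thread (not the original test program): two singletons, one acting as server with MPI_Open_port/MPI_Comm_accept, the other connecting with MPI_Comm_connect using the port string passed out of band, here on the command line.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);   /* singleton init: no mpiexec involved */

    if (argc > 1) {
        /* process B: the port name arrives on the command line */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0,
                         MPI_COMM_SELF, &intercomm);
    } else {
        /* process A: open a port and wait for one connection */
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("connect with: %s\n", port);
        fflush(stdout);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
        MPI_Close_port(port);
    }

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}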

[OMPI users] comm_join and singleton init

2006-03-14 Thread Robert Latham
Hi I've got a bit of an odd bug here. I've been playing around with MPI process management routines and I noticed the following behavior with openmpi-1.0.1: Two processes (a and b), linked with ompi, but started independently (no mpiexec, just started the programs directly). - a and b: call MPI_
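For comparison with the connect/accept thread above, a hedged sketch of the MPI_Comm_join variant: both processes set up an ordinary connected TCP socket themselves (socket()/connect()/accept(), omitted here) and hand their end to MPI_Comm_join; the helper name is invented for illustration.

#include <mpi.h>

static MPI_Comm join_over_socket(int connected_fd, int high)
{
    MPI_Comm intercomm, merged;

    /* Turn one end of a connected socket into an intercommunicator
     * with the process on the other end. */
    MPI_Comm_join(connected_fd, &intercomm);

    /* Merge into a single intracommunicator; the two sides should pass
     * different 'high' values so the rank ordering is well defined
     * (the ambiguity discussed in the 2006-05-03 reply above). */
    MPI_Intercomm_merge(intercomm, high, &merged);
    MPI_Comm_free(&intercomm);
    return merged;
}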