On Mon, Jul 02, 2007 at 12:49:27PM -0500, Adams, Samuel D Contr AFRL/HEDR wrote:
> Anyway, so if anyone can tell me how I should configure my NFS server,
> or OpenMPI to make ROMIO work properly, I would appreciate it.
Well, as Jeff said, the only safe way to run NFS servers for ROMIO is by
On Tue, Jul 10, 2007 at 04:36:01PM, jody wrote:
> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks
> [aim-nano_02:9] MPI_ABORT invoked on rank 0 in communicator
> MPI_COMM_WORLD with errorcode 1
Hi Jody:
OpenMPI uses an old version of ROMIO. You get this error
On Fri, Sep 07, 2007 at 10:18:55AM -0400, Brock Palen wrote:
> Is there a way to find out which ADIO options romio was built with?
Not easily. You can use 'nm' and look at the symbols :>
> Also does OpenMPI's romio come with pvfs2 support included? What
> about Lustre or GPFS?
OpenMPI has
On Fri, Sep 14, 2007 at 02:16:46PM -0400, Jeff Squyres wrote:
> Rob -- is there a public constant/symbol somewhere where we can
> access some form of ROMIO's version number? If so, we can also make
> that query-able via ompi_info.
There really isn't. We used to have a VERSION variable in
On Fri, Sep 14, 2007 at 02:31:51PM -0400, Jeff Squyres wrote:
> Ok. Maybe we'll just make a hard-coded string somewhere "ROMIO from
> MPICH2 vABC, on AA/BB/" or somesuch. That'll at least give some
> indication of what version you've got.
That sort-of reminds me: ROMIO (well, all of
On Fri, Nov 02, 2007 at 12:18:54PM +0100, Oleg Morajko wrote:
> Is there any standard way of attaching/retrieving attributes to MPI_Request
> object?
>
> E.g., typically dynamic user data is created when starting the
> asynchronous operation and freed when it completes. It would be
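The reply is truncated here. For context, MPI's attribute-caching interface (keyvals) covers communicators, windows, and datatypes but not MPI_Request objects, so applications usually keep the association themselves. A minimal sketch of that workaround; the struct and helper names are illustrative, not part of any MPI interface:
---
#include <mpi.h>
#include <stdlib.h>

/* pair each request with the user data that belongs to it */
struct pending_op {
    MPI_Request req;
    void       *user_data;      /* allocated when the operation starts */
};

/* start a nonblocking send and remember the buffer so it can be freed later */
static void start_send(struct pending_op *op, int dest, MPI_Comm comm)
{
    op->user_data = calloc(1, 1024);
    MPI_Isend(op->user_data, 1024, MPI_BYTE, dest, 0, comm, &op->req);
}

/* complete the operation and release the data attached to it */
static void finish(struct pending_op *op)
{
    MPI_Wait(&op->req, MPI_STATUS_IGNORE);
    free(op->user_data);
    op->user_data = NULL;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            struct pending_op op;
            start_send(&op, 1, MPI_COMM_WORLD);
            finish(&op);
        } else if (rank == 1) {
            char buf[1024];
            MPI_Recv(buf, 1024, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }
    MPI_Finalize();
    return 0;
}
---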
On Thu, May 29, 2008 at 04:24:18PM -0300, Davi Vercillo C. Garcia wrote:
> Hi,
>
> I'm trying to run my program in my environment and some problems are
> happening. My environment is based on PVFS2 over NFS (PVFS is mounted
> over NFS partition), OpenMPI and Ubuntu. My program uses MPI-IO and
>
On Thu, May 29, 2008 at 04:48:49PM -0300, Davi Vercillo C. Garcia wrote:
> > Oh, I see you want to use ordered i/o in your application. PVFS
> > doesn't support that mode. However, since you know how much data each
> > process wants to write, a combination of MPI_Scan (to compute each
> >
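The rest of the reply is cut off, but the approach it points at is to let each process compute its own file offset with MPI_Scan and then write at that explicit offset, avoiding the unsupported shared-file-pointer ("ordered") mode. A minimal sketch, with a made-up byte count and file name:
---
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_File   fh;
    MPI_Offset my_offset;
    long nbytes = 1024;            /* bytes this rank will write (example) */
    long scan_sum = 0;
    char *buf;

    MPI_Init(&argc, &argv);

    buf = malloc(nbytes);
    memset(buf, 'x', nbytes);

    /* inclusive prefix sum of nbytes across ranks ... */
    MPI_Scan(&nbytes, &scan_sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    /* ... minus my own contribution gives my starting offset */
    my_offset = (MPI_Offset)(scan_sum - nbytes);

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* collective write at an explicit offset; no shared file pointer needed */
    MPI_File_write_at_all(fh, my_offset, buf, (int)nbytes, MPI_BYTE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}
---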
On Wed, Jul 23, 2008 at 02:24:03PM +0200, Gabriele Fatigati wrote:
> > You could always effect your own parallel IO (e.g., use MPI sends and
> > receives to coordinate parallel reads and writes), but why? It's already
> > done in the MPI-IO implementation.
>
> Just a moment: you're saying that I can
On Wed, Jul 23, 2008 at 09:47:56AM -0400, Robert Kubrick wrote:
> HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I
> think the API is easier than direct MPI-I/O, maybe even easier than raw
> read/writes given its support for hierarchical objects and metadata.
In addition to
On Sat, Aug 16, 2008 at 08:05:14AM -0400, Jeff Squyres wrote:
> On Aug 13, 2008, at 7:06 PM, Yvan Fournier wrote:
>
>> I seem to have encountered a bug in MPI-IO, in which
>> MPI_File_get_position_shared hangs when called by multiple processes
>> in
>> a communicator. It can be illustrated by
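The test case itself is not reproduced here, so the following is only a hedged sketch of the call pattern described, with a placeholder file name: every rank writes through the shared file pointer and then asks for its position.
---
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File   fh;
    MPI_Offset pos;
    int        val = 42;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    /* every rank writes through the shared file pointer ... */
    MPI_File_write_shared(fh, &val, 1, MPI_INT, MPI_STATUS_IGNORE);

    /* ... and then asks where that pointer now stands; this is the call
       reported to hang when several processes are in the communicator */
    MPI_File_get_position_shared(fh, &pos);
    printf("shared file pointer at offset %lld\n", (long long)pos);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
---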
On Thu, Oct 23, 2008 at 12:41:45AM -0200, Davi Vercillo C. Garcia (ダヴィ) wrote:
> Hi,
>
> I'm trying to run a code using OpenMPI and I'm getting this error:
>
> ADIOI_GEN_DELETE (line 22): **io No such file or directory
>
> I don't know why this occurs, I only know this happens when I use more
>
On Fri, Oct 31, 2008 at 11:19:39AM -0400, Antonio Molins wrote:
> Hi again,
>
> The problem in a nutshell: it looks like, when I use
> MPI_Type_create_darray with an argument array_of_gsizes where
> array_of_gsizes[0]>array_of_gsizes[1], the datatype returned goes
> through MPI_Type_commit()
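The message breaks off before the actual arguments appear, so the following is only an illustrative MPI_Type_create_darray call with the property mentioned (array_of_gsizes[0] > array_of_gsizes[1]); the global sizes and process grid are made up:
---
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Datatype darray;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int gsizes[2]   = { 200, 100 };               /* gsizes[0] > gsizes[1] */
    int distribs[2] = { MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK };
    int dargs[2]    = { MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG };
    int psizes[2]   = { 0, 0 };

    MPI_Dims_create(size, 2, psizes);             /* build a 2D process grid */
    MPI_Type_create_darray(size, rank, 2, gsizes, distribs, dargs, psizes,
                           MPI_ORDER_C, MPI_DOUBLE, &darray);
    MPI_Type_commit(&darray);

    /* the type would normally be used as an MPI-IO file view here */

    MPI_Type_free(&darray);
    MPI_Finalize();
    return 0;
}
---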
On Wed, Feb 18, 2009 at 02:24:03PM -0700, Ralph Castain wrote:
> Hi Rob
>
> Guess I'll display my own ignorance here:
>
>>> MPI_File_open( MPI_COMM_WORLD, "foo.txt",
>>>MPI_MODE_CREATE | MPI_MODE_WRONLY,
>>>MPI_INFO_NULL, );
>
>
> Since the file was opened with
Hello all.
I'm using openmpi-1.3 in this example, linux, gcc-4.3.2, configured
with nothing special.
If I run the following simple C code under valgrind, single process, I
get some errors about reading and writing already-freed memory:
---
#include
#include
int
Hi
I've got a bit of an odd bug here. I've been playing around with MPI
process management routines and I noticed the following behavior with
openmpi-1.0.1:
Two processes (a and b), linked with ompi, but started independently
(no mpiexec, just started the programs directly).
- a and b: call
Hello
In playing around with process management routines, I found another
issue. This one might very well be operator error, or something
implementation specific.
I've got two processes (a and b), linked with openmpi, but started
independently (no mpiexec).
- a starts up and calls MPI_Init
-
On Tue, Jul 11, 2006 at 12:14:51PM -0400, Abhishek Agarwal wrote:
> Hello,
>
> Is there a way of providing a specific port number in MPI_Info when using an
> MPI_Open_port command so that clients know which port number to connect to?
The other replies have covered this pretty well but if you are
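The rest of the reply is missing. For reference, the portable pattern is sketched below: with MPI_INFO_NULL the implementation chooses the port and encodes it in the returned port name, and any info key for requesting a fixed port is implementation dependent rather than standardized. Server side:
---
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char     port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);

    /* with MPI_INFO_NULL the library picks the port and encodes it in the
       returned name; keys for requesting a fixed port, where supported at
       all, are implementation specific */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("server listening on: %s\n", port_name);

    /* the client passes this exact string to MPI_Comm_connect */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}
---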
On Mon, Aug 14, 2006 at 10:57:34AM -0400, Brock Palen wrote:
> We will be evaluating pvfs2 (www.pvfs.org) in the future. Are there
> any special considerations to take to get romio support with openmpi
> with pvfs2?
Hi
Since I wrote the ad_pvfs2 driver for ROMIO, and spent a lot of time
on
On Mon, Jan 08, 2007 at 02:32:14PM -0700, Tom Lund wrote:
> Rainer,
> Thank you for taking time to reply to my query. Do I understand
> correctly that external32 data representation for i/o is not
> implemented? I am puzzled since the MPI-2 standard clearly indicates
> the existence of
On Tue, Jan 09, 2007 at 02:53:24PM -0700, Tom Lund wrote:
> Rob,
>Thank you for your informative reply. I had no luck finding the
> external32 data representation in any of several mpi implementations and
> thus I do need to devise an alternative strategy. Do you know of a good
>
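The thread is cut off here. For context, the data representation in MPI-IO is selected by the string argument to MPI_File_set_view; "external32" is the portable representation under discussion, and "native" (used in the sketch below, with a made-up file name) is what implementations generally do support:
---
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int      rank;
    double   data[4] = { 1.0, 2.0, 3.0, 4.0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "view.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* the string argument selects the data representation; "external32"
       would go here if the implementation provided it */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, MPI_DOUBLE, "native", MPI_INFO_NULL);

    /* offsets are now in etype (double) units; each rank writes its own block */
    MPI_File_write_at_all(fh, (MPI_Offset)(4 * rank), data, 4, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
---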
On Tue, Jan 30, 2007 at 04:55:09PM -0500, Ivan de Jesus Deras Tabora wrote:
> Then I found all the references to MPI_Type_create_subarray and
> created a little program just to test that part of the code; the code I
> created is:
...
> After running this little program using mpirun, it raises
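The little program itself is not reproduced here, so the following is only a hedged sketch of a typical MPI_Type_create_subarray call, with made-up array sizes:
---
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Datatype subarray;
    int sizes[2]    = { 8, 8 };   /* full 2D array          */
    int subsizes[2] = { 4, 4 };   /* the selected block     */
    int starts[2]   = { 2, 2 };   /* where the block begins */

    MPI_Init(&argc, &argv);
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_INT, &subarray);
    MPI_Type_commit(&subarray);

    /* the type would normally be used as a file view or in send/recv here */

    MPI_Type_free(&subarray);
    MPI_Finalize();
    return 0;
}
---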