Done - thanks!
On Nov 12, 2013, at 7:35 PM, tmish...@jcity.maeda.co.jp wrote:
>
>
> Dear openmpi developers,
>
> I got a segmentation fault in trial use of openmpi-1.7.4a1r29646 built by
> PGI13.10 as shown below:
>
> [mishima@manage testbed-openmpi-1.7.3]$ mpirun -np 4 -cpus-per-proc 2
> -r
Dear openmpi developers,
I got a segmentation fault in trial use of openmpi-1.7.4a1r29646 built by
PGI13.10 as shown below:
[mishima@manage testbed-openmpi-1.7.3]$ mpirun -np 4 -cpus-per-proc 2
-report-bindings mPre
[manage.cluster:23082] MCW rank 2 bound to socket 0[core 4[hwt 0]], socket
0[c
Hi,
I was using MPI.OBJECT as the datatype for custom Java classes, but this is no
longer available. Could you please let me know which datatype should be used
for such cases?
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
On Nov 12, 2013, at 19:47 , Jeff Squyres (jsquyres) wrote:
> On Nov 12, 2013, at 4:42 AM, George Bosilca wrote:
>
>>> 2. In the 64 bit case, you'll have a difficult time extracting the MPI
>>> status values from the 8-byte INTEGERs in the status array in Fortran
>>> (because the first 2 of 3
On Nov 12, 2013, at 4:42 AM, George Bosilca wrote:
>> 2. In the 64 bit case, you'll have a difficult time extracting the MPI
>> status values from the 8-byte INTEGERs in the status array in Fortran
>> (because the first 2 of the 3 will each really be 2 4-byte integers).
>
> My understanding is that in
Ralph,
in
http://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf
it is written:
-
5.1.2.2.1 Program startup
15
The behavior of the arguments to main, and of the interaction of exit, main
and atexit
(see §7.20.4.2) has been codified to curb some unwanted variety in the
repre
After appending an additional NULL the code works now. I admit such use of
argv/argc could be confusing... thanks for pointing that out. And thank you
all for figuring out my problem!
Best,
Yu-Hang
On Tue, Nov 12, 2013 at 12:18 PM, Ralph Castain wrote:
> Kernighan and Ritchie's C programming la
Kernighan and Ritchie's C programming language manual - it goes all the way back
to the original C definition.
On Nov 12, 2013, at 9:15 AM, Alex A. Granovsky wrote:
> Hello,
>
>> It seems that argv[argc] should always be NULL according to the
>> standard. So OMPI failure is not actually a bug!
Hello,
It seems that argv[argc] should always be NULL according to the
standard. So OMPI failure is not actually a bug!
could you please point to the exact document where this is explicitly
stated?
Otherwise, I'd assume this is a bug.
Kind regards,
Alex Granovsky
I don't think that's true in the case of argv as that is a pointer...but
either way, this isn't an OMPI problem.
On Nov 12, 2013, at 9:09 AM, Matthieu Brucher
wrote:
> I understand why he did this, it's only the main argc/argv values that
> are changed, not the actual system values (my mista
I understand why he did this, it's only the main argc/argv values that
are changed, not the actual system values (my mistake as well, I
overlooked his code, not paying attention to the details!).
Still, keeping different names would be best for code reviews and code
understanding.
The fact that th
On Nov 12, 2013, at 8:56 AM, Matthieu Brucher
wrote:
> It seems that argv[argc] should always be NULL according to the
> standard.
That is definitely true.
> So OMPI failure is not actually a bug!
I think that is true as well, though I suppose we could try to catch it
(doubtful - what if it
It seems that argv[argc] should always be NULL according to the
standard. So OMPI failure is not actually a bug!
Cheers,
2013/11/12 Matthieu Brucher :
> Interestingly enough, in ompi_mpi_init, opal_argv_join is called
> without the array length, so I suppose that in the usual argc/argv
> couple,
Interestingly enough, in ompi_mpi_init, opal_argv_join is called
without the array length, so I suppose that in the usual argc/argv
couple, argv carries one additional trailing entry which must be NULL. So try
allocating 3 additional values, the last being NULL, and it may work.
Cheers,
Matthieu
2013/
I tried the following code without CUDA, the error is still there:
#include "mpi.h"
#include
#include
#include
int main(int argc, char **argv)
{
// override command line arguments to make sure cudaengine gets the correct one
char **argv_new = new char*[ argc + 2 ];
for( int i = 0 ;
Hi,
Are you sure this is the correct code? This seems strange and not a good idea:
MPI_Init(&argc,&argv);
// do something...
for( int i = 0 ; i < argc ; i++ ) delete [] argv[i];
delete [] argv;
Did you mean argc_new and argv_new instead?
Do you have the same error without CUDA?
Hi,
I tried to augment the command line argument list by allocating my own list
of strings and passing them to MPI_Init, yet I got a segmentation fault for
both OpenMPI 1.6.3 and 1.7.2, while the code works fine with MPICH2. The
code is:
#include "mpi.h"
#include "cuda_runtime.h"
#include
#inclu
On Nov 12, 2013, at 00:38 , Jeff Squyres (jsquyres) wrote:
> 2. In the 64 bit case, you'll have a difficult time extracting the MPI status
> values from the 8-byte INTEGERs in the status array in Fortran (because the
> first 2 of the 3 will each really be 2 4-byte integers).
My understanding is that in