Props for that testio script. I think you win the award for "easiest-to-reproduce
test case." :-)
I noticed that some of the lines went over 72 columns, so I renamed the file to
x.f90, changed all the comments from "c" to "!", and joined the two &-split
lines. The error about implicit type
On May 13, 2011, at 8:31 AM, francoise.r...@obs.ujf-grenoble.fr wrote:
> Here is the MUMPS portion of code (in zmumps_part1.F file) where the slaves
> call MPI_COMM_DUP , id%PAR and MASTER are initialized to 0 before :
>
> CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR )
I re-indented so that I
Sorry for the late reply.
Other users have seen something similar but we have never been able to
reproduce it. Is this only when using IB? If you use "mpirun --mca
btl_openib_cpc_if_include rdmacm", does the problem go away?
On May 11, 2011, at 6:00 PM, Marcus R. Epperson wrote:
> I've
On May 19, 2011, at 10:54 AM, Zhangping Wei wrote:
> 4, I use the command window to run it this way: 'mpirun -n 4 **.exe', then I
> met the error: 'entry point not found: the procedure entry point inet_pton
> could not be located in the dynamic link library WS2_32.dll'
Unfortunately, our Windows guy (Shiqing) is off getting married and will be out
for a little while. :-(
All that I can cite is the README.WINDOWS.txt file in the top-level directory.
I'm afraid that I don't know much else about Windows. :-(
On May 18, 2011, at 8:17 PM, Jason Mackay wrote:
David,
I do not see any mechanism for protecting the accesses to the requests or
confining them to a single thread. What is the thread model you're using?
From an implementation perspective, your code is correct only if you
initialize the MPI library with MPI_THREAD_MULTIPLE and if the library
accepts that level.
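For reference, a minimal sketch (not from the original code) of requesting
MPI_THREAD_MULTIPLE and checking what the library actually granted, before
letting several pthreads touch MPI requests, might look like this in C:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for full multi-threaded support and verify it was granted. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available (provided=%d)\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... create pthreads that may call MPI concurrently ... */

        MPI_Finalize();
        return 0;
    }

If 'provided' comes back lower than MPI_THREAD_MULTIPLE, concurrent access to
requests from several threads is not safe without external locking.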
Dear Paul,
I checked the 'mpirun -np N' approach you mentioned, but it gave the same
problem.
I guess it may be related to the system I'm using, because I have used it
successfully on another XP 32-bit system.
I look forward to more advice. Thanks.
Zhangping
From:
Hi,
On May 19, 2011, at 9:37 AM, Robert Horton wrote:
> On Thu, 2011-05-19 at 08:27 -0600, Samuel K. Gutierrez wrote:
>> Hi,
>>
>> Try the following QP parameters that only use shared receive queues.
>>
>> -mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32
>>
>
> Thanks for
On Thu, 2011-05-19 at 08:27 -0600, Samuel K. Gutierrez wrote:
> Hi,
>
> Try the following QP parameters that only use shared receive queues.
>
> -mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32
>
Thanks for that. If I run the job over 2 x 48 cores it now works and the
Dear all,
I tried to configure Open MPI on a Win XP SP2 64-bit system, but I got an error
'entry point not found' when I ran the executable file. I really hope you can
give me some help. I list what I did when I ran my program in the following
parts:
1, I downloaded the
Hi,
Try the following QP parameters that only use shared receive queues.
-mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32
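In case it helps, that parameter goes straight on the mpirun command line, for
example (the process count and executable name here are only placeholders, not
taken from this thread):

mpirun -np 96 -mca btl_openib_receive_queues S,12288,128,64,32:S,65536,128,64,32 ./hpcc

Each colon-separated S entry describes one shared receive queue, with the
buffer size as its first field.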
Samuel K. Gutierrez
Los Alamos National Laboratory
On May 19, 2011, at 5:28 AM, Robert Horton wrote:
> Hi,
>
> I'm having problems getting the
Hi Jason,
I'm afraid I won't be of much help, but have you run your tests with UAC
completely disabled or not?
From my experience, access to network shares and network drives is very
problematic with UAC enabled, and simply disabling it has solved a few
problems in the past.
Running
Hi,
I'm having problems getting the MPIRandomAccess part of the HPCC
benchmark to run with more than 32 processes on each node (each node has
4 x AMD 6172 so 48 cores total). Once I go past 32 processes I get an
error like:
Hello,
I am working on a hybrid MPI (OpenMPI 1.4.3) and Pthread code. I am
using MPI_Isend and MPI_Irecv for communication and MPI_Test/MPI_Wait to
check if it is done. I do this repeatedly in the outer loop of my code.
The MPI_Test is used in the inner loop to check if some function can be
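A rough sketch of that pattern in C (the buffer names, message sizes, and the
do_local_work() helper are made up for illustration, not taken from the actual
code):

    #include <mpi.h>

    static void do_local_work(void)
    {
        /* placeholder for the computation overlapped with communication */
    }

    void exchange(double *sendbuf, double *recvbuf, int n, int peer,
                  MPI_Comm comm)
    {
        MPI_Request reqs[2];
        int done = 0;

        /* Outer-loop step: post the nonblocking communication once. */
        MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0, comm, &reqs[0]);
        MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0, comm, &reqs[1]);

        /* Inner loop: poll with MPI_Testall and overlap local work. */
        while (!done) {
            MPI_Testall(2, reqs, &done, MPI_STATUSES_IGNORE);
            if (!done)
                do_local_work();
        }
        /* Blocking equivalent: MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE); */
    }

Whether this is safe with multiple threads depends on whether two threads can
ever touch the same MPI_Request concurrently and on the thread level the
library was initialized with (see the MPI_THREAD_MULTIPLE discussion earlier
in this digest).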
Hi Ralph,
I tried the following...
1) C:\test> mpirun -mca orte_headnode_name <hostname>
   where <hostname> is the name returned by the 'hostname' command.
2) C:\test> mpirun -mca ras ^ccp
but I'm still observing the same errors...
BTW: for further information on ompi_info you can see the thread