Re: [OMPI users] Error with MPI_GET_ADDRESS and MPI_TYPE_CREATE_RESIZED?

2020-05-17 Thread Diego Avesani via users
Dear Gilles, dear All, as far as I remember, no. The compiler is the same, as are the options I use. Maybe the error is somewhere else in my code. However, the results look like errors in the allocation of the sent and received vectors of the datatype. The important thing is that at least my data type definitions are correct
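A quick way to test that hypothesis (a sketch; 'mytype' and 'particles' are placeholder names, not from the thread): compare the committed type's extent against the true spacing of the array elements.

    INTEGER :: ierr
    INTEGER(KIND=MPI_ADDRESS_KIND) :: lb, extent, a1, a2
    ! The extent MPI uses to step through an array of this type...
    CALL MPI_TYPE_GET_EXTENT(mytype, lb, extent, ierr)
    ! ...must equal the real spacing between consecutive elements.
    CALL MPI_GET_ADDRESS(particles(1), a1, ierr)
    CALL MPI_GET_ADDRESS(particles(2), a2, ierr)
    IF (extent /= a2 - a1) PRINT *, 'extent mismatch:', extent, a2 - a1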

[OMPI users] Error with MPI_GET_ADDRESS and MPI_TYPE_CREATE_RESIZED?

2020-05-17 Thread Diego Avesani via users
Dear all, I would like to share what I have done in order to create my own MPI data type. The strange thing is that it worked until a few days ago and then it stopped working. This is probably because I have changed my data type and I am missing some knowledge about MPI data types. This is my data type:

Re: [OMPI users] MPI advantages over PBS

2018-09-01 Thread Diego Avesani
SLURM...) which can launch many parallel MPI applications at the same time, depending on the results of previous runs. Look at: Dakota https://dakota.sandia.gov/ (open source) and Modefrontier https://www.esteco.com/modefrontier (commercial). Patrick. Diego

Re: [OMPI users] MPI_MAXLOC problems

2018-09-01 Thread Diego Avesani
2DOUBLE_PRECISION, then your count is actually 1. On Aug 22, 2018, at 8:02 AM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote: Diego, try calling allreduce with count=1. Cheers, Gilles

Re: [OMPI users] MPI advantages over PBS

2018-08-28 Thread Diego Avesani
write your first MPI program, then use mpirun from the command line. If you have a cluster which has the PBS batch system, you can then use PBS to run your MPI program. If that is not clear, please let us know what help you need.

[OMPI users] MPI advantages over PBS

2018-08-24 Thread Diego Avesani
Dear all, I have a philosophical question. I am reading a lot of papers where people use the Portable Batch System or a job scheduler in order to parallelize their code. What are the advantages of using MPI instead? I am writing a report on my code, where of course I use Open MPI. So please tell me how

[OMPI users] MPI_MAXLOC problems

2018-08-22 Thread Diego Avesani
Dear all, I am going to restart the discussion about MPI_MAXLOC. We had one a couple of weeks ago with George, Ray, Nathan, Jeff S., Jeff S., Gus. This is because I have a problem. I have two groups and two communicators. The first one takes care of computing the maximum value and to which process

Re: [OMPI users] MPI group and stuck in communication

2018-08-21 Thread Diego Avesani
to add an extra broadcast in local_comm. Cheers, Gilles. On 8/20/2018 3:56 PM, Diego Avesani wrote: Dear George, dear Gilles, dear Jeff, dear all, thanks for all the suggestions. The problem is that I do not want to fi
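A sketch of that suggestion (communicator names follow the thread; the leader is assumed to be rank 0 of each local communicator): only the leaders combine the value on the master communicator, then each leader rebroadcasts it inside its own group.

    DOUBLE PRECISION :: val
    INTEGER :: ierr
    ! Leaders combine the value across groups...
    IF (MPI_MASTER_COMM /= MPI_COMM_NULL) THEN
       CALL MPI_ALLREDUCE(MPI_IN_PLACE, val, 1, MPI_DOUBLE_PRECISION, &
                          MPI_MAX, MPI_MASTER_COMM, ierr)
    END IF
    ! ...then every process hears the result from its group's leader.
    CALL MPI_BCAST(val, 1, MPI_DOUBLE_PRECISION, 0, MPI_LOCAL_COMM, ierr)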

Re: [OMPI users] MPI group and stuck in communication

2018-08-20 Thread Diego Avesani
, MPI_MASTER_COMM, iErr) ! IF(counter.GT.1)THEN EXIT ENDIF ENDDO. My original code gets stuck in the cycle and I do not know why. Thanks, Diego. On 13 August 2018 at 23:44, George Reeke wrote: On Aug 12, 2018, at 2:18 PM, Diego Avesani

Re: [OMPI users] MPI group and stuck in communication

2018-08-13 Thread Diego Avesani
Dear Jeff, dear all, it's my fault. Can I send an attachment? Thanks, Diego. On 13 August 2018 at 19:06, Jeff Squyres (jsquyres) wrote: On Aug 12, 2018, at 2:18 PM, Diego Avesani wrote: Dear all, dear Jeff, I have three communicators:

Re: [OMPI users] know which CPU has the maximum value

2018-08-12 Thread Diego Avesani
know, Nathan hasn't advanced a proposal to kill them in MPI-4, meaning that they'll likely continue to be in MPI for at least another 10 years. :-) (And even if they did get killed in MPI-4, implementations like Open

Re: [OMPI users] MPI group and stuck in communication

2018-08-12 Thread Diego Avesani
8, at 6:27 PM, Diego Avesani wrote: The question is: is it possible to have a barrier for all CPUs even though they belong to different groups? If the answer is yes, I will go into more detail. By "CPUs", I assume you mean "MPI processes"

Re: [OMPI users] MPI group and stuck in communication

2018-08-10 Thread Diego Avesani
hard to tell what is happening in the code snippet below, because there are a lot of variables used that are not defined in your snippet, so we have no way of knowing what is going on just from these few lines of code. On Aug 10, 2018, at 11:52 AM, Diego Avesani

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Diego Avesani
them in MPI-4 I would. On Aug 10, 2018, at 9:47 AM, Diego Avesani wrote: Dear all, I have just implemented MAXLOC; why should it go away? It seems to work pretty well. Thanks, Diego. On 10 August 2018 at 17:39, Nathan Hjelm via

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Diego Avesani
Dear all, I did it, but I am still worried about Nathan's concern. What do you think? Thanks again, Diego. On 10 August 2018 at 17:41, Reuti wrote: On 10.08.2018 at 17:24, Diego Avesani wrote: Dear all, I have probably understood. The trick

[OMPI users] MPI group and stuck in communication

2018-08-10 Thread Diego Avesani
Dear all, I have an MPI program with three groups that have some CPUs in common. I have a problem with MPI_Barrier. I will try to make myself clear. I have three communicators: INTEGER :: MPI_GROUP_WORLD; INTEGER :: MPI_LOCAL_COMM; INTEGER :: MPI_MASTER_COMM. When I apply: IF(MPIworld%rank.EQ.0) W
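For reference, a minimal way to build such a layout with MPI_COMM_SPLIT (a sketch; the group size nGrpSize and the variable names are assumptions, not from the thread):

    INTEGER :: wrank, lrank, color, mcolor, iErr
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, wrank, iErr)
    ! One local communicator per group of nGrpSize consecutive ranks:
    color = wrank / nGrpSize
    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, wrank, MPI_LOCAL_COMM, iErr)
    CALL MPI_COMM_RANK(MPI_LOCAL_COMM, lrank, iErr)
    ! Master communicator: only local rank 0 of each group joins it.
    mcolor = MPI_UNDEFINED
    IF (lrank == 0) mcolor = 0
    CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, mcolor, wrank, MPI_MASTER_COMM, iErr)
    ! On non-masters MPI_MASTER_COMM is MPI_COMM_NULL: guard every call on it,
    ! or a collective such as MPI_BARRIER will fail or hang.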

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Diego Avesani
use it on the MPI Forum [2]. George. [1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php [2] https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html On Fri, Aug 10, 2018 at 11:25 AM Diego Avesani wrote: Dear all, I

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Diego Avesani
Dear all, I have probably understood. The trick is to use a real vector and to store the rank as well. Have I understood correctly? Thanks, Diego. On 10 August 2018 at 17:19, Diego Avesani wrote: Dear all, I do not understand how MPI_MINLOC works. It seems to locate the maximum in a

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Diego Avesani
e I get back to the group and ask whoever owns it to kindly reply back with its rank. Ray. On 8/10/2018 10:49 AM, Reuti wrote: Hi, on 10.08.2018 at 16:39, Diego Avesani wrote: Dear all,

[OMPI users] know which CPU has the maximum value

2018-08-10 Thread Diego Avesani
Dear all, I have a problem: in my parallel program each CPU computes a value, let's say eff. First of all, I would like to know the maximum value. This is quite simple for me; I apply the following: CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX, MPI_MASTER_COMM, MPIworld%iErr)
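To also learn which rank owns the maximum, the usual idiom is MPI_MAXLOC on a (value, rank) pair; a sketch built on the call above (the pair variables and 'myrank', this process's rank in MPI_MASTER_COMM, are assumed names):

    REAL(8) :: inpair(2), outpair(2)
    INTEGER :: ierr
    inpair(1) = eff                  ! the value to compare
    inpair(2) = REAL(myrank, 8)      ! the owner's rank, stored as a double
    ! One MPI_2DOUBLE_PRECISION element is the whole pair, hence count = 1:
    CALL MPI_ALLREDUCE(inpair, outpair, 1, MPI_2DOUBLE_PRECISION, &
                       MPI_MAXLOC, MPI_MASTER_COMM, ierr)
    ! outpair(1) is the global maximum; INT(outpair(2)) is the rank that owns it.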

Re: [OMPI users] local communicator and crash of the code

2018-08-03 Thread Diego Avesani
Dear all, I have probably found the error. Let me check; probably I have not properly set up the colors. Thanks a lot; I hope you have not lost too much time on me. I will let you know if that was the problem. Thanks again, Diego. On 3 August 2018 at 19:57, Diego Avesani wrote:

Re: [OMPI users] local communicator and crash of the code

2018-08-03 Thread Diego Avesani
, MPIlocal%nCPU, MPIlocal%iErr). Open MPI seems unable to create MPIlocal%rank properly. What could it be? A bug? Thanks again, Diego. On 3 August 2018 at 19:47, Ralph H Castain wrote: Those two command lines look exactly the same to me; what am I missing? On Aug 3, 2018, at 10:

[OMPI users] local communicator and crash of the code

2018-08-03 Thread Diego Avesani
Dear all, I am experiencing a strange error. In my code I use three communicators: MPI_COMM_WORLD, MPI_MASTERS_COMM, and LOCAL_COMM, which have some CPUs in common. When I run my code as mpirun -np 4 --oversubscribe ./MPIHyperStrem I have no problem, while when I run it as mpirun -np 4 --ov

Re: [OMPI users] openMPI and ifort debuggin flags, is it possible?

2018-08-03 Thread Diego Avesani
On Friday, July 27, 2018, Diego Avesani wrote: Dear all, I am developing a code for hydrological applications. It is written in Fortran and I am using ifort combined with Open MPI. At the moment, I am debugging my code due to the fact

[OMPI users] openMPI and ifort debuggin flags, is it possible?

2018-07-27 Thread Diego Avesani
Dear all, I am developing a code for hydrological applications. It is written in Fortran and I am using ifort combined with Open MPI. At the moment, I am debugging my code because I have some NaN errors. As a consequence, I have introduced in my Makefile some flags for the ifort compiler
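For reference, flags often suggested for this kind of NaN hunt (a hypothetical Makefile fragment, not quoted from the thread; -init=snan requires a reasonably recent ifort):

    # Trap floating-point exceptions and poison uninitialized REALs
    # with signaling NaNs so use-before-set aborts immediately.
    FCFLAGS = -g -O0 -traceback -check all -fpe0 -init=snan,arrays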

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread Diego Avesani
good. On Aug 2, 2017, at 7:17 AM, Diego Avesani wrote: Dear all, dear Jeff, I am very sorry, but I do not know how to do this kind of comparison. This is my piece of code: CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr) CALL

Re: [OMPI users] Groups and Communicators

2017-08-02 Thread Diego Avesani
, MASTER_COMM, iErr) ENDIF. What should I compare? Thanks again, Diego. On 1 August 2017 at 16:18, Jeff Squyres (jsquyres) wrote: On Aug 1, 2017, at 5:56 AM, Diego Avesani wrote: If I do this: CALL MPI_SCATTER(PP, npart, MPI_DOUBLE, PPL, 10, MPI_DOUBLE
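The usual catch in a call like that (a sketch of the corrected form, using the names from the snippet): sendcount and recvcount are both per-rank and must describe the same amount of data, so npart on one side and 10 on the other only work if they agree. Also, the portable Fortran datatype name is MPI_DOUBLE_PRECISION; MPI_DOUBLE is the C name, even if some implementations accept it.

    ! Root sends npart values to EACH rank; every rank receives npart values.
    CALL MPI_SCATTER(PP,  npart, MPI_DOUBLE_PRECISION, &
                     PPL, npart, MPI_DOUBLE_PRECISION, 0, MASTER_COMM, iErr)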

Re: [OMPI users] Groups and Communicators

2017-08-01 Thread Diego Avesani
, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr) ENDIF. Is there any smarter way to do this? Thanks again, Diego. On 28 July 2017 at 20:07, Diego Avesani wrote: Dear George, dear all, I have just rewritten the code to make it clearer: INTEGER :: col

Re: [OMPI users] Groups and Communicators

2017-07-28 Thread Diego Avesani
roups even though I set "colorglobal = MPI_COMM_NULL". What do you think? Is there something that I haven't understood properly? Thanks again; I am trying to learn MPI_Comm_create_group better. Thanks, Diego. On 28 July 2017 at 16:59, Diego Avesani wrote: Dear George, dear

Re: [OMPI users] Groups and Communicators

2017-07-28 Thread Diego Avesani
different from MPI_SPLIT_COMM. Again, really, really thanks, Diego. On 28 July 2017 at 16:02, George Bosilca wrote: I guess the second comm_rank call is invalid on all non-leader processes, as their LEADER_COMM communicator is MPI_COMM_NULL. George. On Fri, Jul 28,

Re: [OMPI users] Groups and Communicators

2017-07-28 Thread Diego Avesani
the first MPI_COMM_SPLIT by the same approach. I would be curious to see the outcome. George. On Thu, Jul 27, 2017 at 9:44 AM, Diego Avesani wrote: Dear George, dear all, I have tried to create a simple example. In particular, I wou

Re: [OMPI users] Groups and Communicators

2017-07-27 Thread Diego Avesani
design. Now, this example does not work, but probably there is some coding error. Really, really thanks, Diego. On 27 July 2017 at 10:42, Diego Avesani wrote: Dear George, dear all, a question regarding program design: the draft that I have sent to you has to be

Re: [OMPI users] Groups and Communicators

2017-07-27 Thread Diego Avesani
i_am_leader(small_comm) ? 1 : MPI_UNDEFINED, rank_in_comm_world, &leader_comm); The leader_comm will be a valid communicator on all leader processes, and MPI_COMM_NULL on all others. George. On Wed, Jul 26, 2

Re: [OMPI users] Groups and Communicators

2017-07-26 Thread Diego Avesani
Stack Overflow on this at https://stackoverflow.com/questions/24806782/mpi-merge-multiple-intercoms-into-a-single-intracomm. How is your MPI environment started (a single mpirun, or MPI_Comm_spawn)? George. On Tue, Jul 25, 2017 at 10:44 AM, Diego Avesani

[OMPI users] Groups and Communicators

2017-07-25 Thread Diego Avesani
Dear All, I am studying groups and communicators, but before going into detail, I have a question about groups. I would like to know whether it is possible to create a group of the masters of the other groups and then have intra-communication in the new group. I have spent some time reading different tutorials
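It is possible; one sketch of the idea using groups (MPI-3's MPI_COMM_CREATE_GROUP; 'leaders' and 'nGroups' are assumed names holding the world ranks of the masters):

    INTEGER :: world_group, master_group, MASTER_COMM, ierr
    INTEGER :: leaders(nGroups)   ! world ranks of each group's master, assumed known
    CALL MPI_COMM_GROUP(MPI_COMM_WORLD, world_group, ierr)
    CALL MPI_GROUP_INCL(world_group, nGroups, leaders, master_group, ierr)
    ! Collective only over the ranks listed in master_group:
    CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, master_group, 0, MASTER_COMM, ierr)
    ! MASTER_COMM now supports ordinary intra-communication among the masters.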

[OMPI users] open program to profiling my openMPI code

2017-06-22 Thread Diego Avesani
Dear all, I have written a program with gfortran/Fortran and Open MPI. I would like to profile it. Can someone suggest an open tool to profile it? I have done some Internet research but I do not have enough information to choose the best one. Thanks in advance to all of you. Diego

[OMPI users] Cast MPI inside another MPI?

2016-11-25 Thread Diego Avesani
Dear all, I have the following question: is it possible to cast an MPI inside another MPI? I would like to have two levels of parallelization, but I would like to avoid the MPI-OpenMP paradigm. Another question: I normally use Open MPI, but I would like to read something to understand and learn all i

Re: [OMPI users] difference between OpenMPI - intel MPI -- how to understand where\why

2016-02-17 Thread Diego Avesani
results? Are both results correct? Do you have ways of assessing the correctness of your results? On February 16, 2016 at 5:19:16 AM, Diego Avesani (diego.aves...@gmail.com) wrote: Dear all, I have written a Fortran MPI code. Usually,

[OMPI users] difference between OpenMPI - intel MPI -- how to understand where\why

2016-02-16 Thread Diego Avesani
Dear all, I have written a Fortran MPI code. Usually, I compile it with Intel MPI or with Open MPI according to the cluster where it runs. Unfortunately, I get a completely different result and I do not know why. Where could I look? Do you know why? Thanks, Diego

Re: [OMPI users] difference between OpenMPI - intel MPI mpi_waitall

2016-02-01 Thread Diego Avesani
Dear all, dear Jeff S., dear Jeff H., I had to set nMSG equal to 2. Now the program works. Thanks for your time and help. Diego. On 30 January 2016 at 00:11, Jeff Hammond wrote: On Fri, Jan 29, 2016 at 2:45 AM, Diego Avesani wrote: Dear all,

Re: [OMPI users] difference between OpenMPI - intel MPI mpi_waitall

2016-01-29 Thread Diego Avesani
far as I am concerned, this is extremely unlikely. Cheers, Gilles. On Friday, January 29, 2016, Diego Avesani wrote: Dear all, dear Jeff, dear Gilles, I am sorry; probably I am being stubborn. In all my code I have

Re: [OMPI users] difference between OpenMPI - intel MPI mpi_waitall

2016-01-29 Thread Diego Avesani
with a first argument of 3: MPI_Waitall(271): MPI_Waitall(count=3, req_array=0x7445f0, status_array=0x744600) failed. We can't really help you with problems with Intel MPI; sorry. You'll need to contact their tech support for assistance.

Re: [OMPI users] difference between OpenMPI - intel MPI mpi_waitall

2016-01-29 Thread Diego Avesani
Diego, your code snippet does MPI_Waitall(2,...) but the error is about MPI_Waitall(3,...). Cheers, Gilles. On Friday, January 29, 2016, Diego Avesani wrote: Dear all, I have created a program in Fortran and Open

[OMPI users] difference between OpenMPI - intel MPI mpi_waitall

2016-01-29 Thread Diego Avesani
Dear all, I have created a program in Fortran and Open MPI; I tested it on my laptop and it works. I would like to use it on a cluster that, unfortunately, has Intel MPI. The program crashes on the cluster and I get the following error: Fatal error in MPI_Waitall: Invalid MPI_Request, error stack:

Re: [OMPI users] single CPU vs four CPU result differences, is it normal?

2015-10-28 Thread Diego Avesani
approximately the same number of iterations. Damien. On 2015-10-28 3:51 PM, Diego Avesani wrote: Dear Andreas, dear all, the code is quite long. It is a conjugate gradient algorithm to solve a complex system. I have noticed that when a do cycle is small

Re: [OMPI users] single CPU vs four CPU result differences, is it normal?

2015-10-28 Thread Diego Avesani
nt and the difference increases with the number of iterations. What do you think? Diego. On 28 October 2015 at 22:32, Andreas Schäfer wrote: On 22:03 Wed 28 Oct, Diego Avesani wrote: When I use a single CPU I get one result; when I use 4 CPUs I get another one. I do not

[OMPI users] single CPU vs four CPU result differences, is it normal?

2015-10-28 Thread Diego Avesani
Dear all, I have a problem with my code. When I use a single CPU I get one result; when I use 4 CPUs I get another one. I do not think that there is a bug. Do you think that these small differences are normal? Is there any way to get the same results? Is it some alignment problem? Really, really thanks, Diego
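One common, benign cause (a sketch, not a diagnosis of this particular code): floating-point addition is not associative, and changing the number of ranks changes the order in which a parallel reduction sums its terms.

    PROGRAM assoc
      IMPLICIT NONE
      REAL(8) :: a, b, c
      a = 1.0D20; b = -1.0D20; c = 1.0D0
      PRINT *, (a + b) + c   ! prints 1.0: the large terms cancel first
      PRINT *, a + (b + c)   ! prints 0.0: c is absorbed by b before cancelling
    END PROGRAM assoc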

Re: [OMPI users] MPI_GATHERV error

2015-10-16 Thread Diego Avesani
you just pass a pointer to this field to MPI and declare that it contains size(A)=12 entries. All displacements are relative to the first entry of that field, so a displacement of 0 points to A(-1), a displacement of 1 to A(0), and so on. Best, Georg

Re: [OMPI users] MPI_GATHERV error

2015-10-14 Thread Diego Avesani
ements by one. Best, Georg. On 14.10.2015 at 15:51, Diego Avesani wrote: Dear all, I have a problem with MPI_GATHERV. In my code I generate a complex number: DO ij=iNS,iNE X11(ij) = cmplx(1.,0.) ENDDO

[OMPI users] MPI_GATHERV error

2015-10-14 Thread Diego Avesani
Dear all, I have a problem with MPI_GATHERV. In my code I generate a complex number: DO ij=iNS,iNE X11(ij) = cmplx(1.,0.) ENDDO where iNS and iNE change according to the CPU rank; in my case: cpu 0: 1 to 10050, cpu 1: 10051 to 20100, cpu 2: 20101 to 301
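For a layout like that, a sketch of a matching MPI_GATHERV call (X11full, counts, displs, and nCPU are assumed names; counts and displs are INTEGER arrays indexed 0:nCPU-1, meaningful on the root):

    nloc = iNE - iNS + 1
    ! Collect every rank's chunk size on the root...
    CALL MPI_GATHER(nloc, 1, MPI_INTEGER, counts, 1, MPI_INTEGER, &
                    0, MPI_COMM_WORLD, iErr)
    ! ...and turn the sizes into displacements relative to X11full(1).
    displs(0) = 0
    DO i = 1, nCPU-1
       displs(i) = displs(i-1) + counts(i-1)
    END DO
    CALL MPI_GATHERV(X11(iNS), nloc, MPI_COMPLEX, &
                     X11full, counts, displs, MPI_COMPLEX, &
                     0, MPI_COMM_WORLD, iErr)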

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Dear Jeff, dear Gilles, dear all, now it is all clearer. I use CALL MPI_ISEND and CALL MPI_IRECV. Each CPU sends once and receives once; this implies that I have REQUEST(2) for WAITALL. However, sometimes some CPU does not send or receive anything, so I have to set REQUEST = MPI_REQUEST_NULL in order
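A sketch of that pattern (buffer and partner names assumed): pre-fill the request array with MPI_REQUEST_NULL so ranks that post nothing can still call MPI_WAITALL; null requests are simply ignored.

    INTEGER :: REQUEST(2)
    REQUEST = MPI_REQUEST_NULL
    IF (have_partner) THEN
       CALL MPI_ISEND(sbuf, n, MPI_DOUBLE_PRECISION, partner, tag, &
                      MPI_COMM_WORLD, REQUEST(1), iErr)
       CALL MPI_IRECV(rbuf, n, MPI_DOUBLE_PRECISION, partner, tag, &
                      MPI_COMM_WORLD, REQUEST(2), iErr)
    END IF
    ! Completes immediately for entries left at MPI_REQUEST_NULL.
    CALL MPI_WAITALL(2, REQUEST, MPI_STATUSES_IGNORE, iErr)

(Initializing the requests to 0, as tried later in this thread, likely works only because some implementations happen to use 0 as the null handle; MPI_REQUEST_NULL is the portable value.)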

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
quests: isend with the first element and irecv with the second element, and then waitall on the array of size 2. Note this is not equivalent to doing two MPI_Wait calls in a row, since that would be prone to deadlock. Cheers, Gilles. On Wednesday, September 30, 2

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
I don't think that this pattern was obvious from the code snippet you sent, which is why I asked for a small, self-contained reproducer. :-) I don't know offhand how send_request(:) will be passed to C. On Sep 30, 2015, at 10:16 AM, Diego Avesani wrote:

[OMPI users] understanding mpi_gather-mpi_gatherv

2015-09-30 Thread Diego Avesani
Dear all, I am not sure whether I have understood mpi_gather and mpi_gatherv correctly. This is my problem: I have a complex vector, let's say X1, where X1 is (1:625). Each CPU works only with some elements of X1, let's say: CPU 0 --> X1(iEnd-iStart), 150 elements; CPU 1 --> X1(iEnd-iStart), 150 element

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
entries with real requests. It seems much simpler / faster to just pass M to MPI_WAITANY (and friends), not N. On Sep 30, 2015, at 3:43 AM, Diego Avesani wrote: Dear Gilles, dear all, what do you mean that the

Re: [OMPI users] send_request error with allocate

2015-09-30 Thread Diego Avesani
Cheers, Gilles. On Tuesday, September 29, 2015, Diego Avesani wrote: Dear Jeff, dear all, I have noticed that if I initialize the variables, I do not have the error anymore: ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg))

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Diego Avesani
CALL MPI_WAITALL(nMsg,send_request,send_status_list,MPIdata%iErr) CALL MPI_WAITALL(nMsg,recv_request,recv_status_list,MPIdata%iErr) Diego. On 29 September 2015 at 00:15, Jeff Squyres (jsquyr

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Diego Avesani
Dear Jeff, dear all, I have noticed that if I initialize the variables, I do not have the error anymore: ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg)); SEND_REQUEST=0; RECV_REQUEST=0. Could you please explain why? Thanks, Diego. On 29 September 2015 at 16:08, Diego Avesani wrote

Re: [OMPI users] send_request error with allocate

2015-09-29 Thread Diego Avesani
,recv_status_list,MPIdata%iErr). Diego. On 29 September 2015 at 00:15, Jeff Squyres (jsquyres) wrote: Can you send a small reproducer program? On Sep 28, 2015, at 4:45 PM, Diego Avesani wrote: Dear all, I have to use a send_request

[OMPI users] send_request error with allocate

2015-09-28 Thread Diego Avesani
Dear all, I have to use a send_request in an MPI_WAITALL. Here is the strange thing: if at the beginning of the SUBROUTINE I use INTEGER :: send_request(3), recv_request(3) I have no problem, but if I use USE COMONVARS, ONLY : nMsg with nMsg=3 and after that I declare INTEGER :: send_request(nMsg

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-04 Thread Diego Avesani
res (jsquyres) wrote: On Sep 3, 2015, at 10:43 AM, Diego Avesani wrote: Dear Jeff, dear all, I normally use "USE MPI". This is the answer from the Intel HPC forum: If you are sw

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-03 Thread Diego Avesani
haven't shown us anything about what goes wrong; you just give us the error statement and assume it is because of ill-defined type creation. It might as well be because you call allreduce erroneously. Please give us more information... On 2015-09-03 at 14:59 GMT+00:00, Diego Avesani wrote:

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-03 Thread Diego Avesani
it is recommended that you recompile everything. "use mpi" is a module; you cannot mix these between compilers/environments. Sadly, the Fortran specification does not enforce a strict module format, which is why this is necessary. On 2015-09-03 at 14:43 GMT+00:00, Diego

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-03 Thread Diego Avesani
instead of "include 'mpif.h'", and see if that turns up any errors. On Sep 2, 2015, at 12:13 PM, Diego Avesani wrote: Dear Gilles, dear all, I have found the error. Some CPU has no element to share. It was my error

Re: [OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-02 Thread Diego Avesani
I recommend you do this first, so you can catch the error as soon as it happens, and hopefully understand why it occurs. Cheers, Gilles. On Wednesday, September 2, 2015, Diego Avesani wrote: Dear all, I have noticed small

[OMPI users] difference between OPENMPI and Intel MPI (DATATYPE)

2015-09-02 Thread Diego Avesani
Dear all, I have noticed small differences between Open MPI and Intel MPI. For example, in MPI_ALLREDUCE Intel MPI does not allow using the same variable as both the send and the receive buffer. I have written my code with Open MPI, but unfortunately I have to run it on an Intel MPI cluster. Now I have the following
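Intel MPI is the strict one here: the MPI standard forbids aliasing the send and receive buffers. The portable idiom is MPI_IN_PLACE; a sketch (buffer name assumed):

    ! Every rank passes MPI_IN_PLACE as sendbuf; 'buf' is both input and output.
    CALL MPI_ALLREDUCE(MPI_IN_PLACE, buf, n, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, iErr)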

[OMPI users] vector type

2015-01-31 Thread Diego Avesani
Dear all, here is how I create a 2D vector type to send 3D array elements (in the attachment). The arrays are: real*4 AA(4,5,3), BB(4,5,3). In my idea both AA and BB have three elements (the last dimension) and each element has (4x5) features. 1) What do you think? 2) Why can I not send AA(1,1,2:3) as
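On question 2, a sketch (type name assumed): each AA(:,:,k) slab is 20 contiguous REALs, so a simple derived type describes one "element", and the buffer argument should be the first element rather than an array section; a section like AA(1,1,2:3) may be copied by the compiler into a temporary whose address is useless to MPI.

    ! One "element" = one contiguous 4x5 slab of AA.
    CALL MPI_TYPE_CONTIGUOUS(20, MPI_REAL, slabtype, ierr)
    CALL MPI_TYPE_COMMIT(slabtype, ierr)
    ! Send the last two slabs, starting at AA(1,1,2):
    CALL MPI_SEND(AA(1,1,2), 2, slabtype, 1, 300, MPI_COMM_WORLD, ierr)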

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-30 Thread Diego Avesani
call MPI_SEND(AA(1,1,2:3), 2, rowtype, 1, 300, MPI_COMM_WORLD, ierr). Thanks a lot, Diego. On 18 January 2015 at 13:02, Diego Avesani wrote: Dear all, dear Gus, dear George, I have almost got it. The program is in the attachment. All the data arrived; however, I got a segmentation

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-18 Thread Diego Avesani
for the other post in the other thread.) What do you think? Thanks again, Diego. On 16 January 2015 at 20:04, Diego Avesani wrote: Dear all, here is the 3D example, but unfortunately it does not work. I believe that there is some problem with the stride. What do you th

Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
On Jan 15, 2015, at 19:31, Gus Correa wrote: I never used MPI_Type_create_subarray, only MPI_Type_Vector. What I like about MPI_Type_Vector is that you can define a stride,

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, here is the 3D example, but unfortunately it does not work. I believe that there is some problem with the stride. What do you think? Thanks again to everyone, Diego. On 16 January 2015 at 19:20, Diego Avesani wrote: Dear all, the 2D example is in the attachment. Now I will try

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, the 2D example is in the attachment; now I will try the 3D example. What do you think of it? Is it correct? The idea is to build a 2D data type to send 3D data. Diego. On 16 January 2015 at 18:19, Diego Avesani wrote: Dear George, dear all, and what do you think

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
t have to. George. On Jan 16, 2015, at 11:32, Diego Avesani wrote: Dear all, could I use MPI_PACK? Diego. On 16 January 2015 at 16:26, Diego Avesani wrote: Dear George, dear all,

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, could I use MPI_PACK? Diego. On 16 January 2015 at 16:26, Diego Avesani wrote: Dear George, dear all, I have been studying. It is clear for the 2D case QQ(:,:). For example, if real :: QQ(npt,9), with 9 being the characteristics of each particle,

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
in a contiguous buffer originally discontinuous elements. As a result there is no need to use MPI_TYPE_VECTOR; instead you can just use the type you created so far (MPI_my_STRUCT) with a count. George. On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani
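In code, that advice amounts to a one-liner (a sketch; names assumed, with MPI_my_STRUCT committed and resized so its extent equals the element spacing):

    ! n consecutive particles travel with a plain count, no extra vector type:
    CALL MPI_SEND(particles(1), n, MPI_my_STRUCT, dest, tag, MPI_COMM_WORLD, ierr)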

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
MPI_Type_vector using strides here: https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types and a similar one here: http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html Gus Correa. On 01

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-15 Thread Diego Avesani
Dear George, dear Gus, dear all, could you please tell me where I can find a good example? I am sorry, but I cannot understand the 3D array. Really, thanks, Diego. On 15 January 2015 at 20:13, George Bosilca wrote: On Jan 15, 2015, at 06:02, Diego Avesani wrote: Dear G

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-15 Thread Diego Avesani
with MPI 3, but for what you're doing 1 and 2 are more than enough.] On 01/13/2015 09:22 AM, Diego Avesani wrote: Dear all, I had some wonderful discussions about MPI_type_create_struct and isend/irecv with Gilles, Gustavo, Georg

[OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-13 Thread Diego Avesani
Dear all, I had some wonderful discussions about MPI_type_create_struct and isend/irecv with Gilles, Gustavo, George, Gus, Tom and Jeff. Now everything is clearer and my program works. Now I have another question. In my program I have a matrix QQMLS(:,:,:) that is allocated as ALLOCATE(QQMLS(9,npt,18

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
Dear all, thanks a lot, really thanks a lot. Diego. On 9 January 2015 at 19:56, Jeff Squyres (jsquyres) wrote: On Jan 9, 2015, at 1:54 PM, Diego Avesani wrote: What does "YMMV" mean? http://netforbeginners.about.com/od/xyz/f/What-Is-YMMV.htm

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
What does "YMMV" mean? On 9 January 2015 at 19:44, Jeff Squyres (jsquyres) wrote: YMMV. Diego

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
Dear Jeff, dear George, dear Dave, dear all, so is it correct to use MPI_Waitall? Is my program OK now? Do you see other problems? Thanks again, Diego. On 9 January 2015 at 18:39, George Bosilca wrote: I totally agree with Dave here. Moreover, based on the logic exposed by Jeff, there

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
the MPI_Isend and MPI_Irecv is incorrect. Also, printing the supposedly received values (line 127) is incorrect, as there is no reason for the non-blocking receive to have completed at that moment. George. On Thu,

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
after the MPI_Isend and MPI_Irecv is incorrect. Also, printing the supposedly received values (line 127) is incorrect, as there is no reason for the non-blocking receive to have completed at that moment. George. On Thu, Jan 8, 2015 at 5:06 PM,

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Diego Avesani
Dear Gus, dear all, so are you suggesting using DOUBLE PRECISION and not REAL(dp)? Thanks again, Diego. On 9 January 2015 at 00:02, Gus Correa wrote: On 01/08/2015 05:50 PM, Diego Avesani wrote: Dear George, dear all, what are the other issues?

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Diego Avesani
MPI_Type_create_resized for the purposes of a small example. The specific use of it in this program appears to be superfluous. On Jan 8, 2015, at 4:26 AM, Gilles Gouaillardet <gilles.gouaillar...@iferc.org> wrote:

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
With array bounds checking, your program returns an out-of-bounds error in the mpi_isend call at line 104. It looks like 'send_request' should be indexed with 'sendcount', not 'icount'. T. Rosmond. On Thu, 2015-01-0

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
the attachment. Diego. On 8 January 2015 at 19:44, Diego Avesani wrote: Dear all, I found the error. There is an Ndata2send(iCPU) instead of Ndata2recv(iCPU). The correct version of the program is in the attachment. Just one thing: could you check whether the

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear all, I found the error. There is an Ndata2send(iCPU) instead of Ndata2recv(iCPU). The correct version of the program is in the attachment. Just one thing: could you check whether the use of MPI_WAITALL and MPI_BARRIER is correct? Thanks again, Diego. On 8 January 2015 at 18:48, Diego

[OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear all, thanks a lot; I am learning a lot. I have written a simple program that sends vectors of integers from one CPU to another. The program is written (at least for now) for 4 CPUs. The program is quite simple: each CPU knows how much data it has to send to the other CPUs. This info is then

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Diego Avesani
My bad, I should have passed displacements(1) to MPI_Type_create_struct; here is an updated version. (Note you have to use a REQUEST integer for MPI_Isend and MPI_Irecv, and you also have to call MPI_Wait to ensure the requests complete.) Cheers,

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-07 Thread Diego Avesani
irecv subroutines (you can find them in the attachment). Thanks again, Diego. On 5 January 2015 at 15:54, Diego Avesani wrote: Dear Gilles, thanks, thanks a lot. Now it is clearer. Again, thanks a lot. Diego. MODULE MOD_PRECISION integer, parame

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-05 Thread Diego Avesani
Dear Gilles, thanks, thanks a lot. Now it is clearer. Again, thanks a lot. Diego

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-05 Thread Diego Avesani
updated version. Cheers, Gilles. On 2015/01/05 2:32, Diego Avesani wrote: Dear Gilles, dear all, it works. The only thing that was missing is: CALL MPI_Finalize(MPI%iErr) at the end of the program. Now I have to test it by sending so

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-04 Thread Diego Avesani
On Sun, Jan 4, 2015 at 6:48 PM, Diego Avesani wrote: Dear Gilles, dear all, you can find the program in the attachment. What do you mean by "remove mpi_get_address(dummy) from all displacements"? Thanks for all yo

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-04 Thread Diego Avesani
your program instead of a snippet? Gus is right about using double precision vs real and -r8. Cheers, Gilles. Diego Avesani wrote: Dear Gilles, dear all, I have done all that to avoid padding an integer, as suggested by George. I

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-02 Thread Diego Avesani
aligned. And the lower bound should be zero. BTW, which compiler are you using? Is the tParticle object in a common? IIRC, the Intel compiler aligns types automatically, but not commons, and that means MPI_Type_create_struct is not aligned as it should be most of the time.

[OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-02 Thread Diego Avesani
Dear all, I have a problem with MPI_Type_Create_Struct and MPI_TYPE_CREATE_RESIZED. I have this variable type: TYPE tParticle; INTEGER :: ip; REAL :: RP(2); REAL :: QQ(2); ENDTYPE tParticle. Then I define: Nstruct=3; ALLOCATE(TYPES(Nstruct)); ALLOCATE(LENGTHS(
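For a layout like tParticle, a sketch of the whole construction (names beyond the type definition are assumptions): displacements are measured from the first element, and the extent is taken from the spacing of two array elements so padding is included.

    TYPE(tParticle) :: dummy(2)
    INTEGER :: types(3), lengths(3), tmp_type, MPI_PARTICLE, ierr
    INTEGER(KIND=MPI_ADDRESS_KIND) :: base, disp(3), lb, extent
    types   = (/ MPI_INTEGER, MPI_REAL, MPI_REAL /)
    lengths = (/ 1, 2, 2 /)
    CALL MPI_GET_ADDRESS(dummy(1),    base,    ierr)
    CALL MPI_GET_ADDRESS(dummy(1)%ip, disp(1), ierr)
    CALL MPI_GET_ADDRESS(dummy(1)%RP, disp(2), ierr)
    CALL MPI_GET_ADDRESS(dummy(1)%QQ, disp(3), ierr)
    disp = disp - base
    CALL MPI_TYPE_CREATE_STRUCT(3, lengths, disp, types, tmp_type, ierr)
    ! Resize so MPI steps exactly one tParticle (padding included) per element:
    CALL MPI_GET_ADDRESS(dummy(2), extent, ierr)
    extent = extent - base
    lb = 0_MPI_ADDRESS_KIND
    CALL MPI_TYPE_CREATE_RESIZED(tmp_type, lb, extent, MPI_PARTICLE, ierr)
    CALL MPI_TYPE_COMMIT(MPI_PARTICLE, ierr)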

Re: [OMPI users] ISEND + IRECV in a cycle stuck

2014-12-29 Thread Diego Avesani
Dear all, sorry for taking your time. I have found the solution: icount=1 DO iCPU=0,MPI%nCPU-1 IF(iCPU.NE.MPI%rank)THEN iTag=iCPU CALL MPI_ISEND(Ndata2send(iCPU),1,MPI_INTEGER,iCPU,iTag,MPI_COMM_WORLD,send_request(icount),MPI%iErr) icount=icount+1 ENDIF ENDDO ic
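Filling in the receive side that the snippet truncates (a sketch under the thread's naming; tags are chosen so each sender's tag, the destination rank, matches the tag the receiver posts, its own rank):

    icount = 1
    DO iCPU = 0, MPI%nCPU-1
       IF (iCPU .NE. MPI%rank) THEN
          ! Tag by destination on the send, by self on the matching receive:
          CALL MPI_ISEND(Ndata2send(iCPU), 1, MPI_INTEGER, iCPU, iCPU, &
                         MPI_COMM_WORLD, send_request(icount), MPI%iErr)
          CALL MPI_IRECV(Ndata2recv(iCPU), 1, MPI_INTEGER, iCPU, MPI%rank, &
                         MPI_COMM_WORLD, recv_request(icount), MPI%iErr)
          icount = icount + 1
       END IF
    END DO
    CALL MPI_WAITALL(MPI%nCPU-1, send_request, MPI_STATUSES_IGNORE, MPI%iErr)
    CALL MPI_WAITALL(MPI%nCPU-1, recv_request, MPI_STATUSES_IGNORE, MPI%iErr)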

[OMPI users] ISEND + IRECV in a cycle stuck

2014-12-29 Thread Diego Avesani
Dear all, I have the following problem: in my program each rank has a vector, where the position indicates where I have to send the data. For example, for rank 0 I have: ALLOCATE(Ndata2send(0:MPI%nCPU-1)); Ndata2send(:) = 0,10,10,16, where MPI%nCPU is the number of CPUs, in my case 4. This m
