It is attached in the previous mail.

2014-10-03 16:47 GMT+00:00 Diego Avesani <diego.aves...@gmail.com>:
> Dear N.,
> thanks for the explanation.
>
> I am really, really sorry, but I am not able to see your example. Where
> is it?
>
> thanks again
>
> Diego
>
>
> On 3 October 2014 18:35, Nick Papior Andersen <nickpap...@gmail.com> wrote:
>
>> 2014-10-03 16:27 GMT+00:00 Diego Avesani <diego.aves...@gmail.com>:
>>
>>> dear N.,
>>> here are my results:
>>>
>>>   0.200000002980232
>>>   0.200000000000000
>>>   1.00000000000000
>>>   0.200000002980232
>>>   0.200000000000000
>>>   1.00000000000000
>>>   1.00000000000000
>>>   1.00000000000000
>>>
>>> I suppose that in the case of 0.2 we have a real that is different in
>>> double and in single precision. When I write 0.2_dp I force the program
>>> to fill the empty space in memory with zeros.
>>
>> Correct. When doing "dp = 0.2" it casts the 0.2 to double precision,
>> see below.
>>
>>> In the second case we have an integer that the program treats as a
>>> real, and as a consequence the program automatically fills the empty
>>> space with zeros.
>>
>> Not correct (at least not in this example): a real 1. (and any whole
>> number up to a certain magnitude) is perfectly representable in binary.
>> Hence conversion between different precisions will not lose any
>> precision. The 0.2 case, however, is only "approximately" 0.2 within
>> the limits of the precision, so converting a single-precision 0.2 to a
>> double-precision 0.2_dp will "guess" the last digits. Not exactly, but
>> you get the point.
>>
>>> Am I right?
>>> What do you suggest as the next step?
>>
>> ??? The example I sent you worked perfectly.
>> Good luck!
>>
>>> Could I create a derived-type variable and try to send it from one
>>> processor to another with MPI_SEND and MPI_RECV?
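Such a derived-type send/receive could be sketched like this (a minimal
sketch, to be run with exactly 2 processes; the two-field type and all
names are illustrative assumptions, not taken from the attached code):

```fortran
! Sketch: build an MPI datatype for a derived type, then SEND/RECV it.
program send_type
  use mpi
  implicit none
  integer, parameter :: dp = selected_real_kind(15, 307)
  type :: particle
     integer  :: id
     real(dp) :: mass
  end type particle
  type(particle) :: p
  integer :: rank, ierr, ptype
  integer :: blocks(2), types(2)
  ! Addresses/displacements must use MPI_ADDRESS_KIND, not default integer.
  integer(kind=MPI_ADDRESS_KIND) :: displs(2), base

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Describe the layout: block lengths, displacements, component types.
  blocks = (/ 1, 1 /)
  types  = (/ MPI_INTEGER, MPI_DOUBLE_PRECISION /)
  call MPI_GET_ADDRESS(p%id,   displs(1), ierr)
  call MPI_GET_ADDRESS(p%mass, displs(2), ierr)
  base   = displs(1)
  displs = displs - base          ! displacements relative to the start of p

  call MPI_TYPE_CREATE_STRUCT(2, blocks, displs, types, ptype, ierr)
  call MPI_TYPE_COMMIT(ptype, ierr)

  if (rank == 0) then
     p = particle(7, 0.2_dp)
     call MPI_SEND(p, 1, ptype, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_RECV(p, 1, ptype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
     print *, rank, p%id, p%mass
  end if

  call MPI_TYPE_FREE(ptype, ierr)
  call MPI_FINALIZE(ierr)
end program send_type
```

Run it with `mpirun -np 2 ./send_type`; rank 1 should print the id and
mass that rank 0 initialized.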
>>>
>>> Again, thanks
>>>
>>> Diego
>>>
>>>
>>> On 3 October 2014 18:04, Nick Papior Andersen <nickpap...@gmail.com> wrote:
>>>
>>>> Dear Diego,
>>>> Instead of instantly going about using cartesian communicators you
>>>> should try to create a small test case, something like this:
>>>>
>>>> I have successfully run this small snippet on my machine.
>>>> As I state in the source, the culprit was the integer address size. It
>>>> is inherently of type long, whereas you used integer.
>>>> Running it (with ONLY 2 processors) should print:
>>>>
>>>>   1   1.0000000000000000   1.0000000000000000   11.000000000000000   11.000000000000000
>>>>
>>>> Please notice the other things I comment on; they can turn out to be
>>>> important!
>>>>
>>>> For instance, try this:
>>>>
>>>>   real(dp) :: a
>>>>   a = 0.2
>>>>   print *, a
>>>>   a = 0.2_dp
>>>>   print *, a
>>>>
>>>> Try to understand why the output is not as expected!
>>>>
>>>> Also try to understand why this has no problems:
>>>>
>>>>   real(dp) :: a
>>>>   a = 1.
>>>>   print *, a
>>>>   a = 1._dp
>>>>   print *, a
>>>>
>>>>
>>>> 2014-10-03 15:41 GMT+00:00 Diego Avesani <diego.aves...@gmail.com>:
>>>>
>>>>> Dear Nick,
>>>>> thanks again, I am learning a lot; do not be afraid to be rude.
>>>>>
>>>>> Diego
>>>>>
>>>>>
>>>>> On 3 October 2014 17:38, Diego Avesani <diego.aves...@gmail.com> wrote:
>>>>>
>>>>>> Dear Jeff, Dear Nick,
>>>>>> the question is about inserting the flag for using -r8.
>>>>>>
>>>>>> Now I have written a simple code with selected_real_kind to avoid
>>>>>> -r8. I get the same error.
>>>>>> You can find the code in the attachment.
>>>>>>
>>>>>> Probably there is something wrong with the Open MPI configuration.
>>>>>>
>>>>>> What do you think?
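Nick's remark about the integer address size boils down to this contrast
(a fragment, assuming `use mpi` and some buffer `buf` are in scope; the
names are illustrative):

```fortran
! Addresses returned by MPI_GET_ADDRESS must NOT go into a default integer:
!   integer :: addr                     ! wrong: truncated on 64-bit systems
! They are inherently "long"; use the kind the MPI module provides:
integer(kind=MPI_ADDRESS_KIND) :: addr
call MPI_GET_ADDRESS(buf, addr, ierr)
```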
>>>>>>
>>>>>> Again, thanks and thanks a lot
>>>>>>
>>>>>> Diego
>>>>>>
>>>>>>
>>>>>> On 3 October 2014 17:18, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>>>>>>
>>>>>>> On Oct 3, 2014, at 10:55 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
>>>>>>>
>>>>>>> > Dear Jeff,
>>>>>>> > how can I do that?
>>>>>>>
>>>>>>> Er... can you be more specific? I mentioned several things in my
>>>>>>> email.
>>>>>>>
>>>>>>> If you're asking about how to re-install OMPI compiled with -r8,
>>>>>>> please first read Nick's email (essentially asking "why are you
>>>>>>> using -r8, anyway?").
>>>>>>>
>>>>>>> --
>>>>>>> Jeff Squyres
>>>>>>> jsquy...@cisco.com
>>>>>>> For corporate legal information go to:
>>>>>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> users mailing list
>>>>>>> us...@open-mpi.org
>>>>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>>>> Link to this post:
>>>>>>> http://www.open-mpi.org/community/lists/users/2014/10/25450.php
>>>>
>>>>
>>>> --
>>>> Kind regards Nick
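The alternative Nick is hinting at, fixing the kind in the source instead
of re-installing Open MPI with -r8, could be sketched as (a fragment;
the names are illustrative):

```fortran
! Declare the kind explicitly instead of promoting all reals with -r8:
integer, parameter :: dp = selected_real_kind(15, 307)   ! IEEE double precision
real(dp) :: x(10)

! ...and pass the MPI datatype that matches that kind:
call MPI_SEND(x, 10, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
```

This keeps the declared Fortran kind and the MPI datatype in agreement
without any compiler flags, which is why -r8 is unnecessary here.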
>>
>> --
>> Kind regards Nick
>

--
Kind regards Nick