er, some sort of socket protocol is
needed to initiate the shutdown instead of a signal.
George Reeke
get them in order without filling all
the buffer space) and write to the file. I have not timed this against
other methods, but it might be worth a try.
George Reeke
On Fri, 2019-03-15 at 18:02 +, Sergio None wrote:
> Jeff,
>
> Yes, I know that C99 has these fixed-width types in its standard library.
>
> The point is that many commonly used functions, rand for example, return
> non-fixed types. And then you need to do casts, typically to bigger
> types. It can be a bit a
the sysdef. Then the code should use the typedef names.
In the MPI_Send, MPI_Recv calls I usually call the type MPI_BYTE and
give the actual lengths in bytes which I compute once at the time
of the malloc and store in a global common block.
George Reeke
___
d would like
to participate.
George Reeke
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
environment variable to name a path for my standard output.
My code, when it finds that variable, opens that file and writes
everything to it instead of stdout (I write from the Rank 0 node
only). Then openmpi (or slurm) can write to stdout all it wants.
George Reeke
___
terminate with MPI_Finalize.
Of course it is more complicated than that to handle special cases
like termination before everything has really started or when the
protocol is not followed, debug messages that do not initiate
termination, etc. but maybe this will give you an idea for one
way to deal with it.
st dimension drop out) sends that value
and rank to the neighbor in the next dimension, until the value and rank
of the highest value all end up at rank 0 (or wherever you program it).
George Reeke
___
g the original
message passing calls with corresponding mpi calls even though all
the processors are now equivalent Intel cpus). Did I mention:
pointers within the data tree are maintained on each processor with
likely different values. Oh, and if anybody wants this, you have to
accept GPL licensing.
send it privately to anyone who really thinks they can
use it and is willing to get their hands dirty.
George Reeke (private email: re...@rockefeller.edu)
On Tue, 2018-04-03 at 23:39 +, Jeff Squyres (jsquyres) wrote:
> On Apr 2, 2018, at 1:39 PM, dpchoudh . wrote:
> >
> >
---
Indeed the packages mentioned are not installed. I found some
discussion of this at https://github.com/open-mpi/ompi/issues/1087
which claims this message should really be about "hwloc" which is
another thing I know nothing about.
Does any of this help?
I thought of using old-fashioned 'fork' but I really want the
extra communicators to keep asynchronous messages separated.
The documentation says overloading is OK by default, so maybe
something else is wrong here.
George Reeke
___
Dear Gilles et al,
You are correct. I solved the problem based on an email I got
privately based on the same idea. I have just posted that private
reply so the solution will be more widely known among us amateurs.
Thanks,
George Reeke
On Tue, 2017-01-31 at 09:07 +0900, Gilles
On Mon, 2017-01-30 at 16:31 -0500, George Reeke wrote:
> Dear colleagues,
> I am trying MPI_Type_create_struct for the first time.
> I want to combine a small structure (two ints) with a vector of
> variable length to send in a single message. Here is a simplified
> extract of
MPI_Type_contiguous(lblk, MPI_UNSIGNED_CHAR, &MPBlk);
MPI_Type_commit(&MPBlk);
GHect[1] = nblks;
MPI_Get_address(pv, GHoff+1);
MPI_Get_address(&sGHdr, GHoff);
MPI_Type_create_struct(2, GHect, GHoff, GHtyp, &MPPkt);
segfault on this call; it never gets to the following commit.
Any suggestions would be most welcome.
it should go in /etc.
Hope this helps.
George Reeke
___
g source and running
the usual ./configure-and-make sequence? I understand some library
references may also require updating, I can deal with that, but
I want to have my old system back in case the install fails.
Would you advise trying this in a VM?
Thanks,
main program, as it is about 40,000
lines of C code, recently updated for parallel processing with MPI.
When I have time I will try to make a short version for further testing.
Thanks,
George Reeke
On Mon, 2016-10-10 at 21:37 -0400, George Bosilca wrote:
> George,
>
>
> There
all processes are running on one node, my laptop
with an i7 processor. I set the "-mca btl_tcp_if_include lo" parameter
earlier when I got an error message about a refused connection
(that my code never asked for in the first place). This got rid
of that error message but application sti