Re: [MTT devel] [MTT svn] svn:mtt-svn r1176

2008-04-04 Thread Jeff Squyres
Um -- yeah, probably. :-) But there's also likely no harm in leaving them there. :-) On Apr 4, 2008, at 4:29 PM, Ethan Mallove wrote: I like the "all" keyword. Are these no longer needed? _mpi_get_names() _mpi_install_names() _test_get_names() _test_build_names() -Ethan On Fri,

Re: [MTT devel] Launch scaling data in MTT

2008-04-04 Thread Jeff Squyres
MTT probably could gather this data -- some of it was wall-clock execution time; the rest was various pieces of data extracted from stdout. Ralph -- is this interesting / useful to you? On Apr 4, 2008, at 4:56 PM, Ethan Mallove wrote: I was looking at the graphs posted at
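The snippet above mentions harvesting timing data from stdout. A minimal sketch of that idea, assuming a hypothetical output format ("name time: value sec") that is not what MTT or ORTE actually emits:

```python
import re

# Hypothetical stdout from a launch-scaling run; the line format here is an
# assumption for illustration only, not MTT's or ORTE's real output.
sample_stdout = """\
Starting 128 processes...
launch time: 0.42 sec
barrier time: 0.07 sec
"""

def extract_timings(text):
    """Pull 'name time: value sec' pairs out of captured stdout."""
    pattern = re.compile(r"^(\w+) time:\s*([\d.]+)\s*sec", re.MULTILINE)
    return {name: float(val) for name, val in pattern.findall(text)}

print(extract_timings(sample_stdout))  # e.g. {'launch': 0.42, 'barrier': 0.07}
```

The point is simply that wall-clock numbers and stdout-derived metrics can be folded into the same per-run record that MTT already stores.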

[OMPI devel] Build failure on FreeBSD 7

2008-04-04 Thread Karol Mroz
Hello everyone... it's been some time since I posted here. I pulled the latest svn revision (18079) and had some trouble building Open MPI on a FreeBSD 7 machine (i386). Make failed when compiling opal/event/kqueue.c. It appears that FreeBSD needs sys/types.h, sys/ioctl.h, termios.h and

Re: [MTT devel] [MTT svn] svn:mtt-svn r1176

2008-04-04 Thread Ethan Mallove
I like the "all" keyword. Are these no longer needed? _mpi_get_names() _mpi_install_names() _test_get_names() _test_build_names() -Ethan On Fri, Apr/04/2008 03:31:07PM, jsquy...@osl.iu.edu wrote: > Author: jsquyres > Date: 2008-04-04 15:31:07 EDT (Fri, 04 Apr 2008) > New Revision: 1176

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Ralph H Castain
Okay, I have a partial fix in there now. You'll have to use -mca routed unity as I still need to fix it for routed tree. Couple of things: 1. I fixed the --debug flag so it automatically turns on the debug output from the data server code itself. Now ompi-server will tell you when it is

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Ralph H Castain
Well, something got borked in here - will have to fix it, so this will probably not get done until next week. On 4/4/08 12:26 PM, "Ralph H Castain" wrote: > Yeah, you didn't specify the file correctly...plus I found a bug in the code > when I looked (out-of-date a little in

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Edgar Gabriel
actually, we used LZO a looong time ago with PACX-MPI; it was indeed faster than zlib. Our findings at that time were, however, similar to what George mentioned, namely that a benefit from compression was only visible if the network latency was really high (e.g. multiple ms)... Thanks Edgar Roland

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Jeff Squyres
LZO looks cool, but it's unfortunately GPL (Open MPI is BSD). Bummer. On Apr 4, 2008, at 2:29 PM, Roland Dreier wrote: Based on some discussion on this list, I integrated a zlib-based compression ability into ORTE. Since the launch message sent to the orteds and the modex between the

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread Roland Dreier
> Based on some discussion on this list, I integrated a zlib-based compression > ability into ORTE. Since the launch message sent to the orteds and the modex > between the application procs are the only places where messages of any size > are sent, I only implemented compression for those two
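The modex and launch messages compress well because they are highly repetitive (each proc publishes near-identical key/value blobs). A small sketch of that effect using zlib; the payload format below is an invented stand-in, not ORTE's real modex encoding:

```python
import zlib

# Illustrative stand-in for a modex-like payload: 256 procs publishing
# near-identical records, which is exactly the kind of data zlib shrinks well.
modex = b"".join(
    b"rank=%d;btl=tcp,self;arch=x86_64;hostname=node%03d;" % (r, r // 8)
    for r in range(256)
)

compressed = zlib.compress(modex, level=6)
ratio = len(compressed) / len(modex)
print(len(modex), len(compressed), round(ratio, 3))
```

Whether the size reduction translates into faster launches then depends on the network, which is what the rest of the thread debates.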

Re: [OMPI devel] Affect of compression on modex and launch messages

2008-04-04 Thread George Bosilca
Ralph, There are several studies about compression and data exchange. A few years ago we integrated such a mechanism (adaptive compression of communication) into one of the projects here at ICL (called GridSolve). The idea was to optimize the network traffic for sending large matrices used for
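The experience reported in this thread -- compression only pays off on slow (high-latency, low-bandwidth) links -- can be captured in a back-of-the-envelope model. All numbers below are illustrative assumptions, not measurements from ORTE or GridSolve:

```python
# Toy model: compression wins when the CPU time it costs is smaller than the
# transfer time it saves. Every constant here is an assumed example value.
def send_time(size_bytes, latency_s, bandwidth_Bps):
    return latency_s + size_bytes / bandwidth_Bps

def compressed_send_time(size_bytes, ratio, cpu_s, latency_s, bandwidth_Bps):
    # cpu_s covers compression plus decompression overhead.
    return cpu_s + send_time(size_bytes * ratio, latency_s, bandwidth_Bps)

size = 1_000_000          # 1 MB payload
ratio = 0.3               # compresses to 30% of its original size
cpu = 0.01                # 10 ms of total (de)compression work

fast = send_time(size, 50e-6, 1e9)                     # ~1 GB/s cluster link
fast_c = compressed_send_time(size, ratio, cpu, 50e-6, 1e9)
slow = send_time(size, 20e-3, 10e6)                    # ~10 MB/s WAN link
slow_c = compressed_send_time(size, ratio, cpu, 20e-3, 10e6)

print(fast < fast_c)   # fast link: compression loses
print(slow > slow_c)   # slow link: compression wins
```

On the fast link the saved transfer time is far smaller than the CPU cost; on the slow link the opposite holds, matching the GridSolve and PACX-MPI observations quoted above.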

Re: [OMPI devel] MPI_Comm_connect/Accept

2008-04-04 Thread Aurélien Bouteiller
Ralph, I've not been very successful at using ompi-server. I tried this:
xterm1$ ompi-server --debug-devel -d --report-uri test
[grosse-pomme.local:01097] proc_info: hnp_uri NULL daemon uri NULL
[grosse-pomme.local:01097] [[34900,0],0] ompi-server: up and running!
xterm2$ mpirun

Re: [OMPI devel] init_thread + spawn error

2008-04-04 Thread Tim Prins
Thanks for the report. As Ralph indicated, the threading support in Open MPI is not good right now, but we are working to make it better. I have filed a ticket (https://svn.open-mpi.org/trac/ompi/ticket/1267) so we do not lose track of this issue, and attached a potential fix to the ticket.