Hi,

Thanks for such a detailed reply. You are right: we have partitioned our system with Xen, and we have seen that the virtualization overhead is not that great (for some applications) compared to the potential benefits we can get. We have run various benchmarks on different network/cluster configurations of Xen and native Linux, and the results are really encouraging. The only known problem is Xen's inter-domain communication, which is quite poor (about 1/6 of native memory-transfer performance, not to mention 50% CPU utilization on the host). We have tested XenSocket, and these sockets give us a really good performance boost in all respects.

Now that I am looking at the complex yet wonderful architecture of Open MPI, could you give me some guidance on a couple of naive questions?
1- How do I view the console output of an MPI process that is not on the headnode? Do I need a parallel debugger, or is there some magical technique?

2- How do I set up the GPR? Say I have a struct foo, and every process has at least one instance of foo. From what I gather, Open MPI will create a linked list of the foo's that were passed on (though I am unable to pass one on). Where do I have to define struct foo so that it can be exchanged between the processes? I know it's a lame question, but I think I am getting lost in the sea. :(

Best Regards,
Muhammad Atif

PS: I am totally new to MPI internals, so if we do decide to go ahead with the project, I will be bugging this list regularly.

----- Original Message ----
From: Adrian Knoth <a...@drcomp.erfurt.thur.de>
To: Open MPI Developers <de...@open-mpi.org>
Sent: Thursday, January 10, 2008 1:24:01 AM
Subject: Re: [OMPI devel] btl tcp port to xensocket

On Tue, Jan 08, 2008 at 10:51:45PM -0800, Muhammad Atif wrote:

> I am planning to port tcp component to xensocket, which is a fast
> interdomain communication mechanism for guest domains in Xen. I may

Just to get things right: you first partition your SMP/multicore system with Xen, and then want to re-combine it later for MPI communication? Wouldn't it be easier to leave the unpartitioned host alone and use shared memory communication instead?

> As per design, and the fact that these sockets are not normal sockets,
> I have to pass certain information (basically memory references, guest
> domain info etc) to other peers once sockets have been created. I

There's ORTE, the runtime environment. It employs OOB/tcp to provide a so-called out-of-band channel. ORTE also provides a general purpose registry (GPR). Once a TCP connection between the headnode process and all other peers is established, you can store your required information in the GPR.

> understand that mca_pml_base_modex_send and recv (or simply using
> mca_btl_tcp_component_exchange) can be used to exchange information,

Use mca_pml_base_modex_send (now ompi_modex_send) and encode your required information. It gets stored in the GPR. Read it back with mca_pml_base_modex_recv (ompi_modex_recv), as is done in mca_btl_tcp_component_exchange and mca_btl_tcp_proc_create.

> but I cannot seem to get them to communicate. So to put my question in
> a very simple way..... I want to create a socket structure containing
> necessary information, and then pass it to all other peers before
> start of actual mpi communication. What is the easiest way to do it.

Quite the same way. mca_btl_tcp_component_exchange assembles the required information and stores it in the GPR by calling ompi_modex_send. mca_btl_tcp_proc_create (think of "the other peers") reads this information into local context.

I guess you might want to copy btl/tcp to, let's say, btl/xen, so you can modify internal structures if required. Perhaps xensockets don't need IP addresses, as they are actually memory sockets. However, you'll still need TCP communication between Xen guests for the OOB channel.

As mentioned above, I'm not sure if it's reasonable to use Xen and MPI at all.
Virtualization overhead might decrease your performance, and that's usually the last thing you want to have when using MPI ;)

HTH

--
Cluster and Metacomputing Working Group
Friedrich-Schiller-Universität Jena, Germany

private: http://adi.thur.de
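
For question 2, the pattern Adrian describes might look roughly like the sketch below for a hypothetical btl/xen component. It is only a sketch modelled on mca_btl_tcp_component_exchange() and mca_btl_tcp_proc_create(): every mca_btl_xen_* name, the struct fields and the two helper functions are invented for illustration, and the ompi_modex_send/ompi_modex_recv signatures should be double-checked against ompi/runtime/ompi_module_exchange.h in the tree you are working from.

/* Hypothetical btl/xen modex exchange -- a sketch only, modelled on
 * mca_btl_tcp_component_exchange() and mca_btl_tcp_proc_create().
 * mca_btl_xen_component is assumed to be declared in btl_xen.h,
 * analogous to mca_btl_tcp_component; it does not exist in the tree. */

#include <stdint.h>
#include "ompi_config.h"
#include "ompi/constants.h"                     /* OMPI_SUCCESS, OMPI_ERR_UNREACH */
#include "ompi/proc/proc.h"                     /* ompi_proc_t                    */
#include "ompi/runtime/ompi_module_exchange.h"  /* ompi_modex_send/recv           */

/* The per-process info every peer should see -- your "struct foo". */
struct mca_btl_xen_addr_t {
    uint32_t addr_domid;    /* Xen domain id of the publishing process */
    uint32_t addr_gref;     /* grant reference / shared-memory handle  */
};
typedef struct mca_btl_xen_addr_t mca_btl_xen_addr_t;

/* Send side: called once during component init, before any MPI traffic.
 * Publishes our local info so every other process can read it later.  */
static int mca_btl_xen_component_exchange(uint32_t domid, uint32_t gref)
{
    mca_btl_xen_addr_t addr;
    addr.addr_domid = domid;   /* convert to a fixed byte order here if */
    addr.addr_gref  = gref;    /* domains may run on mixed-endian hosts */

    return ompi_modex_send(&mca_btl_xen_component.super.btl_version,
                           &addr, sizeof(addr));
}

/* Receive side: the xen analogue of mca_btl_tcp_proc_create().
 * "proc" identifies the remote peer whose published info we want.     */
static int mca_btl_xen_proc_lookup(ompi_proc_t *proc,
                                   mca_btl_xen_addr_t **remote)
{
    size_t size;
    int rc = ompi_modex_recv(&mca_btl_xen_component.super.btl_version,
                             proc, (void **)remote, &size);
    if (OMPI_SUCCESS != rc || sizeof(**remote) != size) {
        return OMPI_ERR_UNREACH;
    }
    /* (*remote)->addr_domid / addr_gref can now be used to set up the
     * xensocket to this peer before the first fragment is sent.       */
    return OMPI_SUCCESS;
}

As for where to define struct foo: the way btl/tcp does it, the address struct lives in the component's own header (mca_btl_tcp_addr_t in btl_tcp_addr.h), and the modex only moves opaque bytes, so the sender and the receiver merely have to agree on the struct layout.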