On Fri, Nov 20, 2009 at 3:45 PM, Jaroslav Hajek <high...@gmail.com> wrote:

>
>
> On Fri, Nov 20, 2009 at 3:31 PM, Michael Creel <michael.cr...@uab.es>wrote:
>
>>
>>
>> On Fri, Nov 20, 2009 at 3:20 PM, Jaroslav Hajek <high...@gmail.com>wrote:
>>
>>>
>>>
>>> On Fri, Nov 20, 2009 at 2:57 PM, Michael Creel <michael.cr...@uab.es>wrote:
>>>
>>>>
>>>>
>>>> On Fri, Nov 20, 2009 at 1:57 PM, Jaroslav Hajek <high...@gmail.com>
>>>> wrote:
>>>> >
>>>> >
>>>> > On Fri, Nov 20, 2009 at 1:44 PM, Michael Creel <michael.cr...@uab.es>
>>>> wrote:
>>>> >>
>>>> >> Hi all,
>>>> >>
>>>> >> In another message, Jaroslav made the comment
>>>> >>
>>>> >>> 1. info is the first output argument in all functions. I think it
>>>> should
>>>> >>> be the second one, so that it can be easily ignored if wanted.
>>>> >>> In particular, the check in examples
>>>> >>>
>>>> >>> if not(MPI_Initialized)
>>>> >>>    info = MPI_Init();
>>>> >>> end
>>>> >>>
>>>> >>> makes no sense (unless I'm missing something) because it's the info
>>>> >>> output from MPI_Initialized that is tested and that is always zero.
>>>> >>
>>>> >> This follows the MPI standard, which says:
>>>> >>>
>>>> >>> All MPI routines (except MPI_Wtime and MPI_Wtick) return an error
>>>> value;
>>>> >>> C routines as the value of the function and Fortran routines in the
>>>> last
>>>> >>> argument. Before the value is returned, the current MPI error
>>>> handler is
>>>> >>> called. By default, this error handler aborts the MPI job. The error
>>>> handler
>>>> >>> may be changed with MPI_Errhandler_set; the predefined error handler
>>>> >>> MPI_ERRORS_RETURN may be used to cause error values to be returned.
>>>> Note
>>>> >>> that MPI does not guarantee that an MPI program can continue past an
>>>> error.
>>>> >>>
>>>> >>> MPI_SUCCESS No error; MPI routine completed successfully.
>>>> >>
>>>> >> Following the standard is a big plus, in my opinion. That way,
>>>> knowledge
>>>> >> about MPI previously gained from C or Fortran will be useful when
>>>> using MPI
>>>> >> with Octave.
>>>> >
>>>> > Yes, my approach still follows the standard, it's just more convenient
>>>> to
>>>> > use. The standard says nothing about how external bindings should
>>>> behave.
>>>> >
>>>>
>>>> Right, but the C and Fortran MPI functions return only things like
>>>> info, flag, and status. The contents of messages passed back and
>>>> forth are not obtained from the return values of functions. It's certainly
>>>> possible to place a received message into the output of an MPI_Recv binding,
>>>> and there may be some logic to the idea, but it is a step away from the C
>>>> and Fortran way of doing things. Most people will learn about MPI through C
>>>> and Fortran tutorials, so following that syntax is not a bad idea. What
>>>> would you propose to use as outputs of functions?
>>>>
>>>> M.
>>>>
>>>
>>> I think you didn't understand, Michael. What I'm proposing is just to
>>> switch the order to a more convenient one.
>>> Take the simplest example: MPI_Initialized.
>>> This is now called like
>>>
>>> [info, flag] = MPI_Initialized ();
>>>
>>> where info is always zero (MPI_SUCCESS) and flag is the actual result.
>>> What I'm proposing is to use instead
>>>
>>> [flag, info] = MPI_Initialized ();
>>>
>>> because then the call
>>>
>>> flag = MPI_Initialized (); # ignore info
>>>
>>> is possible, and it is also possible to do
>>>
>>> if (MPI_Initialized)
>>>    something
>>> endif
>>>
>>> In C, the corresponding call is
>>> info  = MPI_Initialized (&flag);
>>> but there is *no* way you can preserve this structure in Octave. In
>>> Octave, every output argument must be a return value; there is no other way
>>> (unless you pass names and manipulate the caller scope directly).
>>>
>>>
>>> --
>>> RNDr. Jaroslav Hajek
>>> computing expert & GNU Octave developer
>>> Aeronautical Research and Test Institute (VZLU)
>>> Prague, Czech Republic
>>> url: www.highegg.matfyz.cz
>>>
>>
>> OK, I see what you mean. I think that new bindings should try to emulate
>> MPITB as closely as possible. For one thing, MPITB is clearly the most
>> widely used way of using MPI with Octave, so most existing code uses its
>> syntax. At the moment it is much more complete.
>
>
> But MPITB is already free software (isn't it?), so why develop a clone?
> MPITB users can still run their code using MPITB.
> But this is a chance to do things better than MPITB.
>
>
>
>> Second, if one follows MPITB syntax, then the documentation that is
>> already written will be valid.
>
>
> Documentation? What documentation?
>
>
>> I also hope that it might be possible for the projects to merge or at
>> least benefit from one another, so getting too far apart from the outset for
>> little benefit might not be a good idea.
>
>
> I wouldn't call it a little benefit. I'm not sure if any merge is possible;
> in any case I don't think there's a public repository for MPITB (vital for
> shared development). I was also hoping I could help keep this package better
> synced with current Octave development.
> Ultimately it's up to Riccardo to decide whether the development should be
> constrained by MPITB. If so, I'll probably fork my own version.
>
>
MPITB has excellent documentation, for example

octave:1> help MPI_Bcast
MPI_Bcast      Broadcasts a msg from process `root' to all others of the group

  info = MPI_Bcast (var, root, comm)

  var        Octave variable used as msg src (at root) or msg dst (all others)
  root (int) rank of broadcast root
  comm (ptr) communicator (handle)

  info (int)   return code
      0 MPI_SUCCESS    No error
      5 MPI_ERR_COMM   Invalid communicator (NULL?)
     16 MPI_ERR_OTHER  No collective implementation found (intercommunicator?)
      2 MPI_ERR_COUNT  Invalid count argument (internal to .oct file)
      3 MPI_ERR_TYPE   Invalid datatype argument (internal to .oct file)
      1 MPI_ERR_BUFFER Invalid buffer pointer (NULL?)
      8 MPI_ERR_ROOT   Invalid root rank (0..Comm_size-1)

  SEE ALSO: MPI_Barrier, MPI_Scatter, MPI_Gather, MPI_Reduce
            collective

  MPI_Bcast is a collective operation on comm (all ranks must call it)

  For  4  or  less  ranks, a linear algorithm is used, where rank 0 loops
  over sending the message to each other rank.

  If more than 4 ranks are involved, a tree-based algorithm  is  used  to
  send the messages from rank 0.

/home/user/Econometrics/MyOctaveFiles/mpitb/DLD/MPI_Bcast.oct

Additional help for built-in functions and operators is
available in the on-line version of the manual.  Use the command
`doc <topic>' to search the manual index.

Help and information about Octave is also available on the WWW
at http://www.octave.org and via the h...@octave.org
mailing list.
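The linear versus tree-based broadcast described in that help text can be sketched in plain Python (a hypothetical model, not MPITB code; the function names and the rank bookkeeping are invented for the illustration):

```python
# Sketch (plain Python, not MPITB code) of the two broadcast strategies
# the help text describes: a linear loop for small communicators and a
# tree for larger ones. Each returns the sends as rounds of
# (sender, receiver) pairs; sends within one round can proceed in parallel.

def linear_bcast(nranks, root=0):
    # root sends to every other rank, one send per round
    return [[(root, r)] for r in range(nranks) if r != root]

def tree_bcast(nranks, root=0):
    # every rank that already holds the message forwards it each round,
    # so the informed set roughly doubles per round
    have = [root]
    rounds = []
    while len(have) < nranks:
        this_round = []
        for sender in list(have):
            pending = [r for r in range(nranks) if r not in have]
            if not pending:
                break
            have.append(pending[0])
            this_round.append((sender, pending[0]))
        rounds.append(this_round)
    return rounds

# For 8 ranks: 7 sequential sends vs. the same 7 sends in only 3 rounds.
print(len(linear_bcast(8)))   # 7
print(len(tree_bcast(8)))     # 3
```

Both strategies deliver the same messages; the tree version just finishes in roughly log2(nranks) rounds instead of nranks-1, which is why the help text switches algorithms above 4 ranks.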

MPITB is very complete, too, and has had a lot of testing. It is also
licensed under the GPL 2.0. The main problem with MPITB is that it gets out of
date with respect to Octave. I really appreciate Riccardo's efforts, and as far
as I can understand it, his approach makes it relatively easy to keep up
with Octave. As a user, I would really like to see MPI bindings that are
complete and stay current with Octave, wherever they come from. PelicanHPC
is a live CD that has MPITB installed, if anyone would like to examine it. I
will be happy to do testing of new bindings, and I will try to adapt some of
the examples on PelicanHPC to use the new bindings.

Thanks, Michael
_______________________________________________
Octave-dev mailing list
Octave-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/octave-dev
