The code is in ompi/mca/fcoll/two_phase/. It is essentially the same as 
ROMIO's two-phase implementation, ported to OMPIO. In this code the 
buf_indices for the send buffer are declared as int, but they should be 
something like size_t. That change fixes the problem in fcoll/two_phase; 
I am not sure whether the same fix applies to the ROMIO implementation. 

Thanks
Vish

Vishwanath Venkatesan
Graduate Research Assistant
Parallel Software Technologies Lab
Department of Computer Science
University of Houston
TX, USA
www.cs.uh.edu/~venkates


On Nov 8, 2012, at 2:08 PM, Rayson Ho wrote:

> Vishwanath,
> 
> Can you point me to the two_phase module code? (I just wanted to make
> sure that we are looking at the same problem.)
> 
> Rayson
> 
> ==================================================
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> 
> 
> On Thu, Nov 8, 2012 at 1:58 PM, Vishwanath Venkatesan
> <venka...@cs.uh.edu> wrote:
>> I just checked the code that tests the 2GB limitation in OMPIO. The code
>> works with OMPIO's "fcoll dynamic" module. However, it does hit the same
>> 2GB limitation with the two_phase module (which is based on ROMIO's
>> implementation) and with the static module. I have a fix for both of these
>> modules and will commit it to trunk shortly.
>> 
>> 
>> Thanks
>> Vish
>> 
>> Vishwanath Venkatesan
>> Graduate Research Assistant
>> Parallel Software Technologies Lab
>> Department of Computer Science
>> University of Houston
>> TX, USA
>> www.cs.uh.edu/~venkates
>> 
>> 
>> On Nov 7, 2012, at 3:47 AM, Ralph Castain wrote:
>> 
>> Hi Rayson
>> 
>> We take snapshots from time to time. We debated whether or not to update
>> again for the 1.7 release, but ultimately decided not to do so - IIRC, none
>> of our developers had the time.
>> 
>> If you are interested and willing to do the update, and perhaps look at
>> removing the limit, that is fine with me! You might check to see if the
>> latest ROMIO can go past 2GB - could be that an update is all that is
>> required.
>> 
>> Alternatively, you might check with Edgar Gabriel about the ompio component
>> and see if it either supports > 2GB sizes or can also be extended to do so.
>> Might be that a simple change to select that module instead of ROMIO would
>> meet the need.
>> 
>> Appreciate your interest in contributing!
>> Ralph
>> 
>> 
>> On Tue, Nov 6, 2012 at 11:55 AM, Rayson Ho <raysonlo...@gmail.com> wrote:
>>> 
>>> How is the ROMIO code in Open MPI developed & maintained? Do Open MPI
>>> releases take snapshots of the ROMIO code from time to time from the
>>> ROMIO project, or was the ROMIO code forked a while ago and maintained
>>> separately in Open MPI??
>>> 
>>> I would like to fix the 2GB limit in the ROMIO code... and that's why
>>> I am asking! :-D
>>> 
>>> Rayson
>>> 
>>> ==================================================
>>> Open Grid Scheduler - The Official Open Source Grid Engine
>>> http://gridscheduler.sourceforge.net/
>>> 
>>> 
>>> On Thu, Nov 1, 2012 at 6:21 PM, Richard Shaw <jr...@cita.utoronto.ca>
>>> wrote:
>>>> Hi Rayson,
>>>> 
>>>> Just seen this.
>>>> 
>>>> In the end we've worked around it by creating successive views of the
>>>> file that are all less than 2GB, and then offsetting them to eventually
>>>> read in everything. It's a bit of a pain to keep track of, but it works
>>>> for the moment.
>>>> 
>>>> I was intending on following your hints and trying to fix the bug
>>>> myself,
>>>> but I've been short on time so haven't gotten around to it yet.
>>>> 
>>>> Richard
>>>> 
>>>> On Saturday, 20 October, 2012 at 10:12 AM, Rayson Ho wrote:
>>>> 
>>>> Hi Eric,
>>>> 
>>>> Sounds like it's also related to this problem reported by Scinet back in
>>>> July:
>>>> 
>>>> http://www.open-mpi.org/community/lists/users/2012/07/19762.php
>>>> 
>>>> And I think I found the issue, but I still have not followed up with
>>>> the ROMIO guys yet. And I was not sure if Scinet was waiting for the
>>>> fix or not - next time I visit U of Toronto, I will see if I can visit
>>>> the Scinet office and meet with the Scinet guys!
>>>> 
>>>> 
>>>> 
>>>> 
>>>> _______________________________________________
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> _______________________________________________
>>> devel mailing list
>>> de...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>> 
>> 
