Fixed in the branch barry/fix-matload-mpidense-denseinfile and in next; it
will be put into master after testing.

  I couldn't put my fix into maint (the release version), so I recommend you
upgrade to the development version for this fix. Alternatively, you can fix
it manually by editing MatLoad_MPIDense_DenseInFile() and using MPI_Reduce()
to communicate the largest size that needs to be passed to the malloc(),
instead of using the size from the first process.
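
  For reference, a rough sketch of that manual fix. This is a sketch only:
the variable names m, N, vals, newmat and the use of PetscMalloc1() are
assumptions based on Matteo's description below, and the surrounding code in
MatLoad_MPIDense_DenseInFile() may differ in your version:

    MPI_Comm    comm;
    PetscMPIInt rank;
    PetscInt    m,N,m_max;
    PetscScalar *vals;

    ierr = PetscObjectGetComm((PetscObject)newmat,&comm);CHKERRQ(ierr);
    ierr = MPI_Comm_rank(comm,&rank);CHKERRQ(ierr);
    ierr = MatGetLocalSize(newmat,&m,NULL);CHKERRQ(ierr);
    ierr = MatGetSize(newmat,NULL,&N);CHKERRQ(ierr);
    /* rank 0 reads every process's rows into vals before sending them
       along, so its buffer must fit the largest local row count, not
       just its own; MPI_Reduce() with MPI_MAX gives rank 0 that count */
    ierr = MPI_Reduce(&m,&m_max,1,MPIU_INT,MPI_MAX,0,comm);CHKERRQ(ierr);
    if (!rank) {
      ierr = PetscMalloc1(m_max*N,&vals);CHKERRQ(ierr);
    } else {
      ierr = PetscMalloc1(m*N,&vals);CHKERRQ(ierr);
    }

Only rank 0 needs the enlarged buffer, which is why MPI_Reduce() to rank 0
suffices here rather than MPI_Allreduce().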

  Thanks for letting us know about the bug.

  Barry

> On Apr 9, 2015, at 6:02 AM, Matteo Aletti <[email protected]> wrote:
> 
> Hello,
> 
> I was trying to use the function MatLoad to read an MPI dense (square)
> matrix from a binary file.
> I got an error related to a memory problem (in parallel).
> I tried to locate the problem using gdb and I think it is in the function 
> MatLoad_MPIDense_DenseInFile.
> The master cannot execute the line 
>   ierr = PetscFree(vals);CHKERRQ(ierr);
> without an error. The other processors can.
> 
> I believe the error is in the allocation of the vals array: each processor
> allocates it with m*N elements, where N is the same for all of the procs
> and m is the local number of rows, which in my case is already set in the
> matrix.
> 
> The master uses the array vals to read its own data, but also the other
> processors' data. The problem is that, in my case, one processor has more
> rows than the master, and therefore vals is too short to store those
> values. I think the master should allocate it with size m_max*N, where
> m_max is the maximum number of local rows across all procs.
> 
> Thanks,
> Best,
> Matteo
