Relevant backtrace is:

    frame #4: 0x00000001085b8c86 libdeal_II.g.8.5.0-pre.dylib`void dealii::deal_II_exceptions::internals::issue_error<dealii::LinearAlgebra::distributed::Vector<double>::ExcNonMatchingElements>(handling=abort_on_exception, file="/Users/davydden/libs-sources/deal.ii/davydden/include/deal.II/lac/la_parallel_vector.templates.h", line=686, function="void dealii::LinearAlgebra::distributed::Vector<double>::compress_finish(::dealii::VectorOperation::values)", cond="*read_position == Number() || std::abs(local_element(j) - *read_position) <= std::abs(local_element(j)) * 1000. * std::numeric_limits<real_type>::epsilon()", exc_name="ExcNonMatchingElements(*read_position, local_element(j), part.this_mpi_process())", e=ExcNonMatchingElements @ 0x00007fff5fbfa5e0) + 134 at exceptions.h:285
    frame #5: 0x00000001085b6d41 libdeal_II.g.8.5.0-pre.dylib`dealii::LinearAlgebra::distributed::Vector<double>::compress_finish(this=0x00000001261087c8, operation=insert) + 2401 at la_parallel_vector.templates.h:681
    frame #6: 0x00000001085b516f libdeal_II.g.8.5.0-pre.dylib`dealii::LinearAlgebra::distributed::Vector<double>::compress(this=0x00000001261087c8, operation=insert) + 47 at la_parallel_vector.templates.h:494
    frame #7: 0x000000010a150a73 libdeal_II.g.8.5.0-pre.dylib`dealii::parallel::distributed::SolutionTransfer<3, dealii::LinearAlgebra::distributed::Vector<double>, dealii::DoFHandler<3, 3> >::interpolate(this=0x00007fff5fbfb5b8, all_out=size=120) + 2451 at solution_transfer.cc:183

but that's not really helpful.

On Monday, January 2, 2017 at 5:50:32 PM UTC+1, Denis Davydov wrote:
>
> Hi all,
>
> I came across a very weird problem today. When using 
> p::d::SolutionTransfer during refinement, I get the following error 
> during the interpolate step at the second refinement cycle:
>
> 61: An error occurred in line <686> of file </Users/davydden/spack/var/spack/stage/dealii-develop-7douosfpwd62254laf4et7n6x64b62fh/dealii/include/deal.II/lac/la_parallel_vector.templates.h> in function
> 61:     void dealii::LinearAlgebra::distributed::Vector<double>::compress_finish(::dealii::VectorOperation::values)
> 61: The violated condition was:
> 61:     *read_position == Number() || std::abs(local_element(j) - *read_position) <= std::abs(local_element(j)) * 1000. * std::numeric_limits<real_type>::epsilon()
> 61: Additional information:
> 61:     Called compress(VectorOperation::insert), but the element received from a remote processor, value 2.059635156599626e-06, does not match with the value 2.059635156600494e-06 on the owner processor 2
>
> I am quite certain that SolutionTransfer gets parallel vectors with (i) 
> constraints distributed and (ii) ghost values updated. That is accomplished by
>
> constraints.distribute (solution_vectors[i]);
> solution_vectors[i].update_ghost_values();
>
>
> and a minor trick (applied prior to those lines) to change the ghost index 
> set when combining matrix-free and Kelly, as discussed here: 
> https://groups.google.com/d/msg/dealii/-FMHGdn18fE/w6YotFXRAAAJ
>
> Between this point and the use of SolutionTransfer, only the Kelly 
> estimator, DataOut, and cell marking are involved.
>
>
> The issue appears only for a very specific problem and does not always 
> happen. So far it looks like a hidden *Heisenbug* that is triggered only 
> in certain cases. 
> Not knowing the details of the p::d::SolutionTransfer class, I am not 
> quite sure where to dig next.  
> Given that ghost values should be consistent among MPI processes, 
> I don't see how the interpolation during solution transfer 
> could lead to such discrepancies.
>
> Regards,
> Denis
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.