Re: [deal.II] bug in program only with adaptive mesh refinement

2017-03-07 Thread Wolfgang Bangerth
Hi Wolfgang, a brief update -- before, I was using ConstraintMatrix::distribute_local_to_global in the assembly of both the derivative vector and the Hessian matrix. I can see how that would make the computed vector no longer represent the derivative of a functional, since you're changing
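[For context, a minimal sketch of the assembly pattern under discussion, assuming a standard deal.II setup of that era. Only the ConstraintMatrix, SparseMatrix, and Vector calls are actual library API; the helper function and all variable names (local_matrix, system_rhs, ...) are illustrative placeholders.]

#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>
#include <vector>

void copy_local_to_global(
  const dealii::FullMatrix<double> &local_matrix,
  const dealii::Vector<double>     &local_rhs,
  const std::vector<dealii::types::global_dof_index> &local_dof_indices,
  const dealii::ConstraintMatrix   &constraints,
  dealii::SparseMatrix<double>     &system_matrix,
  dealii::Vector<double>           &system_rhs)
{
  // Folding the constraints in during assembly: fine for solving, but the
  // assembled vector is then the *constrained* residual, which is why it
  // no longer represents the raw derivative of the functional.
  constraints.distribute_local_to_global(local_matrix, local_rhs,
                                         local_dof_indices,
                                         system_matrix, system_rhs);

  // Alternative: accumulate the raw entries and apply the constraints in a
  // separate step afterwards (e.g. via ConstraintMatrix::condense):
  // system_matrix.add(local_dof_indices, local_matrix);
  // system_rhs.add(local_dof_indices, local_rhs);
}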

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Hi Michael, I've opened an issue for this. The problem seems not to be with the actual examples *per se*, but rather with the current mechanism in place to build the documentation, which will collect whichever files are in these subdirectories and

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Michael Harmon
Hi Jean-Paul, yes, thanks! I removed all the CG examples except for goal_oriented_electroplasticity and my own, and it worked. I guess it is one of the CG examples. It takes a painfully long time to write the html on my puny MacBook Air :) - Mike On Tuesday, March 7, 2017 at 11:54:42 AM

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Thanks Wolfgang, a slight permutation of that seemed to work! I'll submit a PR in a moment. Michael, can you tell me if you've built any of the code gallery examples? I think that this might be the issue. If you have, can you go into those examples' directories and run "make distclean", then

[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Daniel Arndt
Seyed, after just a quick glance over your approach, there seem to be some more issues you can easily stumble upon: - n_vertices() only gives you the number of vertices for one process, not the global one. In particular, you can't rely on this being the same for all processes.
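[A short sketch of combining such a per-process count into a global value. It assumes a parallel::distributed::Triangulation as in the original question; Utilities::MPI::sum and get_communicator() are real deal.II API, while the wrapper function is hypothetical.]

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>

template <int dim>
unsigned int summed_vertex_count(
  const dealii::parallel::distributed::Triangulation<dim> &triangulation)
{
  // Per-process value only, as pointed out above.
  const unsigned int local_count = triangulation.n_vertices();

  // Utilities::MPI::sum is an MPI_Allreduce under the hood, so the result
  // is available on every process. Caveat: vertices shared between
  // processes are counted once per process, so this sum is only an upper
  // bound on the true global vertex count, not an exact value.
  return dealii::Utilities::MPI::sum(local_count,
                                     triangulation.get_communicator());
}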

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Wolfgang Bangerth
On 03/07/2017 09:30 AM, Jean-Paul Pelteret wrote: Matthias, is there any way to disable the deletion of doxygen.log when a build of the documentation fails? In doc/doxygen/CMakeLists.txt, line ~230, you have ADD_CUSTOM_COMMAND( OUTPUT ${CMAKE_BINARY_DIR}/doxygen.log COMMAND

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Jean-Paul Pelteret
Dear all, Ok, so it looks as though at least one of the code gallery examples is problematic. I nuked my build directory and moved all but one of the CG examples out of their subdirectory. As an added precaution, I disabled MathJax via -DDEAL_II_DOXYGEN_USE_MATHJAX=OFF \

[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread 'Seyed Ali Mohseni' via deal.II User Group
Dear all, With MPI_Allreduce it works like a charm. Thank you very much, everyone, especially Prof. Bangerth and Daniel. Kind regards, S. A. Mohseni

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Michael Harmon
Thanks! I am glad it wasn't just me!! Mike On Tuesday, March 7, 2017 at 10:26:43 AM UTC-5, Jean-Paul Pelteret wrote: > Hi Michael, I've just tried to build the documentation with the code gallery and have run into similar problems. I'm going to fiddle around to see if I can work out

Re: [deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Wolfgang Bangerth
On 03/07/2017 07:46 AM, 'Seyed Ali Mohseni' via deal.II User Group wrote: Now my question is: how can I store the data on all processors? Or how am I able to at least store my max_rank variable on all processors? The function you're looking for is called MPI_Allreduce. Best W.
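[A minimal, self-contained sketch of the suggested MPI_Allreduce pattern. The "ownership" criterion here is a hypothetical stand-in for the one in the original code; the MPI calls themselves are standard.]

#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int my_rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

  // Hypothetical criterion: pretend this process owns the cell in
  // question, so it contributes its own rank; a non-owner would
  // contribute -1 instead.
  const int local_value = my_rank;

  // Unlike MPI_Reduce, which leaves the result only on the root process,
  // MPI_Allreduce makes max_rank valid on every process afterwards.
  int max_rank = -1;
  MPI_Allreduce(&local_value, &max_rank, 1, MPI_INT, MPI_MAX,
                MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}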

Re: [deal.II] Re: Announcing the deal.II Code Gallery

2017-03-07 Thread Michael Harmon
I ran "make" again and attached the outputs fm the terminal into make.log I also ran "make install" and attached the outputs from the into make_install.log It seems they are failing at different points... but I'm not sure whats going wrong.. Thanks, Mike On Monday, March 6, 2017 at 10:58:28

[deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread 'Seyed Ali Mohseni' via deal.II User Group
I think I figured it out ;) After thinking about your suggestion, Prof. Bangerth, it came to my mind that the result of my max_rank variable is stored on a specific processor due to MPI_Reduce. That means I cannot access these data without currently being on the corresponding processor

Re: [deal.II] Re: Access specific element within a distributed triangulation

2017-03-07 Thread Wolfgang Bangerth
On 03/07/2017 05:31 AM, 'Seyed Ali Mohseni' via deal.II User Group wrote: Now the funny part is that if I set max_rank manually, e.g. max_rank = 3, it works, and for the currently owned rank I receive output in the terminal. Another thing is that MPI_Reduce creates somewhere some

[deal.II] Re: MeshWorker clarifications

2017-03-07 Thread Franco Milicchio
Thanks for the answer, Daniel. On Monday, March 6, 2017 at 7:49:53 PM UTC+1, Daniel Arndt wrote: >> So my first question is, should I avoid using this class and implement parallel loops by hand (via TBB or other means)? > "amandus"[1] is in fact based on MeshWorker. If you are trying to
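[For reference, a sketch of what such a hand-written parallel loop can look like using deal.II's WorkStream, the machinery MeshWorker itself builds on. The toy vector stands in for a real cell iterator range, and the ScratchData/CopyData structs are deliberately trivial; in a real assembly loop they would hold FEValues objects and local matrices.]

#include <deal.II/base/work_stream.h>
#include <vector>

struct ScratchData {};                  // per-thread temporaries
struct CopyData { double value = 0.; }; // per-item result

int main()
{
  std::vector<double> cell_data(100, 1.0);
  double sum = 0.;

  // The worker runs in parallel on chunks of the iterator range; the
  // copier is called sequentially, so accumulating into `sum` needs no
  // locking -- the same pattern MeshWorker uses for global assembly.
  dealii::WorkStream::run(
    cell_data.begin(), cell_data.end(),
    [](const std::vector<double>::iterator &it,
       ScratchData &, CopyData &copy)
      { copy.value = *it; },
    [&sum](const CopyData &copy)
      { sum += copy.value; },
    ScratchData(), CopyData());

  return 0;
}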