Hello Dr. Arndt,
The above fix resolved the issue in the minimal example. Thanks a lot for
providing the fix.
Best,
Sambit
On Tuesday, January 23, 2018 at 6:16:02 AM UTC-6, Daniel Arndt wrote:
>
> Sambit,
>
> Please check whether https://github.com/dealii/dealii/pull/5779 fixes the
> issue for you.
Hi Wolfgang
Here is the program; I did nothing but add a
DoFRenumbering::Cuthill_McKee(dof_handler) call at line 251. I tried to debug
with lldb but did not gain any useful information. I will check the video
again to make sure I was doing it right.
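In other words, relative to step-40's setup_system() the only change is the
following (a minimal sketch using the standard step-40 variable names, which
may differ slightly from my actual file):

// in setup_system(), right after distributing the DoFs
// (DoFRenumbering is declared in <deal.II/dofs/dof_renumbering.h>)
dof_handler.distribute_dofs (fe);
DoFRenumbering::Cuthill_McKee (dof_handler);   // the single added call
locally_owned_dofs = dof_handler.locally_owned_dofs ();
DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);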
Thank you very much!
Jie
On Tue, Jan 23, 2018 at
On 01/23/2018 01:16 PM, Jie Cheng wrote:
These are just warnings -- what happens if you run the executable?
If I do not modify step-40.cc, it runs fine both in serial and parallel. After
I add DoFRenumbering to it, it crashes in parallel. Does DoFRenumbering have
any dependencies?
Not
On 01/23/2018 02:13 PM, Bruno Turcksin wrote:
mypath/dealii/source/lac/scalapack.cc:243:91: error: there are no
arguments to ‘MPI_Comm_create_group’ that depend on a template parameter,
so a declaration of ‘MPI_Comm_create_group’ must be available [-fpermissive]
ierr =
On 01/23/2018 01:59 PM, 'Maxi Miller' via deal.II User Group wrote:
Assuming I want to use either MPI_Bcast or a function from Boost::MPI, what do
I have to do to initialize them, and what is already done by deal.II?
If you initialize the MPI system as we do, for example, in the main() function
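For reference, a minimal sketch of that initialization as done in the MPI
tutorial programs such as step-40 (the variable name is illustrative):

#include <deal.II/base/mpi.h>

int main (int argc, char *argv[])
{
  // Initializes MPI (and PETSc/SLEPc, if deal.II was configured with them);
  // MPI_Finalize() is called automatically when this object goes out of scope.
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv, 1);

  // From here on, raw MPI calls such as MPI_Bcast, or Boost.MPI, can be used
  // on MPI_COMM_WORLD without any further setup.
  return 0;
}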
Juan Carlos,
On Tuesday, January 23, 2018 at 3:12:20 PM UTC-5, Juan Carlos Araujo
Cabarcas wrote:
>
> [ 50%] Building CXX object
> source/fe/CMakeFiles/obj_fe_debug.dir/fe_poly.cc.o
> mypath/dealii/source/lac/scalapack.cc: In member function ‘void
>
Assuming I want to use either MPI_Bcast or a function from Boost::MPI, what
do I have to do to initialize them, and what is already done by deal.II?
Thanks!
On Tuesday, January 23, 2018 at 19:18:27 UTC+1, Wolfgang Bangerth wrote:
>
> On 01/23/2018 06:40 AM, 'Maxi Miller' via deal.II User Group
Hi Wolfgang
> These are just warnings -- what happens if you run the executable?
>
If I do not modify step-40.cc, it runs fine both in serial and parallel.
After I add DoFRenumbering to it, it crashes in parallel. Does
DoFRenumbering have any dependencies? As I posted in previous messages,
Dear all,
I am trying to install deal.II from the GIT repository with the following
features:
petsc_ver='3.6.0';
trilinos_ver='12.4.2';
git clone https://github.com/dealii/dealii.git dealii
cmake \
-DTRILINOS_DIR=${install_dir}/trilinos-${trilinos_ver}
On 01/23/2018 10:35 AM, Dulcimer0909 wrote:
If I go ahead and replace the code so that it does a cell-by-cell assembly, I
am a bit lost on how I would store the old_solution (U^(n-1)) for each cell
and retrieve it during the assembly of the RHS.
Dulcimer -- can you elaborate? It's not
Hello Professor,
Thanks for the reply.
I have been struggling with this issue for 5 days now. I raised a ticket
on the XSEDE forum on 18th Jan.
The technical team believed everything was fine on their side and
advised me to reinstall with some different modules loaded. They were also
On 01/22/2018 09:17 PM, Jie Cheng wrote:
I've reinstalled MPICH and did a clean build of p4est, PETSc, and deal.II, but
this problem still exists. At the linking stage of building deal.II, I got warnings:
[526/579] Linking CXX shared library lib/libdeal_II.9.0.0-pre.dylib
ld: warning: could not
On 01/23/2018 06:40 AM, 'Maxi Miller' via deal.II User Group wrote:
So, a solution would be calling MPI_Bcast() after every call in the if() block
in the run() function? Thanks!
Yes. After each if-statement, process 0 has to broadcast the information it
has computed to all of the processors
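A minimal sketch of that pattern (compute_something() is a hypothetical
placeholder for whatever rank 0 actually computes; MPI_Bcast and
Utilities::MPI::this_mpi_process are the real API):

// Only rank 0 does the work inside the if-block ...
double result = 0.0;
if (dealii::Utilities::MPI::this_mpi_process (MPI_COMM_WORLD) == 0)
  result = compute_something ();  // hypothetical placeholder

// ... and afterwards broadcasts the value; every process, including rank 0,
// must make the matching MPI_Bcast call with the same root.
MPI_Bcast (&result, 1, MPI_DOUBLE, /*root=*/0, MPI_COMM_WORLD);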
Hello all,
An additional question regarding this thread:
If I go ahead and replace the code so that it does a cell-by-cell assembly,
I am a bit lost on how I would store the old_solution (U^(n-1)) for each
cell and retrieve it during the assembly of the RHS. I would be
grateful if anyone could help.
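One common way to do this (a sketch using the usual tutorial names fe_values,
old_solution, cell_rhs; not tied to any particular program) is to keep
U^(n-1) in a global vector and evaluate it at the quadrature points of each
cell while assembling:

std::vector<double> old_solution_values (n_q_points);

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    fe_values.reinit (cell);
    cell_rhs = 0;

    // Evaluate the previous time step's solution U^(n-1), stored in the
    // global vector old_solution, at this cell's quadrature points.
    fe_values.get_function_values (old_solution, old_solution_values);

    for (unsigned int q = 0; q < n_q_points; ++q)
      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        cell_rhs (i) += old_solution_values[q] *
                        fe_values.shape_value (i, q) *
                        fe_values.JxW (q);

    // ... distribute cell_rhs into the global right hand side as usual ...
  }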
Hello, my name is Markus.
Last week I started my first project with deal.II. I used step-8
(linear elasticity) as a starting point, built my own grid_geometry, and it
worked fine. Then I added new boundary conditions:
hanging_node_constraints.condense (system_matrix);
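For context, the surrounding step-8-style sequence this line belongs to looks
like the following (a sketch following the tutorial, with boundary id 0 and a
zero Dirichlet value purely as examples):

// Condense the hanging-node constraints into matrix and right hand side ...
hanging_node_constraints.condense (system_matrix);
hanging_node_constraints.condense (system_rhs);

// ... then apply the Dirichlet boundary values.
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values (dof_handler,
                                          0,  // boundary id
                                          Functions::ZeroFunction<dim>(dim),
                                          boundary_values);
MatrixTools::apply_boundary_values (boundary_values,
                                    system_matrix,
                                    solution,
                                    system_rhs);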
So, a solution would be calling MPI_Bcast() after every call in the
if() block in the run() function? Thanks!
On Tuesday, January 23, 2018 at 14:31:58 UTC+1, Bruno Turcksin wrote:
>
> Hi,
>
> On Tuesday, January 23, 2018 at 7:53:16 AM UTC-5, Maxi Miller wrote:
>
>> But now it looks as if only
Hi,
On Tuesday, January 23, 2018 at 7:53:16 AM UTC-5, Maxi Miller wrote:
> But now it looks as if only the first node gets the result of the
> calculations, but the others do not, instead defaulting to the default
> values of the calculation function when not initialized. Is there a way I
Sambit,
Please check whether https://github.com/dealii/dealii/pull/5779 fixes the
issue for you.
Best,
Daniel
On Tuesday, January 16, 2018 at 22:06:55 UTC+1, Sambit Das wrote:
>
> Thank you, Dr. Arndt.
>
> Best,
> Sambit
>
> On Tuesday, January 16, 2018 at 11:16:08 AM UTC-6, Daniel Arndt wrote:
>>
>>
Hello everyone,
For this discussion, I just want to post a working setup on some clusters
that uses Intel MKL instead of BLAS and LAPACK. Here is the script,
"thrilino_setup.sh":
mkdir build
cd build
cmake \