Re: [deal.II] Implementation of Broyden's method

2018-12-04 Thread 'Maxi Miller' via deal.II User Group
What do you mean by "What I want to do with the matrix afterwards"? And I am not sure if I need it element-by-element; I just would like to implement a cheaper update method than the full recalculation.

On Tuesday, November 27, 2018 at 22:21:00 UTC+1, Wolfgang Bangerth wrote:
> On 11/27/2018
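For context, the cheap update being discussed is Broyden's rank-one ("good") update of an approximate Jacobian B_k (standard textbook notation, not quoted from this thread):

    B_{k+1} = B_k + ( (\Delta y_k - B_k \Delta x_k) \Delta x_k^T ) / ( \Delta x_k^T \Delta x_k ),

with \Delta x_k = x_{k+1} - x_k and \Delta y_k = F(x_{k+1}) - F(x_k); each step then costs one matrix-vector product and a rank-one correction instead of a full reassembly of the Jacobian.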

Re: [deal.II] Implementation of Broyden's method

2018-12-04 Thread Daniel Arndt
Maxi,

> What do you mean by "What I want to do with the matrix afterwards"? And I
> am not sure if I need it element-by-element; I just would like to implement
> a cheaper update method than the full recalculation.

You can always obtain a matrix representation of a linear operator by
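The reply is truncated here; one common way to make such a matrix representation explicit, when the operator only exposes a matrix-vector product, is to apply it to all unit vectors and store the results as columns. A minimal sketch along these lines (the function name and the use of FullMatrix are illustrative, not taken from the thread):

    #include <deal.II/lac/full_matrix.h>
    #include <deal.II/lac/vector.h>

    // Build the n x n matrix of a linear operator column by column by
    // applying it to the unit vectors e_j. 'op' only needs to provide
    // vmult(dst, src); this costs n operator applications and is only
    // practical for small problems or for debugging.
    template <typename OperatorType>
    dealii::FullMatrix<double>
    matrix_representation(const OperatorType &op, const unsigned int n)
    {
      dealii::FullMatrix<double> M(n, n);
      dealii::Vector<double>     e_j(n), column(n);

      for (unsigned int j = 0; j < n; ++j)
        {
          e_j    = 0.;
          e_j[j] = 1.;
          op.vmult(column, e_j); // column j of the matrix

          for (unsigned int i = 0; i < n; ++i)
            M(i, j) = column[i];
        }
      return M;
    }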

[deal.II] Several MPI jobs in a single machine

2018-12-04 Thread Daniel Garcia-Sanchez
Hi, I'm fine-tuning a simulation in the frequency domain (similar to https://github.com/dealii/dealii/pull/6747). Because it is in the frequency domain, I can parallelize:
1. Using MPI
2. Running several simulations in parallel for different independent frequency ranges.
3. A

[deal.II] Fast access to position of non-zero elements in sparsity pattern

2018-12-04 Thread 'Maxi Miller' via deal.II User Group
For the implementation of the quasi-Newton method according to Schubert, which is defined as

C_1 = C_0 - sum_i( u_i u_i^T (C_0 p_i - y/t) (p_i^T / (p_i^T p_i)) )

with p_i the vector p whose elements are set to zero wherever column i of the matrix C_0 has zero-valued entries, I need the position of zero-
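The snippet breaks off here, but for the "position of the non-zero elements" part of the question, a deal.II SparsityPattern can be walked row by row through its iterators. A small self-contained sketch (the 5x5 tridiagonal pattern is only a placeholder for the pattern of the real Jacobian approximation C_0):

    #include <deal.II/lac/dynamic_sparsity_pattern.h>
    #include <deal.II/lac/sparsity_pattern.h>

    #include <iostream>

    int main()
    {
      using namespace dealii;

      // A small 5x5 tridiagonal pattern, only as a stand-in for the
      // sparsity pattern of the real matrix C_0.
      DynamicSparsityPattern dsp(5, 5);
      for (unsigned int i = 0; i < 5; ++i)
        for (unsigned int j = 0; j < 5; ++j)
          if (i == j || i + 1 == j || j + 1 == i)
            dsp.add(i, j);

      SparsityPattern sparsity;
      sparsity.copy_from(dsp);

      // Walk each row of the pattern; the iterators give direct access
      // to the column indices of the stored (structurally non-zero)
      // entries, which is the kind of information needed to build the
      // masked vectors p_i of Schubert's update.
      for (unsigned int row = 0; row < sparsity.n_rows(); ++row)
        {
          std::cout << "row " << row << " has non-zero columns:";
          for (auto it = sparsity.begin(row); it != sparsity.end(row); ++it)
            std::cout << ' ' << it->column();
          std::cout << '\n';
        }
    }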

Re: [deal.II] Fast access to position of non-zero elements in sparsity pattern

2018-12-04 Thread Wolfgang Bangerth
On 12/4/18 6:19 AM, 'Maxi Miller' via deal.II User Group wrote:
> For the implementation of the quasi-Newton method according to Schubert,
> which is defined as
>
> C_1 = C_0 - sum_i( u_i u_i^T (C_0 p_i - y/t) (p_i^T / (p_i^T p_i)) )
>
> with p_i the vector p with all elements set to zero where column

Re: [deal.II] Implementation of Broyden's method

2018-12-04 Thread Bruno Turcksin
Maxi,

On Tuesday, December 4, 2018 at 3:48:41 AM UTC-5, Maxi Miller wrote:
> What do you mean by "What I want to do with the matrix afterwards"? And
> I am not sure if I need it element-by-element; I just would like to
> implement a cheaper update method than the full recalculation.

If

Re: [deal.II] Change in the behavior of Boundary Id interpretation from GMSH 2 to GMSH 4?

2018-12-04 Thread Daniel Arndt
Bruno,

I created a corresponding issue in the GitHub repository at https://github.com/dealii/dealii/issues/7501.

Best,
Daniel

Re: [deal.II] Several MPI jobs in a single machine

2018-12-04 Thread Wolfgang Bangerth
On 12/4/18 4:30 AM, Daniel Garcia-Sanchez wrote:
> Hi,
>
> I'm fine-tuning a simulation in the frequency domain (similar to
> https://github.com/dealii/dealii/pull/6747). Because it is in the frequency
> domain, I can parallelize:
>
> 1. Using MPI
> 2. Running several simulations in parallel

[deal.II] Re: Several MPI jobs in a single machine

2018-12-04 Thread Daniel Garcia-Sanchez
Hi, I think that I found the explanation/solution, but I would be happy if somebody with experience with OpenMPI and Slurm could comment on this!

On Tuesday, December 4, 2018 at 12:30:15 PM UTC+1, Daniel Garcia-Sanchez wrote:
> But it is striking that if I run 16 MPI jobs of one process per
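As a general point about OpenMPI (standard mpirun options, not quoted from this thread): mpirun applies a default process-binding policy, so several independent "mpirun -n 1" jobs started on the same node can end up pinned to the same core, while a plain "./executable" is left unbound. The actual placement can be inspected with "mpirun --report-bindings -n 1 ./executable", and binding can be disabled with "mpirun --bind-to none -n 1 ./executable".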

Re: [deal.II] cmake linker errors using spack view

2018-12-04 Thread Praveen C
Hello Daniel,

In this case, the duplicates are because I have symlinks to the same libraries, and CMake is adding both paths to the linker. This should not cause any problem. I have fixed it by not using the symlinked libraries, so it is fine now.

Thank you,
praveen

> On 04-Dec-2018, at 5:02

Re: [deal.II] Several MPI jobs in a single machine

2018-12-04 Thread Wolfgang Bangerth
On 12/4/18 7:58 AM, Daniel Garcia-Sanchez wrote:
>
> Although if I run on the same 16-core machine:
>
> * 16 instances of ./executable
> * 16 instances of mpirun -n 1 ./executable
>
> then the results are different. I think that the reason is that mpirun
> (OpenMPI) will bind all the

Re: [deal.II] Several MPI jobs in a single machine

2018-12-04 Thread Daniel Garcia-Sanchez
> There shouldn't be a difference between running a job via
>   ./executable
> and
>   mpirun -n 1 ./executable.
>
> Are your timing results reproducible?

My timing results are reproducible. As expected, there is no difference between ./executable and mpirun -n 1 ./executable.

[deal.II] Stress divergence at quadrature points in the context of large deformation elastics

2018-12-04 Thread Riku Suzuki
Hi all,

I have a hyperelastic code similar to step-44. I am able to compute the Cauchy stress \sigma at the quadrature points. Based on this, how do I further compute the divergence (w.r.t. the current configuration) of the Cauchy stress as a vector? My current idea is the following:
1. Define the stress
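For reference, the quantity asked for is, in components with respect to the current (spatial) coordinates x,

    (div \sigma)_i = \sum_j \partial \sigma_{ij} / \partial x_j ,

so besides the values of \sigma at the quadrature points one also needs the spatial derivatives of its components there.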

[deal.II] Cooperation proposition

2018-12-04 Thread BornToCreate
Hi,

I'm looking for people passionate about programming and science, generally geeks. I am after an initial conversation with an investor. The profile of my company is tooling, prototyping, 3D printing and HPC, and in the future military equipment production. The initial plan is to locate my company in Jelenia Góra in

Re: [deal.II] Cooperation proposition

2018-12-04 Thread Chinedu Nwaigwe
Yes, I am interested.

On Wed, Dec 5, 2018, 03:53 BornToCreate wrote:
> Hi,
>
> I'm looking for people passionate about programming and science, generally geeks.
>
> I am after an initial conversation with an investor.
>
> The profile of my company is tooling, prototyping, 3D printing and HPC, and in the
> future

Re: [deal.II] Cooperation proposition

2018-12-04 Thread Jean-Paul Pelteret
Dear Chinedu,

Please be aware that the original post was deleted because it was unsolicited spam.

Best,
Jean-Paul

On Wednesday, December 5, 2018 at 8:05:02 AM UTC+1, Chinedu Nwaigwe wrote:
> Yes, I am interested.