Dear Luca,
I am using version 9.2 and I am trying to follow the implementation from
your presentation at SISSA 2018, but it gives me an error. Below are the
lines I added to step-1. I want to implement K nearest neighbors. I will
work on your suggestion.
#include
Poi
Dear community
I have written the simple code below for solving a system using PETSc,
having defined
Vector incremental_displacement;
Vector accumulated_displacement;
in the class LargeStrainMechanicalProblem_OneField.
It turns out that this code produces a memory loss, quite significant since
Dear Heena,
here is a snippet to achieve what you want:
#include <deal.II/numerics/rtree.h>
namespace bgi = boost::geometry::index;
…
Point<2> p0;
const auto tree = pack_rtree(tria.get_vertices());
for (const auto &p : tree | bgi::adaptors::queried(bgi::nearest(p0, 3)))
  // do something with p, e.g.
  std::cout << p << std::endl;
KDTree needs nanoflann to be available. Did you compile deal.II with nanoflann
enabled? Check in the summary.log if DEAL_II_WITH_NANOFLANN is ON.
RTree, on the other hand, does not require nanoflann, as it is included with
boost (and it is faster than nanoflann).
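For reference, here is a minimal self-contained version of the RTree approach
(my own sketch, not from the original message; it assumes deal.II 9.1 or newer
and needs only the bundled boost, not nanoflann; tree.query() is the plain
Boost.Geometry call equivalent to the queried() adaptor above):

#include <deal.II/base/point.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/numerics/rtree.h>

#include <iostream>
#include <iterator>
#include <vector>

int main()
{
  using namespace dealii;
  namespace bgi = boost::geometry::index;

  // A small mesh whose vertices we want to search.
  Triangulation<2> tria;
  GridGenerator::hyper_cube(tria);
  tria.refine_global(3);

  // Pack all mesh vertices into an r-tree once...
  const auto tree = pack_rtree(tria.get_vertices());

  // ...then ask for the 3 vertices closest to a given point.
  const Point<2> p0(0.31, 0.72);
  std::vector<Point<2>> nearest;
  tree.query(bgi::nearest(p0, 3), std::back_inserter(nearest));
  for (const auto &p : nearest)
    std::cout << p << std::endl;
}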
L.
> On 24 Jul 2020, at 10:05
On Thu, Jul 23, 2020 at 5:14 PM Daniel Arndt wrote:
>
> You can do similarly,
>
> Quadrature q(fe.get_unit_support_points());
> FEValues fe_values (..., q, update_q_points);
> for (const auto& cell)
> ...
> points = fe_values.get_quadrature_points();
> fe_values.get_function_values(values);
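Spelled out a little more, the recipe quoted above might look like this (my
own sketch, assuming a scalar element fe with support points such as FE_Q, a
DoFHandler<dim> dof_handler, and a solution vector solution):

Quadrature<dim> quadrature(fe.get_unit_support_points());
FEValues<dim>   fe_values(fe,
                          quadrature,
                          update_values | update_gradients |
                            update_quadrature_points);

std::vector<double>         values(quadrature.size());
std::vector<Tensor<1, dim>> gradients(quadrature.size());

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    fe_values.reinit(cell);

    // Because the quadrature points are the unit support points, these are
    // the solution value, its gradient, and the location at every nodal
    // point of the current cell.
    const std::vector<Point<dim>> &points = fe_values.get_quadrature_points();
    fe_values.get_function_values(solution, values);
    fe_values.get_function_gradients(solution, gradients);
  }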
Dear Luca,
Thank you very much. It now works both
ways. Thanks for the advice.
Regards,
Heena
On Fri, Jul 24, 2020 at 12:31 PM luca.heltai wrote:
> KDTree needs nanoflann to be available. Did you compile deal.II with
> nanoflann enabled? Check in the summary.log if
Dear community,
if I am not mistaken in my analysis, it turns out that the memory loss is
caused by this call:
BiCG.solve (this->system_matrix, distributed_incremental_displacement,
this->system_rhs, preconditioner);
because if I turn it off, the top command shows no change in the RES column at all.
M
Yuesun,
Apparently, CMake was able to find the file petscvariables, but not the
include directories or the library.
Can you search for "libpetsc.so" yourself? Our CMake find module tries to
find this library in {PETSC_DIR}/lib or {PETSC_DIR}/lib64.
See if you can adjust PETSC_DIR accordingly.
Bes
Alberto,
Have you tried running valgrind (in parallel) on your code? Admittedly, I
expect quite a few false positives from the MPI library, but it should
still help.
Best,
Daniel
On Fri, 24 Jul 2020 at 12:07, Alberto Salvadori <
alberto.salvad...@unibs.it> wrote:
> Dear community,
>
>
Dear Daniel,
Thank you for the instructions! I gave the architecture directory, which is
a sub-directory: /home/yjin6/petsc/arch-linux-c-debug. It returns a message
like this:
***
Dear all,
This problem has been solved. I copied the petscversion.h file to the
arch/include folder, so CMake found all the PETSc files and finished the
compilation.
Best regards
On Fri, Jul 24, 2020 at 3:17 PM yuesu jin wrote:
> Dear Daniel,
> Thank you for the instruction! I gave the architec
On 7/23/20 12:07 PM, Xuefeng Li wrote:
Well, the above function calculates the gradients of a finite element
function at the quadrature points of a cell, not at the nodal points of a
cell. Such a need arises in the following situation:
for ( x in vector_of_nodal_points )
  v(x) = g(x, u(x), grad u(x))
On 7/23/20 11:47 AM, Daniel Arndt wrote:
McKenzie,
I'm interested in applying a non-homogeneous Dirichlet boundary condition
to a specific edge. However, I'm unsure how to identify or specify a
particular edge or face to add the boundary condition to. Could you help
clear this up
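In case it helps, the usual pattern in deal.II is to give the faces in
question a boundary id and then hand that id to
VectorTools::interpolate_boundary_values(). A rough sketch (my own, with a
made-up geometric test and a constant boundary value; it assumes a
Triangulation<dim> triangulation, a DoFHandler<dim> dof_handler, and the
usual system_matrix/solution/system_rhs objects):

// Mark all boundary faces lying on the plane x = 1 with boundary id 1.
// (The geometric test is only an example; replace it with whatever
// identifies your edge/face.)
for (const auto &cell : triangulation.active_cell_iterators())
  for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
    if (cell->face(f)->at_boundary() &&
        std::abs(cell->face(f)->center()[0] - 1.0) < 1e-12)
      cell->face(f)->set_boundary_id(1);

// Impose u = 2 on all faces carrying boundary id 1.
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
                                         1,
                                         Functions::ConstantFunction<dim>(2.),
                                         boundary_values);
MatrixTools::apply_boundary_values(boundary_values,
                                   system_matrix,
                                   solution,
                                   system_rhs);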
On 7/24/20 3:32 AM, Alberto Salvadori wrote:
It turns out that this code produces a memory loss, quite significant since I
am solving my system thousands of times, eventually inducing the run to fail.
I am not sure what is causing this issue and how to solve it, maybe more
experienced users t