Does this code work on one core (mpirun -np 1)? Are there any extra errors
when running in debug mode, or are you already running it in debug mode?
On Tuesday, June 7, 2016 at 7:02:42 PM UTC+2, Ehsan Esfahani wrote:
>
> Thanks for your response. I'm not sure it's related because, previously,
>
Jonathan,
You may have figured this out already -- you might have forgotten to add
hanging_node_constraints.distribute (localized_solution);
after you solve your system (see step-17 for example).
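For reference, the relevant part of step-17's solve() reads as follows
(the names follow step-17, not necessarily your program):

cg.solve (system_matrix, solution, system_rhs, preconditioner);

// Copy the distributed solution into a vector local to this process,
// set the constrained DoFs to their interpolated values, and copy back.
Vector<double> localized_solution (solution);
hanging_node_constraints.distribute (localized_solution);
solution = localized_solution;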
Artur
Bastien,
as there already is the polynomial class HermiteInterpolation, you should
create a new FiniteElement based on
TensorProductPolynomials. A good way might be to copy
FE_Q_Base and modify it accordingly.
Of course, you need to think about how to interpolate values using this
element, how
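As an illustration, a minimal sketch of the polynomial part, assuming the
standard cubic Hermite basis (the FiniteElement machinery copied from
FE_Q_Base remains the larger part of the work):

#include <deal.II/base/polynomial.h>
#include <deal.II/base/tensor_product_polynomials.h>

using namespace dealii;

// Generate the 1D cubic Hermite interpolation basis and form its tensor
// product; these would become the shape functions of the new element in 2D.
const std::vector<Polynomials::Polynomial<double>> hermite_1d =
  Polynomials::HermiteInterpolation::generate_complete_basis (3);
const TensorProductPolynomials<2> hermite_tensor_basis (hermite_1d);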
Bastien,
GridTools::collect_periodic_faces stores only the periodic faces on the
coarsest level.
This information is used in DoFTools::make_periodicity_constraints to
create the corresponding constraints
on the active set of DoFs.
My best guess would be that you don't set the boundary_ids on
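For concreteness, a minimal sketch of the constraint-building step,
assuming boundary ids 0 and 1 mark the periodic pair in the x-direction
(the names are placeholders, not code from this thread):

// Identify the DoFs on the two periodic boundaries with each other.
ConstraintMatrix constraints;
DoFTools::make_periodicity_constraints (dof_handler,
                                        /*b_id1=*/0,
                                        /*b_id2=*/1,
                                        /*direction=*/0,
                                        constraints);
constraints.close ();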
I have run this code in Eclipse in debug mode. It gets terminated, so I
cannot track the error; the error printed on the console is the same as
the one I have mentioned here.
On Tuesday, June 7, 2016 at 2:47:34 PM UTC-5, Jean-Paul Pelteret wrote:
>
> Does this code work on one core (mpirun -np 1)?
Hi Bastien,
There are a few potential problems here.
1. You're only refining cells along the top edge of your domain.
Periodic boundaries can only have a difference of 1 refinement level
between pairs of faces.
2. You may not be colouring any of your boundaries. This should be a
I agree with Daniel. So in this case, you probably just need to change your
order of operations and set the boundary_ids before doing any refinement
(a sketch of this ordering follows the quoted message below).
On Wednesday, June 8, 2016 at 2:07:09 AM UTC+2, Daniel Arndt wrote:
>
> Bastien,
>
> GridTools::collect_periodic_faces stores only the periodic
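A minimal sketch of that order of operations, with placeholder boundary
ids and refinement level (not code from this thread):

// colorize=true assigns boundary ids to the coarse mesh immediately.
GridGenerator::hyper_cube (triangulation, 0.0, 1.0, /*colorize=*/true);

// Collect the periodic face pairs while the mesh is still coarse...
std::vector<GridTools::PeriodicFacePair<
  typename Triangulation<dim>::cell_iterator>> matched_pairs;
GridTools::collect_periodic_faces (triangulation,
                                   /*b_id1=*/0, /*b_id2=*/1,
                                   /*direction=*/0, matched_pairs);
triangulation.add_periodicity (matched_pairs);

// ...and only refine afterwards.
triangulation.refine_global (3);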
Hi Marco,
That's a good question. It doesn't look like it's explicitly documented
anywhere which packages deal.II *requires* Trilinos to be built with
(i.e. which it utilises directly or offers some wrapped functionality
for). I'll make a note to do this on the GitHub repository.
Hints to the
Hi Jean-Paul,
Concerning the problems you've pointed out:
1. OK, I'll take that into account. From now on, let's try to solve this
problem without local refinement; I no longer do it here.
2. Thanks for the advice, I've changed my checks.
3. I understand. However, I am almost sure that an
Hi Artur -
Unfortunately, I don't have any hanging node constraints in the model. I
do have constraints, but there aren't any along the mesh partition. I
still can't figure out why my output looks the way it does. The solve
function looks like this:
template <int dim>
unsigned int EigenvalueProblem<dim>::solve ()
{
Thanks for your response. I'm not sure it's related because, previously,
without a distributed triangulation, I modified step-25 in order to solve
my problem (the Ginzburg-Landau equation), and in that code I didn't use
those lines of code, and it runs without errors. Also, I don't need to
Hi Daniel.
Thank you for your answer; it worked well to enforce my periodic boundary
conditions on the 1D beam.
Thus, I'm back to the 2D neo-Hookean case. Here is the code of my
*make_grid()* function:
template <int dim>
void Solid<dim>::make_grid()
{
  GridGenerator::hyper_cube(triangulation, 0.0,
Hello Daniel,
Although we can solve the elastic and heat equations separately, we
still need to feed the stress computed from the elastic equation into the
heat equation at every time step and keep updating the stress field.
I was wondering, when we have two different DoFHandlers for each of the
Hamed,
assuming that both DoFHandlers are based on the same triangulation, you can
do something like the following:
FEValues<dim> fe_stress_values(...);
stress_cell = dof_handler_stress.begin_active();
temperature_cell = dof_handler_temperature.begin_active();
for (; stress_cell !=
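A minimal sketch of how that loop might continue (the FE objects,
quadrature, and handler names are assumptions, not code from this reply):

FEValues<dim> fe_stress_values (fe_stress, quadrature,
                                update_values);
FEValues<dim> fe_temperature_values (fe_temperature, quadrature,
                                     update_values);

// Both DoFHandlers are built on the same Triangulation, so their
// active cell iterators can be advanced in lockstep.
auto stress_cell      = dof_handler_stress.begin_active ();
auto temperature_cell = dof_handler_temperature.begin_active ();
for (; stress_cell != dof_handler_stress.end ();
     ++stress_cell, ++temperature_cell)
  {
    fe_stress_values.reinit (stress_cell);
    fe_temperature_values.reinit (temperature_cell);
    // ...evaluate the stress at the quadrature points here and use it
    // in assembling the heat equation on temperature_cell...
  }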