Samuel, thanks for the update.
Glad the workaround gave a usable build.
BTW - looking at configure.log - I think the option LDFLAGS="-dynamic"
is interfering with some of these tests - in this case the default
-fPIC test - hence the need for the workaround.
I think the problem won't appear
On 2018-04-25 09:47 AM, Matthew Knepley wrote:
On Wed, Apr 25, 2018 at 12:40 PM, Danyang Su wrote:
Hi Matthew,
In the worst case, every node/cell may have a different label.
Do not use Label for this. It's not an appropriate thing. If every cell is
different, just use the cell number.
I get it, thanks, that's a strong argument I will tell my advisor about.
Have a great day,
On Wed, Apr 25, 2018 at 12:30 PM, Smith, Barry F. wrote:
> On Apr 25, 2018, at 2:12 PM, Manuel Valera wrote:
>
> Hi and thanks for the quick answer,
>
> Yes, it looks like I am using MPICH for my configure instead of using the system
> installation of OpenMPI. In the past I had better experience using MPICH, but
> maybe this will be a conflict; should I reconfigure using the system MPI
> installation?
Hi and thanks for the quick answer,
Yes, it looks like I am using MPICH for my configure instead of using the system
installation of OpenMPI. In the past I had better experience using MPICH,
but maybe this will be a conflict; should I reconfigure using the system
MPI installation?
I solved the problem
Hi Manuel,
this looks like the wrong MPI is being used. You should see an increasing
number of processes, e.g.
Number of MPI processes 1 Processor names node37
Triad: 6052.3571 Rate (MB/s)
Number of MPI processes 2 Processor names node37 node37
Triad: 9138.9376 Rate (MB/s)
It should not keep reporting
Number of MPI processes 1
it should report an increasing number of MPI processes. Do you have multiple
MPIs installed on your machine, with the "wrong" mpiexec in your path for
the MPI you built PETSc with?
What command line options did you use for ./configure ?
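A quick way to test which mpiexec/library pair is active (a minimal sketch, not code from this thread; the file name is mine): compile with the mpicc that PETSc was configured with and launch with the matching mpiexec.

    /* mpicheck.c: print rank/size/host for each process. With a mismatched
     * mpiexec, every process runs as its own size-1 job, which is exactly
     * the repeated "Number of MPI processes 1" symptom above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      int  rank, size, len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      MPI_Get_processor_name(name, &len);
      printf("rank %d of %d on %s\n", rank, size, name);
      MPI_Finalize();
      return 0;
    }

If PETSc built its own MPICH, the matching pair usually lives in
$PETSC_DIR/$PETSC_ARCH/bin (an assumption about the layout, not something
stated in this thread); "mpiexec -n 2 ./mpicheck" should print two "of 2"
lines, not two "of 1" lines.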
Hi,
I'm running scaling tests on my system to check why my scaling is so poor,
and after following the MPIVersion guidelines my scaling.log output looks
like this:
Number of MPI processes 1 Processor names node37
Triad: 12856.9252 Rate (MB/s)
Number of MPI processes 1 Processor names
On Wed, Apr 25, 2018 at 12:40 PM, Danyang Su wrote:
> Hi Matthew,
>
> In the worst case, every node/cell may have a different label.
>
Do not use Label for this. It's not an appropriate thing. If every cell is
different, just use the cell number.
Labels are for mapping a
Hi Matthew,
In the worst case, every node/cell may have a different label.
Below is one of the worst scenarios, with 102299 nodes and 102299
different labels, for testing. I found the time cost increases during the
loop: the first 9300 iterations take the least time (<0.5) while the last
9300 iterations take much
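A sketch of the pattern under discussion (illustrative only, not code from the thread; "dm" and the label name "material" are mine). It also shows why one distinct value per cell is redundant: the values just mirror the cell numbering DMPlex already provides.

    #include <petscdmplex.h>

    /* Worst case from the thread: a distinct label value for every cell. */
    static PetscErrorCode LabelEveryCell(DM dm)
    {
      PetscErrorCode ierr;
      PetscInt       cStart, cEnd, c;

      PetscFunctionBeginUser;
      ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells */
      ierr = DMCreateLabel(dm, "material");CHKERRQ(ierr);
      for (c = cStart; c < cEnd; ++c) {
        /* If every value is distinct, the label merely reproduces c, so the
         * loop (and its growing cost) can be dropped and c used directly. */
        ierr = DMSetLabelValue(dm, "material", c, c);CHKERRQ(ierr);
      }
      PetscFunctionReturn(0);
    }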
Dear Satish,
Your workaround worked! Great!
Thanks a lot for all your help, everyone. =)
Cheers,
Samuel
On 04/24/2018 06:33 PM, Satish Balay wrote:
Hm - I'm not sure why configure didn't try to set the -fPIC compiler options.
Workaround is:
CFLAGS=-fPIC FFLAGS=-fPIC CXXFLAGS=-fPIC
or
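For reference, those flags go straight onto the configure command line; a sketch, with every other option elided since I don't know the rest of Samuel's setup:

    ./configure CFLAGS=-fPIC CXXFLAGS=-fPIC FFLAGS=-fPIC ...

PETSc's configure also has a --with-pic option that requests position-independent code directly.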
On Wed, Apr 25, 2018 at 8:10 AM, Savneet Kaur wrote:
> Hello Again,
>
> Yes, I would like to find the initial 10% and 20% of the eigenpairs.
> But in addition to this, I also want to check the full spectrum.
>
> So, how shall I proceed with it?
>
First just set up the
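A minimal sketch of such a setup (my reconstruction of where the answer seems to be heading, not Matthew's actual reply); the diagonal test matrix is only there to make it self-contained. nev would be roughly 10-20% of the matrix size; for the full spectrum of a small problem, running with -eps_type lapack computes all eigenpairs.

    #include <slepceps.h>

    int main(int argc, char **argv)
    {
      Mat            A;
      EPS            eps;
      PetscInt       n = 100, nev = 10, i; /* nev = 10% of n */
      PetscErrorCode ierr;

      ierr = SlepcInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      /* Stand-in operator: a diagonal matrix with entries 1..n. */
      ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
      ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
      ierr = MatSetUp(A);CHKERRQ(ierr);
      for (i = 0; i < n; ++i) {
        ierr = MatSetValue(A, i, i, (PetscScalar)(i + 1), INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
      ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);  /* standard problem */
      ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);
      ierr = EPSSetDimensions(eps, nev, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
      ierr = EPSSetFromOptions(eps);CHKERRQ(ierr); /* honours -eps_nev, -eps_type, ... */
      ierr = EPSSolve(eps);CHKERRQ(ierr);

      ierr = EPSDestroy(&eps);CHKERRQ(ierr);
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = SlepcFinalize();
      return ierr;
    }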
Hello Again,
Yes, I would like to find the initial 10% and 20% of the eigenpairs.
But in addition to this, I also want to check the full spectrum.
So, how shall I proceed with it?
Thank You.
Regards,
savneet
On 25/04/2018 at 11:33, Matthew Knepley wrote:
On Wed, Apr 25, 2018 at
On Tue, Apr 24, 2018 at 11:57 PM, Danyang Su wrote:
> Hi All,
>
> I use DMPlex in unstructured grid code and recently found DMSetLabelValue
> takes a lot of time for large problems, e.g., num. of cells > 1 million. In
> my code, I use
>
I read your code wrong. For large
On Wed, Apr 25, 2018 at 5:10 AM, Savneet Kaur wrote:
> Hello,
>
> Warm Regards
>
> I am Savneet Kaur, a master's student at University Paris Saclay and
> currently pursuing an internship at CEA Saclay (France).
>
> I have recently started to understand the SLEPc and PETSc
On Tue, Apr 24, 2018 at 11:57 PM, Danyang Su wrote:
> Hi All,
>
> I use DMPlex in unstructured grid code and recently found DMSetLabelValue
> takes a lot of time for large problems, e.g., num. of cells > 1 million. In
> my code, I use
>
> DMPlexCreateFromCellList ()
>
> Loop
Hello,
Warm Regards
I am Savneet Kaur, a master's student at University Paris Saclay and
currently pursuing an internship at CEA Saclay (France).
I have recently started to understand the SLEPc and PETSc solvers by
taking up the tutorials for eigenvalue problems. In my internship work I