Matthew Knepley writes:
>> BLAS. (Here an interesting point opens: I assume an efficient BLAS
>> implementation, but I am not so sure about how the different BLAS do
>> things internally. I work from the assumption that we have a very well
>> tuned BLAS
On Mon, Apr 3, 2017 at 2:50 PM, Ingo Gaertner
wrote:
> Dear all,
> as part of my studies I would like to implement a simple finite volume CFD
> solver (following the textbook by Ferziger) on an unstructured, distributed
> mesh. It seems like the DMPlex class with its
On Mon, Apr 3, 2017 at 11:45 AM, Filippo Leonardi wrote:
> On Monday, 3 April 2017 02:00:53 CEST you wrote:
>
> > On Sun, Apr 2, 2017 at 2:15 PM, Filippo Leonardi wrote:
> >
> > > Hello,
> > >
> > > I have a project in mind and
> On Apr 3, 2017, at 10:05 AM, Jed Brown wrote:
>
> Barry Smith writes:
>
>> SNESGetUsingInternalMatMFFD(snes,); Then you can get rid of the horrible
>>
>> PetscBool flg;
>> ierr =
> On Apr 3, 2017, at 1:10 PM, Wenbo Zhao wrote:
>
> Barry,
> Hi. I am sorry for replying to you so late.
> I read the code you sent me, which creates a VecScatter for the ghost
> points on the rotation boundary.
> But I am still not clear on how to use it to assemble the
Yes, one should rely on MKL (or Cray LibSci, if using the Cray toolchain)
on Cori. But I'm guessing that this will make no noticeable difference for
what Justin is doing.
--Richard
On Mon, Apr 3, 2017 at 12:57 PM, murat keçeli wrote:
> How about replacing
How about replacing --download-fblaslapack with vendor specific
BLAS/LAPACK?
Murat
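A sketch of what Murat's suggestion could look like on Cori. Hedged: `--with-blaslapack-dir` is the standard PETSc configure option for pointing at a vendor BLAS/LAPACK, `$MKLROOT` is whatever the `mkl` module sets, and the remaining flags are simply carried over from the Cray-toolchain configure line shown elsewhere in the thread:

```shell
# Replace --download-fblaslapack with the MKL that the `mkl` module provides.
./configure --with-blaslapack-dir=$MKLROOT \
  --with-cc=cc --with-cxx=CC --with-fc=ftn \
  --with-clib-autodetect=0 --with-cxxlib-autodetect=0 \
  --with-fortranlib-autodetect=0 \
  --with-debugging=0 --with-mpiexec=srun --with-64-bit-indices=1
```

With the Cray compiler wrappers, dropping the BLAS/LAPACK options entirely may also work, since the wrappers link Cray LibSci by default.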
On Mon, Apr 3, 2017 at 2:45 PM, Richard Mills
wrote:
> On Mon, Apr 3, 2017 at 12:24 PM, Zhang, Hong wrote:
>
>>
>> On Apr 3, 2017, at 1:44 PM, Justin Chang
Dear all,
as part of my studies I would like to implement a simple finite volume CFD
solver (following the textbook by Ferziger) on an unstructured, distributed
mesh. It seems like the DMPlex class with its DMPlex*FVM methods has
prepared much of what is needed for such a CFD solver.
Unfortunately
On Mon, Apr 3, 2017 at 12:24 PM, Zhang, Hong wrote:
>
> On Apr 3, 2017, at 1:44 PM, Justin Chang wrote:
>
> Richard,
>
> This is what my job script looks like:
>
> #!/bin/bash
> #SBATCH -N 16
> #SBATCH -C knl,quad,flat
> #SBATCH -p regular
> #SBATCH -J
On Apr 3, 2017, at 1:44 PM, Justin Chang
> wrote:
Richard,
This is what my job script looks like:
#!/bin/bash
#SBATCH -N 16
#SBATCH -C knl,quad,flat
#SBATCH -p regular
#SBATCH -J knlflat1024
#SBATCH -L SCRATCH
#SBATCH -o knlflat1024.o%j
#SBATCH
Richard,
This is what my job script looks like:
#!/bin/bash
#SBATCH -N 16
#SBATCH -C knl,quad,flat
#SBATCH -p regular
#SBATCH -J knlflat1024
#SBATCH -L SCRATCH
#SBATCH -o knlflat1024.o%j
#SBATCH --mail-type=ALL
#SBATCH --mail-user=jychan...@gmail.com
#SBATCH -t 00:20:00
#run the application:
cd
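The script is cut off above; a hedged sketch of how the rest of a flat-mode run might continue (`myapp` and the rank count are placeholders, and the `numactl` binding assumes the Cori-style layout where MCDRAM appears as NUMA node 1):

```shell
# (continuation sketch) run from the submit directory:
cd $SLURM_SUBMIT_DIR
# In quad/flat mode the MCDRAM shows up as NUMA node 1; bind the run to it
# so the job actually uses the high-bandwidth memory.
srun -n 1024 numactl --membind=1 ./myapp -log_view
```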
Fixing typo: Meant to say "Keep in mind that individual KNL cores are much
less powerful than an individual Haswell *core*."
--Richard
On Mon, Apr 3, 2017 at 11:36 AM, Richard Mills
wrote:
> Hi Justin,
>
> How is the MCDRAM (on-package "high-bandwidth memory")
Hi Justin,
How is the MCDRAM (on-package "high-bandwidth memory") configured for your
KNL runs? And if it is in "flat" mode, what are you doing to ensure that
you use the MCDRAM? Doing this wrong seems to be one of the most common
reasons for unexpected poor performance on KNL.
I'm not that
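A quick way to check the flat-mode layout and steer allocations into MCDRAM (a sketch; the node numbers follow the usual Cori quad/flat arrangement, so verify them on your own system, and `myapp` is a placeholder):

```shell
# List the NUMA nodes: in flat mode there are two, node 0 (DDR4)
# and node 1 (the 16 GB of MCDRAM).
numactl -H
# Prefer MCDRAM but fall back to DDR4 once the 16 GB fills up:
numactl --preferred=1 ./myapp
# Or fail hard if the working set does not fit in MCDRAM:
numactl --membind=1 ./myapp
```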
> On 1 Apr 2017, at 0:01, Toon Weyens wrote:
>
> Dear Jose,
>
> I have saved the matrices in Matlab format and am sending them to you using
> pCloud. If you want another format, please tell me. Please also note that
> they are about 1.4GB each.
>
> I also attach
Hi all,
On NERSC's Cori I have the following configure options for PETSc:
./configure --download-fblaslapack --with-cc=cc --with-clib-autodetect=0
--with-cxx=CC --with-cxxlib-autodetect=0 --with-debugging=0 --with-fc=ftn
--with-fortranlib-autodetect=0 --with-mpiexec=srun --with-64-bit-indices=1
So my makefile/script is slightly different from the tutorial directory.
Basically I have a shell for loop that runs the 'make runex48' four times
where -da_refine is increased each time. It showed Levels 1 0 then 2 1 0
because the job was in the middle of the loop, and I cancelled it halfway
when
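The for loop described above might look like the following sketch (the launcher is stubbed with `echo` so the loop can be dry-run; in the real script it would be `make runex48` or an `srun` line with the extra options):

```shell
#!/bin/sh
# Dry-run stub: replace `echo srun -n 1024` with the real launcher.
LAUNCH="echo srun -n 1024"
for refine in 1 2 3 4; do
  # Each pass refines the DMDA grid one more level than the last.
  $LAUNCH ./ex48 -da_refine "$refine" -log_view
done
```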
Hi Filippo,
recompiling PETSc twice is easy.
The difficulty is that both libraries will contain the same symbols
for the double and double-complex functions.
If they were part of C++ namespaces, this would be easier.
Michael.
On 04/03/2017 12:45 PM, Filippo Leonardi wrote:
On Monday,
Barry,
Hi. I am sorry for replying to you so late.
I read the code you sent me, which creates a VecScatter for the ghost
points on the rotation boundary.
But I am still not clear on how to use it to assemble the matrix.
I studied the example "$SLEPC_DIR/src/eps/examples/tutorials/ex19.c", which
is a 3D
On Monday, 3 April 2017 02:00:53 CEST you wrote:
> On Sun, Apr 2, 2017 at 2:15 PM, Filippo Leonardi wrote:
> > Hello,
> >
> > I have a project in mind and seek feedback.
> >
> > Disclaimer: I hope I am not abusing this mailing list with this idea.
> > If
Barry Smith writes:
>> On Apr 3, 2017, at 8:51 AM, Jed Brown wrote:
>>
>> Barry Smith writes:
>>
>>> Jed,
>>>
>>> Here is the problem.
>>>
>>> https://bitbucket.org/petsc/petsc/branch/barry/fix/even-huger-flaw-in-ts
> On Apr 3, 2017, at 8:51 AM, Jed Brown wrote:
>
> Barry Smith writes:
>
>> Jed,
>>
>> Here is the problem.
>>
>> https://bitbucket.org/petsc/petsc/branch/barry/fix/even-huger-flaw-in-ts
>
> Hmm, when someone uses -snes_mf_operator, we really
Barry Smith writes:
> Jed,
>
> Here is the problem.
>
> https://bitbucket.org/petsc/petsc/branch/barry/fix/even-huger-flaw-in-ts
Hmm, when someone uses -snes_mf_operator, we really just need
SNESTSFormJacobian to ignore the Amat. However, the user is allowed to
Matthew Knepley writes:
> I can't think why it would fail there, but DMDA really likes odd numbers
> of vertices, because it wants
> to take every other point, 129 seems good. I will see if I can reproduce
> once I get a chance.
This problem uses periodic boundary conditions
On Mon, Apr 3, 2017 at 6:11 AM, Jed Brown wrote:
> Justin Chang writes:
>
> > So if I begin with a 128x128x8 grid on 1032 procs, it works fine for the
> > first two levels of da_refine. However, on the third level I get this
> error:
> >
> > Level 3
Justin Chang writes:
> So if I begin with a 128x128x8 grid on 1032 procs, it works fine for the
> first two levels of da_refine. However, on the third level I get this error:
>
> Level 3 domain size (m): 1e+04 x 1e+04 x 1e+03, num elements 1024 x
> 1024 x 57
I fixed the KNL issue - apparently "larger" jobs need to have the
executable copied into the /tmp directory to speed up loading/startup time,
so I did that instead of executing the program via the makefile.
On Mon, Apr 3, 2017 at 12:45 AM, Justin Chang wrote:
> So if I begin with
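For the /tmp staging Justin describes, SLURM's `sbcast` copies a file to node-local storage on every node of the allocation; a sketch (`myapp` and the rank count are placeholders):

```shell
# Inside the job script: stage the binary to node-local /tmp on all nodes,
# then launch the local copy to cut executable-load time at scale.
sbcast ./myapp /tmp/myapp
srun -n 1024 /tmp/myapp -log_view
```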
Works for my code and ts/../ex2.c ... as you probably know.
Ed
On Sun, Apr 2, 2017 at 9:54 PM, Barry Smith wrote:
>
> Jed,
>
> Here is the problem.
>
> https://bitbucket.org/petsc/petsc/branch/barry/fix/even-huger-flaw-in-ts
>
>
> > On Apr 2, 2017, at 10:39 PM, Ed