s would
> just be valgrind not supporting the instruction set. You could
> downgrade the target arch via compiler flags or see if the latest (dev
> version?) Valgrind supports the instruction.
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
raph_AGG() line 832 in
>>> /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/impls/gamg/agg.c
>>> [0]PETSC ERROR: #4 PCSetUp_GAMG() line 517 in
>>> /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/impls/gamg/gamg.c
>>> [0]PETSC ERROR: #5 PCSetUp() line 932 in
>>> /global/u2/m/madams/petsc_install/petsc/src/ksp/pc/interface/precon.c
>>> [0]PETSC ERROR: #6 KSPSetUp() line 381 in
>>> /global/u2/m/madams/petsc_install/petsc/src/ksp/ksp/interface/itfunc.c
>>> [0]PETSC ERROR: #7 KSPSolve() line 612 in
>>> /global/u2/m/madams/petsc_install/petsc/src/ksp/ksp/interface/itfunc.c
>>> [0]PETSC ERROR: #8 SNESSolve_NEWTONLS() line 224 in
>>> /global/u2/m/madams/petsc_install/petsc/src/snes/impls/ls/ls.c
>>> [0]PETSC ERROR: #9 SNESSolve() line 4350 in
>>> /global/u2/m/madams/petsc_install/petsc/src/snes/interface/snes.c
>>> [0]PETSC ERROR: #10 main() line 161 in
>>> /global/homes/m/madams/petsc_install/petsc/src/snes/examples/tutorials/ex19.c
>>> [0]PETSC ERROR: PETSc Option Table entries:
>>> [0]PETSC ERROR: -ksp_monitor_short
>>> [0]PETSC ERROR: -mat_type aijmkl
>>> [0]PETSC ERROR: -options_left
>>> [0]PETSC ERROR: -pc_type gamg
>>> [0]PETSC ERROR: -snes_monitor_short
>>>
>>>
On Wed, Jul 4, 2018 at 6:31 AM Matthew Knepley wrote:
> On Tue, Jul 3, 2018 at 10:32 PM Jeff Hammond
> wrote:
>
>>
>>
>> On Tue, Jul 3, 2018 at 4:35 PM Mark Adams wrote:
>>
>>> On Tue, Jul 3, 2018 at 1:00 PM Richard Tran Mills
>>> wrote:
>
e MPI+OpenMP application code will be appreciated, though I don't know if
> there are any good solutions.
> >
> > --Richard
> >
> > On Wed, Jul 4, 2018 at 11:38 PM, Smith, Barry F.
> wrote:
> >
> >Jed,
> >
> > You could use your
On Mon, Jul 9, 2018 at 6:38 AM, Matthew Knepley wrote:
> On Mon, Jul 9, 2018 at 9:34 AM Jeff Hammond
> wrote:
>
>> On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F.
>> wrote:
>>
>>>
>>> Richard,
>>>
>>> The problem is that
r model.
>>>
>>
>> If this implies that BoxLib will use omp-parallel and then use explicit
>> threading in a manner similar to MPI (omp_get_num_threads=MPI_Comm_size
>> and omp_get_thread_num=MPI_Comm_rank), then this is the Right Way to
>> write Op
On Mon, Jul 9, 2018 at 11:36 AM, Smith, Barry F. wrote:
>
>
> > On Jul 9, 2018, at 8:33 AM, Jeff Hammond wrote:
> >
> >
> >
> > On Fri, Jul 6, 2018 at 4:28 PM, Smith, Barry F.
> wrote:
> >
> > Richard,
> >
> > The
On Tue, Jul 10, 2018 at 9:33 AM, Jed Brown wrote:
> Mark Adams writes:
>
> > On Mon, Jul 9, 2018 at 7:19 PM Jeff Hammond
> wrote:
> >
> >>
> >>
> >> On Mon, Jul 9, 2018 at 7:38 AM, Mark Adams wrote:
> >>
> >>
On Tue, Jul 10, 2018 at 11:27 AM, Richard Tran Mills
wrote:
> On Mon, Jul 9, 2018 at 10:04 AM, Jed Brown wrote:
>
>> Jeff Hammond writes:
>>
>> > This is the textbook Wrong Way to write OpenMP and the reason that the
>> > thread-scalability of DOE appl
example, is an array assumed
> to be
> >>>>>> sorted (that is hard to know). With C++, one can use private to
> minimize
> >>>>>> data exposure.
> >>>>>>
> >>>>>
> >>>>> This just has to be coding disciplin
but let's discuss the
> communication pattern first. You said you are working with a FEM model,
> but also mention "igatherv". Is this for some sequential mesh
> processing task or is it related to the solver? There isn't a
> neighborhood igatherv and MPI_Igatherv isn't a pattern that should ever
> be needed in a FEM solver.
>
0200023a6)
>> >>>libevent_pthreads-2.1.so.6 =>
>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libevent_pthreads-2.1.so.6
>> (0x200023ae)
>> >>>libopen-rte.so.3 =>
>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libopen-rte.so.3
>> (0x200023b1)
>> >>>libopen-pal.so.3 =>
>> /autofs/nccs-svm1_sw/summit/.swci/1-compute/opt/spack/20180914/linux-rhel7-ppc64le/pgi-19.4/spectrum-mpi-10.3.0.1-20190611-4ymaahbai7ehhw4rves5jjiwon2laz3a/lib/libopen-pal.so.3
>> (0x200023c2)
>> >>>libXau.so.6 => /usr/lib64/libXau.so.6 (0x200023d1)
>> >>>
>> >>>
>> >>>> On Feb 7, 2020, at 2:31 PM, Smith, Barry F.
>> wrote:
>> >>>>
>> >>>>
>> >>>> ldd -o on the executable of both linkings of your code.
>> >>>>
>> >>>> My guess is that without PETSc it is linking the static version of
>> the needed libraries and with PETSc the shared. And, in typical fashion,
>> the shared libraries are off on some super slow file system so take a long
>> time to be loaded and linked in on demand.
>> >>>>
>> >>>> Still a performance bug in Summit.
>> >>>>
>> >>>> Barry
>> >>>>
>> >>>>
>> >>>>> On Feb 7, 2020, at 12:23 PM, Zhang, Hong via petsc-dev <
>> petsc-dev@mcs.anl.gov> wrote:
>> >>>>>
>> >>>>> Hi all,
>> >>>>>
>> >>>>> Previously I have noticed that the first call to a CUDA function
>> such as cudaMalloc and cudaFree in PETSc takes a long time (7.5 seconds) on
>> summit. Then I prepared a simple example as attached to help OCLF reproduce
>> the problem. It turned out that the problem was caused by PETSc. The
>> 7.5-second overhead can be observed only when the PETSc lib is linked. If I
>> do not link PETSc, it runs normally. Does anyone have any idea why this
>> happens and how to fix it?
>> >>>>>
>> >>>>> Hong (Mr.)
>> >>>>>
>> >>>>> bash-4.2$ cat ex_simple.c
>> >>>>> #include <stdio.h>
>> >>>>> #include <time.h>
>> >>>>> #include <cuda_runtime.h>
>> >>>>>
>> >>>>> int main(int argc,char **args)
>> >>>>> {
>> >>>>> clock_t start,s1,s2,s3;
>> >>>>> double cputime;
>> >>>>> double *init,tmp[100] = {0};
>> >>>>>
>> >>>>> start = clock();
>> >>>>> cudaFree(0);
>> >>>>> s1 = clock();
>> >>>>> cudaMalloc((void **)&init,100*sizeof(double));
>> >>>>> s2 = clock();
>> >>>>> cudaMemcpy(init,tmp,100*sizeof(double),cudaMemcpyHostToDevice);
>> >>>>> s3 = clock();
>> >>>>> printf("free time =%lf malloc time =%lf copy time =%lf\n",((double)
>> (s1 - start)) / CLOCKS_PER_SEC,((double) (s2 - s1)) /
>> CLOCKS_PER_SEC,((double) (s3 - s2)) / CLOCKS_PER_SEC);
>> >>>>>
>> >>>>> return 0;
>> >>>>> }
>> >>>>>
>> >>>>>
>> >>>>
>> >>>
>> >>
>> >
>>
>>
>> AMD
>>
>> https://www.anandtech.com/show/15581/el-capitan-supercomputer-detailed-amd-cpus-gpus-2-exaflops
>>
>>
>> _______
>> Xlab mailing list
>> x...@lists.cels.anl.gov
>> https://lists.cels.anl.gov/mailman/listinfo/xlab
>>
>>
>>
> Definitely +1 for variadic macros and for-loop declarations, but not VLAs.
>
>
> --
> Lisandro Dalcin
>
> Research Scientist
> Extreme Computing Research Center (ECRC)
> King Abdullah University of Science and Technology (KAUST)
> http://ecrc.kaust.edu.sa/
>
will be used.
> octave-iso2mesh-0:1.9.1-5.fc32.x86_64
> > petsc-0:3.12.3-2.fc32.x86_64
> >
> > I don't know what the equivalent in deb world is.
> >
> > So I would manually check on some of the petsc dependencies
> >
> > apt install libblas-dev
> >
> >
ive
>
> A 3x0, B 0x8, C 8x7 -> (ABC) is a valid 3x7 matrix (empty)
>
> If I understand you right, (AB) would be a 0x0 matrix, and it can no longer
> be multiplied against C
>
>
> Richard, what is the hardship in preserving the shape relations?
>
>
>
---send entire
error message to petsc-ma...@mcs.anl.gov--
What do I need to do to use a transpose view properly outside of M*V?
Thanks,
Jeff
We've had limited demand for more
> sophisticated operations on objects of type MATTRANSPOSE and there
> probably isn't much benefit in fusing the parallel version here anyway.
>
> Jeff Hammond writes:
>
> > I am trying to understand how to use a transpos
Jeff dumb. Make copy paste error. Sorry.
Jeff
On Mon, Jun 1, 2020 at 5:02 PM Jeff Hammond wrote:
> I'm still unable to get a basic matrix transpose working. I may be
> stupid, but I cannot figure out why the object is in the wrong state, no
> matter what I do.
>
> T
ish
>
> On Mon, 8 Jun 2020, Barry Smith wrote:
>
> >
> >Googling PetscMatlabEngine leads to 3.7 version but
> PetscMatlabEngineDestroy leads to 3.13.
> >
> > Any idea why some of the manual pages don't point to the latest
> version?
> >
> >Barry
> >
>
, Matthew Knepley wrote:
> >
> >> We default to ch3:sock. Scott MacLachlan just had a long thread on the
> >> Firedrake list where it ended up that reconfiguring using ch3:nemesis
> had a
> >> 2x performance boost on his 16-core proc, and noticeable effect on the
On Thu, Jul 23, 2020 at 9:35 PM Satish Balay wrote:
> On Thu, 23 Jul 2020, Jeff Hammond wrote:
>
> > Open-MPI refuses to let users over subscribe without an extra flag to
> > mpirun.
>
> Yes - and when using this flag - it lets the run through - but there is
> still
t;>
>>
>>
>> > On Sep 2, 2020, at 8:58 AM, Mark Adams wrote:
>>
>>
>> >
>>
>>
>> > PETSc mallocs seem to boil down to PetscMallocAlign. There are switches
>> in here but I don't see a Cuda malloc. THis would see
t
> some of these names even mean. What does "PASCAL60" vs. "PASCAL61" even
> mean? Do you know of where this is even documented? I can't really find
> anything about it in the Kokkos documentation. The only thing I can really
> find is an issue or two a
okkos etc for THAT
> system.
>
I will try to write something for you tomorrow. For NVIDIA hardware, the
sole dependency will be nvcc.
Jeff
, Barry Smith wrote:
>
>
, which has useful features
like atomics in the language (which are useful not just for threaded
programs, but also MPI+shared-memory, which I know the PETSc team likes).
I too am a huge fan of VLAs, but VLA for non-POD types are "hard", which is
why C++ doesn't sup
Matt
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
>
>
On Wed, Jun 29, 2016 at 8:18 PM, Barry Smith wrote:
>
> > On Jun 29, 2016, at 10:06 PM, Jeff Hammond
> wrote:
> >
> >
> >
> > On Wednesday, June 29, 2016, Barry Smith wrote:
> >
> >Who are these people and why to they have this webpage?
&
> The default settings often don’t work well for 3D problems.
Are 2D (1D?) problems really the common case for PDE solvers? Aren't
interesting problems 3D? Shouldn't the defaults be set to optimize for 3D?
Jeff
be in, just not for the very large scale.
>
> Rough ideas and pointers to publications are all useful. There is an
> extremely short fuse so the sooner the better,
>
> Thanks
>
> Barry
>
>
>
>
hierarchy of fast memory (e.g.
HBM), regular memory (e.g. DRAM), and slow (likely nonvolatile) memory on
a node. Xeon Phi and some GPUs have caches, but it is unclear to me if it
actually benefits software like PETSc to consider them. Figuring out how
to run PETSc effectively on KNL should be generally useful...
Jeff
On Thu, Jul 7, 2016 at 4:34 PM, Richard Mills
wrote:
> On Fri, Jul 1, 2016 at 4:13 PM, Jeff Hammond
> wrote:
>
>> [...]
>>
>> Maybe I am just biased because I spend all of my time reading
>> www.nextplatform.com, but I hear machine learning is becoming an
>
t your
adjoint matrices, provided the NVM bandwidth is sufficient.
*Disclaimer: All of these are academic comments. Do not use them to try to
influence others or make any decisions. Do your own research and be
skeptical of everything I derived from the Internet.*
Jeff
On Fri, Jul 8, 2016 at 9:48 AM, Richard Mills
wrote:
>
>
> On Fri, Jul 8, 2016 at 9:40 AM, Jeff Hammond
> wrote:
>
>>
>>> > 1) How do we run at bandwidth peak on new architectures like Cori or
>>> Aurora?
>>>
>>> Huh, there is a
On Friday, July 8, 2016, Barry Smith wrote:
>
> > On Jul 8, 2016, at 12:17 PM, Jeff Hammond > wrote:
> >
> >
> >
> > On Fri, Jul 8, 2016 at 9:48 AM, Richard Mills > wrote:
> >
> >
> > On Fri, Jul 8, 2016 at 9:40 AM, Jeff Hammond >
face. I would
> > check first.
> >
> >Matt
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments
> > is infinitely more interesting than any results to which their
> experiments
> > lead.
> > -- Norbert Wiener
> >
> >
>
If PETSc has symbols greater than 6 characters, it has never been Fortran 77
compliant anyways, so it's a bit strange to ask permission to break it in
another way in the examples.
Sorry for being a pedant but we've had this debate in the MPI Forum and it is
false to conflate punchcard Fortran
just absurd. I am wondering if you have any clue
> on why this happens and how to fix it. FYI, I attached the driver for the
> PETSc example.
>
> Thanks,
> Hong
>
>
>
>
>
>
https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html has
real{32,64,128} that does a nice job for this situation.
Jeff
Sent from my iPhone
> On Sep 1, 2016, at 4:15 PM, Blaise A Bourdin wrote:
>
> Hi,
>
> If I recall correctly, fortran does not mandate that "selected_real_kin
1, 2016, Blaise A Bourdin wrote:
> Neat.
> I was not aware of this.
>
> Blaise
>
> > On Sep 1, 2016, at 9:26 PM, Jeff Hammond > wrote:
> >
> > https://gcc.gnu.org/onlinedocs/gfortran/ISO_005fFORTRAN_005fENV.html has
> real{32,64,128} that does a nice
> University of Illinois Urbana-Champaign
>
>
>
>
>
> On Sep 1, 2016, at 6:15 PM, Blaise A Bourdin > wrote:
>
> If I recall correctly, fortran does not mandate that
> "selected_real_kind(5)" means the same across compilers, so that hardcoding
> kind v
some of them. If anyone has
> useful information on the needs/approaches of these projects please let us
> know so we can determine if there any place for PETSc.
>
>Thanks
>
> Barry
>
>
>
and non-unrolled also on Xeon.
>
>We've never done a good job of managing our unrolling, where, how and
> when we do it and macros for unrolling such as PetscSparseDensePlusDot.
> Intel would say just throw it all away.
>
>Barry
>
>
>
>
Is the PETSc build system not capable of running "find" on $MKLROOT as a hedge
against subdirectory reorganization?
At least with Intel compilers, "-mkl" is the easy button unless you're
trying to get threaded BLAS/LAPACK with ScaLAPACK.
Jeff
On Wed, Dec 21, 2016 at 7:37 AM Pierre Jolivet
wrote:
g PETSc via the PyLith installer which builds mpich, NetCDF, HDF5,
> etc from scratch using the Apple's clang. If I swap mpich3.2 for
> mpich3.1.3, these segfaults go away, so I am inclined to blame mpich3.2.
> >
> > If this is a unknown or undiagnosed issue, I will try a sta
e I should do
> with-batch=1,
>
>
> On Wed, Feb 15, 2017 at 4:36 PM, Mark Adams wrote:
>
>> I get this error on the KNL partition at NERSC. P4est works on the
>> Haswell partition and other downloads seem to work on the KNL partition.
>> Any ideas?
>>
h the option
> --with-blas-lapack-lib option.
>
> Wrt libmkl_intel_thread vs libmkl_sequential - configure defaults to
> libmkl_sequential for reqular use.
>
> libmkl_intel_thread is picked up only when MKL pardiso is requested
> [so the assumption here is aware of pardiso requirements wrt
&
ALUES,is,ierr)
> > call VecScatterCreate(this%xVec,PETSC_NULL_OBJECT,vec,is,this%
> from_petsc,ierr)
> > ! reverse scatter object
> >
> > If we want to make this change then I could help a developer or you
> > can get me set up with a (small) test problem and a branch and I can
> > do it at NERSC.
> >
> > Thanks,
>
>
etween).
One consequence of using libnuma to manage MCDRAM is that one can call
numa_move_pages, which Jed has asserted is the single most important
function call in the history of memory management ;-)
Jeff
> Perhaps I should try memkind calls since they may become much better.
>
&g
. As I understand it, calloc != malloc+memset, and the
> differences might be important in multicore+multithreading scenarios and
> the first-touch policy.
>
>
My intuition is that any HPC code that benefits from mapping the zero page
vs memset is doing something wrong.
Jeff
>
> Disappointing on my Mac.
>
> WARNING: Could not locate OpenMP support. This warning can also occur if
> compiler already supports OpenMP
> checking omp.h usability... no
> checking omp.h presence... no
> checking for omp.h... no
> omp.h is required for this library
>
Blame Apple for not s
> On Mar 10, 2017, at 12:42 PM, Jed Brown wrote:
>
> Jeff Hammond writes:
>> My intuition is that any HPC code that benefits from mapping the zero page
>> vs memset is doing something wrong.
>
> The relevant issue is the interface, from
>
> a = malloc(size
On Sat, Mar 11, 2017 at 9:00 AM Jed Brown wrote:
> Jeff Hammond writes:
> > I agree 100% that multithreaded codes that fault pages from the main
> thread in a NUMA environment are doing something wrong ;-)
> >
> > Does calloc *guarantee* pages are not mapped? If I calloc
s malloc, which also
> doesn't promise unfaulted pages. This is one reason some of us keep saying
> that OpenMP sucks. It's a shitty standard that obstructs better standards
> from being created.
>
>
> On March 12, 2017 11:19:49 AM MDT, Jeff Hammond
> wrote:
>
>
On Mon, Mar 13, 2017 at 8:08 PM, Jed Brown wrote:
>
> Jeff Hammond writes:
>
> > OpenMP did not prevent OpenCL,
>
> This programming model isn't really intended for architectures with
> persistent caches.
>
It's not clear to me how much this should matter
On Tue, Mar 14, 2017 at 8:52 PM Jed Brown wrote:
> Jeff Hammond writes:
>
> > On Mon, Mar 13, 2017 at 8:08 PM, Jed Brown wrote:
> >>
> >> Jeff Hammond writes:
> >>
> >> > OpenMP did not prevent OpenCL,
> >>
> >>
;>
> >> I usually say tens of thousands. Our users meeting attracts 100+
> presenters
> >> each year.
> >>
> >> Matt
> >>
> >>
> >>> Any other metrics would be welcome as well.
> >>>
> >>> Thanks,
> >>> Dave
> >>>
> >>
> >>
> >>
> >> --
> >> What most experimenters take for granted before they begin their
> >> experiments is infinitely more interesting than any results to which
> their
> >> experiments lead.
> >> -- Norbert Wiener
> >>
> >> http://www.caam.rice.edu/~mk51/
>
pilers/conftest.c
>>>> > > Possible ERROR while running compiler:
>>>> > > stderr:
>>>> > > ModuleCmd_Load.c(244):ERROR:105: Unable to locate a modulefile for
>>>> > > 'autoconf'
>>>> > > Mod
w and Ubuntu platforms.
You can get almost all compiler versions and MPI libraries in both, with
reasonable effort.
I can do an initial implementation if you aren’t inclined to do it
yourselves.
Jeff
> >Basically another Satish, ideas on how to find such a person?
>
> Impossible to find "another Satish".
>
On Sat, Dec 30, 2017 at 3:34 PM Jed Brown wrote:
> Jeff Hammond writes:
>
> > On Sat, Dec 30, 2017 at 12:04 PM Jed Brown wrote:
> >
> >> "Smith, Barry F." writes:
> >>
> >> >> On Dec 30, 2017, at 3:53 AM, Matthew Knepley
> wrot
lg;
> ierr =
> MPI_Attr_get(PETSC_COMM_WORLD,Petsc_Counter_keyval,&counter,&flg);CHKERRQ(ierr);
> if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_CORRUPT,"Bad MPI
> communicator supplied ???");
> }
>
> Any ideas?
>
> Mark
>
>
>
>> Is this a normal and definitive change or an unwanted/unobserved bug?
>>
>> Thanks,
>>
>> Eric
>>
>> ps: here are the logs:
>>
>> this night:
>>
>> -
>>
>> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.10.02h00m01s_configure.log
>> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.10.02h00m01s_make.log
>>
>>
>> a day before:
>>
>> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.09.02h00m02s_configure.log
>> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2018.02.09.02h00m02s_make.log
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
>
t
> file or don't include it before mpi.h is included.
>
Indeed, everybody should compile MPI codes with "-DMPICH_SKIP_MPICXX=1
-DOMPI_SKIP_MPICXX=1" now.
I'll ask MPICH and Open-MPI to switch the default to exclude C++ bindings.
Jeff
t; >
> >
> ***
> >
> >
> > [knepley@rush:/projects/academic/knepley/PETSc3/petsc]$ which mpicc
> >
> > /util/common/openmpi/3.0.0/gcc-4.8.5/bin/mpicc
> >
> > [kneple
he
> PETSC_MIXED_LEN / PETSC_END_LEN to use size_t instead of int.
>
er arguments manually and it would
> be great to remove that manual step.
>
>Thanks
>
> Barry
>
>
> > On May 2, 2018, at 11:42 PM, Jeff Hammond
> wrote:
> >
> > Or you could just use ISO_C_BINDING. Decent compilers should support it.
> >
&g
etsc/src/snes/examples/tutorials/ex19.c
>> > > [0]PETSC ERROR: PETSc Option Table entries:
>> > > [0]PETSC ERROR: -da_refine 3
>> > > [0]PETSC ERROR: -ksp_monitor
>> > > [0]PETSC ERROR: -mat_type aijmkl
>> > > [0]PETSC ERROR: -options_left
>> > > [0]PETSC ERROR: -pc_type gamg
>> > > [0]PETSC ERROR: -snes_monitor_short
>> > > [0]PETSC ERROR: -snes_view
>> > > [0]PETSC ERROR: End of Error Message ---se
>> >
>> > On Sat, Jun 30, 2018 at 3:08 PM Mark Adams > > <mailto:mfad...@lbl.gov>> wrote:
>> >
>> > OK, that got further.
>> >
>> > On Sat, Jun 30, 2018 at 3:03 PM Mark Adams > > <mailto:mfad...@lbl.gov>> wrote:
>> >
>> > Like this?
>> >
>> >
>> >
>>
>> '--with-blaslapack-lib=/opt/intel/compilers_and_libraries_2018.1.163/linux/mkl/lib/intel64/libmkl_intel_thread.a',
>> >
>> >
>> > On Sat, Jun 30, 2018 at 3:00 PM Mark Adams > > <mailto:mfad...@lbl.gov>> wrote:
>> >
>> >
>> > Specify either "--with-blaslapack-dir" or
>> > "--with-blaslapack-lib --with-blaslapack-include".
>> > But not both!
>> >
>> >
>> > Get rid of the dir option, and give the full path to the
>> > library.
>> >
>> >
>> > What is the syntax for giving the full path?
>> >
>>
>
>
>
On Sun, Jul 1, 2018 at 10:43 AM Victor Eijkhout
wrote:
>
>
> On Jul 1, 2018, at 12:30 PM, Jeff Hammond wrote:
>
> If you really want to do this, then replace COMMON with CORE to specialize
> for SKX. There’s not point to using COMMON if you’ve got the MIC path
> already.
&
==
>>
>>
>>
>>
>> ___
>> discuss mailing list disc...@mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
>
>
;>> = EXIT CODE: 56
>>>> = CLEANING UP REMAINING PROCESSES
>>>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>>>
>>>> =======
>>>>
>>>>
>>>>
>>>>
>>>> __
:
>> (1) the MPICH library ("application called MPI_Abort...") and (2) the job
>> launcher ("BAD TERMINATION..."). You can eliminate the messages from the
>> job launcher by providing an error code of 0 in MPI_Abort.
>>
>> ~Jim.
>>
>>
>
m two
>> > sources: (1) the MPICH library ("application called MPI_Abort...") and (2)
>> > the job launcher ("BAD TERMINATION..."). You can eliminate the messages
>> > from the job launcher by providing an error code of 0 in MPI_Abort.
>> >
>
's confusing to people when you suggest any other relationship than
"Jeff is a dude that responses to threads on the MPICH discuss list".
Jeff
On Fri, Feb 21, 2014 at 1:54 PM, Munson, Todd S. wrote:
>
> Sounds like someone needs to look up the definition of "customer s
CESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===
>
>
>
>
t; = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>> ===========
>>
>>
>>
>>
>
>
>
https://trac.mpich.org/projects/mpich/ticket/2038 has the patches.
Jeff
On Fri, Feb 21, 2014 at 3:47 PM, Jeff Hammond wrote:
> Barry:
>
> Can you tolerate the following workaround for Hydra's error cleanup or
> do you need it to be internal? I presume you know enough bash to
let
them decide how to handle it, but he learned MPI from Bill Gropp, so
he might not know anything ;-)
I apologize for being unpleasant earlier.
Best,
Jeff
>
> Barry
>
> On Feb 21, 2014, at 3:10 PM, Jeff Hammond wrote:
>
>> Barry:
>>
>> Would the following beha
Jed,
Did you look at my patch or the demonstration yet? I posted all the details
this afternoon.
I tried very hard to support verbosity suppression in a reasonable way at
runtime.
Do you really want an MPIX call that is equivalent to setenv("")? Is
the extra code worth it? (These are seriou
I'll look into a general setenv-like mechanism for CVARs rather than an
abort-specific one-off. Might be there already through MPI_T interface (which
is standard, unlike MPIX).
Jeff
Sent from my iPhone
> On Feb 21, 2014, at 11:35 PM, Jed Brown wrote:
>
> Jeff Hammond writes:
Thanks for catching this. I tested my patch but not through valgrind.
Thanks also for figuring out the line break issue. I figured that was
coming from the device but didn't track it down.
Jeff
On Sat, Feb 22, 2014 at 2:07 PM, Jed Brown wrote:
> Jeff Hammond writes:
&g
the rest of the RMA
calls use ranks[i]. I assume this is innocuous, but unless you have a
mutex on 'sf', it is theoretically possible that sf->ranks[i] could be
changed by another thread in such a way as to lead to an undefined or
incorrect program. If you prohibit this behavior as
gt; incorrect program. If you prohibit this behavior as part of the
>> internal design contract, then it should be noted.
>
> sf->ranks is not mutated after setup.
Except due to DRAM SDC, rowhammer exploits, gamma ray muons... :-D
In any case, it will probably save you 0-10 cycles to use the
automatic variable rather than to dereference the struct pointer again
:-)
Best,
Jeff
/memkind#fork-destination-box.
I'm sure that the memkind developers would be willing to review your
pull request once you've implemented memkind_move_pages().
Jeff
r heads totally up their asses
>
> formatted source code with astyle --style=linux --indent=spaces=4 -y -S
>
> when everyone knows that any indent that is not 2 characters is totally
> insane :-)
>
> Barry
>
>
>> On Jun 3, 2015, at 9:37 PM, Jeff Hammond wrote:
&g
On Wed, Jun 3, 2015 at 9:58 PM, Jed Brown wrote:
> Jeff Hammond writes:
>> The beauty of git/github is one can make branches to try out anything
>> they want even if Jed thinks that he knows better than Intel how to
>> write system software for Intel's hardware
If everyone would just indent with tabs, we could just set the indent
spacing with our editors ;-)
On Wed, Jun 3, 2015 at 10:01 PM, Barry Smith wrote:
>
>> On Jun 3, 2015, at 9:58 PM, Jeff Hammond wrote:
>>
>> http://git.mpich.org/mpich.git/blob/HEAD:/src/mpi/init/init.c
mkl_intel_lp64.a,/home/hector/mkl_static/libmkl_core.a,/home/hector/mkl_static/libmkl_intel_thread.a]
> --with-scalapack-include=/share/apps/intel2/mkl/include
> --with-scalapack-lib=[/home/hector/mkl_static/libmkl_scalapack_lp64.a,/home/hector/mkl_static/libmkl_blacs_openmpi_lp64.a]
> --with-valgrind=1 --with-valgrind-dir=/home/hector/installed
> --with-shared-libraries=0 --with-fortran-interfaces=1
> --FC_LINKER_FLAGS="-openmp -openmp-link static" --FFLAGS="-openmp
> -openmp-link static" --LIBS="-Wl,--start-group
> /home/hector/mkl_static/libmkl_intel_lp64.a
> /home/hector/mkl_static/libmkl_core.a
> /home/hector/mkl_static/libmkl_intel_thread.a -Wl,--end-group -liomp5 -ldl
> -lpthread -lm"
> >
> > Thanks for your help!
> >
> > Hector
>
>
Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276
> http://www.math.lsu.edu/~bourdin
>
>
>
>
>
>
>
>
:
> Satish Balay writes:
> > And then the MPI c++ libraries..
>
> The current MPI standard does not contain C++ bindings.
>
ntel-16.0/compilers_and_libraries_2016.0.083/mac/compiler/lib/libifcore.a
> make a difference?
>
> And what do you have for:
>
> cd /opt/HPC/mpich-3.1.4-intel16.0/lib
> ls
> nm -Ao libmpiifort.* |grep get_command_argument
>
> BTW: I'm assuming you have sept 20 or newer petsc master branch.
>
> Satish
>
overflow.com/questions/1465036/install-python-2-6-in-centos
> [install prebuilt binary from 'fedora epel'
>
>
> http://bda.ath.cx/blog/2009/04/08/installing-python-26-in-centos-5-or-rhel5/
> [install some devel packages from rhel - before attempting to compile
> python from
On Tuesday, October 13, 2015, Satish Balay wrote:
> On Tue, 13 Oct 2015, Jeff Hammond wrote:
>
> > PETSc can build MUMPS. Why not Python? :-)
>
> and gcc :)
>
>
I have GCC builds almost completely automated for similar reasons as Python
2.4 (eg my CentOS box is stuck with
s changed on one process but
> not on others triggering disaster in the MatAssemblyEnd_MPIA().
> >>
> >> This is now fixed in the maint, master and next branches and will be in
> the next patch release. I have also attached the patch to this email.
> >>
> >> Barry
> >>
> >
>
>
done this way. Who wants to convert a
> code to __float128 and have to then label all floating point numbers with a
> q which of course also means the code is not compilable with other
> compilers in double.
>
> It's ugly, but you can put it in a macro:
>
> #define NUM(a) a ## q
>
overwriting the change I make.
>
> There is a similar bug in SuperLU_dist that I also fix by hand.
>
> Garth
>
; P.O. Box 808, L-561, Livermore, CA 94551
> > Phone - (925) 422-4377, Email - rfalg...@llnl.gov, Web -
> http://people.llnl.gov/falgout2
>
>
- use this file for your next run - to catch the actual issues.
>
> https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto has some of
> this info..
>
>
> We normally recommend using valgrind on linux with mpich built with
> '--enable-g=meminit' [--download-mpich o
we can check for at configure time or we just wait that
> > Intel ships MKL with a shared version of libmkl_blacs_(i)lp64?
>
> You can write a test for it or wait. Since the test needs to execute
> code, it would need to run as part of conftest on a batch system.
>
Jeff, who works for Intel but is not responsible for any of our software
products