I have 'mem...@linkedin.com' listed in mailman's
discard_these_nonmembers (privacy) option - but still this e-mail
came through?
Should we be removing e-mail ids from the subscriber list [from where these
requests keep coming]? It looks more like spam-bots doing this.
Satish
If PetscInitialize() is called from fortran - command line options should work.
An easy alternative is to have a file 'petscrc' in the executable run
dir - with the command line options listed [one option per line]
Satish
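For illustration, a minimal petscrc might look like the following - these
particular options are common examples, not ones taken from this thread:

-ksp_type gmres
-pc_type ilu
-ksp_monitor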
On Tue, 21 May 2013, Matthew Knepley wrote:
On Mon, May 20, 2013 at 11:01 PM,
As the message indicates - you should install MPICH for windows from
http://www.mpich.org/downloads/ [and not use --download-mpich - when
using MS compilers on windows]
Satish
Thanks!
Hao
From: Satish Balay [ba...@mcs.anl.gov]
Sent: May 22, 2013
Does this MPI have mpicc/mpif77 etc?
If so - just use:
--with-cc=mpicc --with-fc=mpif77
This is less error prone than specifying --with-mpi-include --with-mpi-lib etc..
Satish
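For illustration, a full configure invocation in this style might be
[assuming mpicc/mpif77 are in your PATH]:

./config/configure.py --with-cc=mpicc --with-fc=mpif77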
On Thu, 21 Sep 2006, jens.madsen at risoe.dk wrote:
Hi
Do you have any experience with TOPSPIN mpich
They are not real applications - but ex52.c, ex56.c, ex64.c in
src/mat/examples/tests show the usage of MatSetValuesBlocked().
Satish
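For illustration, a minimal sketch of MatSetValuesBlocked() usage with the
current C API - the sizes and values below are made up; the examples cited
above are the authoritative reference:

Mat         A;
PetscInt    row = 0, col = 0;               /* block indices, not point indices */
PetscScalar v[4] = {1.0, 2.0, 3.0, 4.0};    /* one 2x2 block, row-major by default */

MatCreate(PETSC_COMM_WORLD, &A);
MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 10, 10);
MatSetType(A, MATSEQBAIJ);                  /* a blocked matrix format */
MatSeqBAIJSetPreallocation(A, 2, 1, NULL);  /* bs=2, ~1 block nonzero per block row */
MatSetValuesBlocked(A, 1, &row, 1, &col, v, INSERT_VALUES);
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);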
On Wed, 4 Oct 2006, Paul Constantine wrote:
i just started using PETSc and i'm looking for a code example using
MatSetValuesBlocked.
thanks,
paul
On Thu, 12 Oct 2006, Julian wrote:
I think I got the preallocation right.. But it still gets stuck at pretty
much the same spot.
Can you verify with '-info' - and look for the number of mallocs in the
verbose output.
Satish
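For example, something along these lines - the exact wording of the -info
output varies across PETSc versions, and ./ex1 stands in for your executable:

./ex1 -info | grep -i malloc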
When it slowed almost to a full stop, I did a break all in
I'm attaching a patch that can be applied to petsc-2.3.2 for this to
work.
cd petsc-2.3.2-p3
patch -Np1 < opt.patch
Alternatively petsc-dev should have it
http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html
Let us know if this works.
Satish
On Mon, 18 Sep 2006, Barry Smith wrote:
oops - forgot the attachment. [attached now]
Satish
On Fri, 13 Oct 2006, Satish Balay wrote:
I'm attaching a patch that can be applied to petsc-2.3.2 for this to
work.
cd petsc-2.3.2-p3
patch -Np1 < opt.patch
Alternatively petsc-dev should have it
http://www-unix.mcs.anl.gov/petsc/petsc
If each PETSc function is called by each processor, then a function like
VecSet(F_global, zero), where F_global is a parallel vector, will be
called many times. In my opinion, calling it once should be enough.
Regards,
Yixun
- Original Message -
From: Satish Balay balay
On Sat, 21 Oct 2006, Yixun Liu wrote:
Hi,
I installed PETSc on Windows XP (MS VC++ 6.0, Cygwin, a standard
dual-processor Dell PC).
I did as the PETSc installation page says:
$ ./config/configure.py --with-mpi-dir=d:/MPICH2
However, the error is a Fortran error: mpi_init() could not be
I have the symbol at the following location..
asterix:/home/balay/soft/petsc-2.2.1/lib/libg/asterix> nm -Ao *.a | grep PETSC_COMM_SELF | grep -v ' U '
libpetsc.a:init.o:000c B PETSC_COMM_SELF
Perhaps there were errors when the libraries were built?
Satish
On Tue, 24 Oct 2006, Isabel Gil
On Thu, 26 Oct 2006, Yixun Liu wrote:
Hi,
Can I compile and link it in the VC environment and then copy the .exe to a
directory?
you can just go to the dir which has the executable. [copy is not necessary]
Then, at command line run mpiexec -n 2 xxx.exe?
yes
I try it, but the error is
certainly not something which
requires a reinstall of mpich.
Satish
On Thu, 26 Oct 2006, Yixun Liu wrote:
Hi,
I tried what you said and mpiexec -n 2 -localonly is ok. Do I need to
reinstall the MPICH2?
Best,
Yixun
- Original Message -
From: Satish Balay balay at mcs.anl.gov
On Sun, 29 Oct 2006, Yixun Liu wrote:
Hi,
I use PETSC with other libs, but they use different runtime lib.
I don't understand this statement.
I hope to configure PETSc with the Debug multithreaded DLL runtime lib. How
do I do it?
1. Build PETSc libraries with the default configure options
[The
Libraries built with different compiler options use different runtime
libs, and this will cause a lib conflict. So I need to configure
PETSc to use the same runtime lib as the other libraries.
Best,
Yixun
- Original Message -
From: Satish Balay balay at mcs.anl.gov
To: PETSC petsc
On Tue, 31 Oct 2006, Yixun Liu wrote:
Hi,
Should be COPTFLAGS='-MDd -Z7' CXXOPTFLAGS='-MDd -Z7' due to debug version?
-Z7 already indicates debugging - so both should be fine..
Satish
On Wed, 1 Nov 2006, Yixun Liu wrote:
Hi,
I use Cygwin on Windows XP. I want to know how to set the system environment
variable PETSC_DIR. PETSC_DIR = /cygdrive/d/myvc/petsc-2.3.2-p3 or PETSC_DIR
= d:\myvc\petsc-2.3.2-p3? The former can be recognized by cygwin but not by
windows. The latter
On Sat, 4 Nov 2006, Yixun Liu wrote:
Hi,
I config PETSC with,
$ ./config/configure.py --with-mpi-dir=/cygdrive/d/MPICH2 --with-cc='win32fe
cl' --with-fc=0 --with-cxx='win32fe cl' --with-clanguage=cxx
On Thu, 23 Nov 2006, sqbang wrote:
Hi, all
when I create a matrix with MatCreateSeqAIJ() and the matrix is
symmetric, how can I complete it while only half the data is inserted?
Is the memory needed half that of a non-symmetric matrix of the same
size? If not, how should I set it?
thanks
On Fri, 24 Nov 2006, Matteo Semplice wrote:
Thanks for all the replies.
I always thought that binary format would be best, but I couldn't find any
description of the binary format. Could you point me to a relevant piece of
the docs?
This is documented in the man page for VecLoad() and
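[The reply is cut off here. A hedged sketch of the binary round trip with
the current C API - x and y are assumed already created, and older releases
such as 2.3.x used the signature VecLoad(viewer, type, &vec) instead:

PetscViewer viewer;
/* write vector x in PETSc binary format */
PetscViewerBinaryOpen(PETSC_COMM_WORLD, "vec.dat", FILE_MODE_WRITE, &viewer);
VecView(x, viewer);
PetscViewerDestroy(&viewer);
/* read it back into y */
PetscViewerBinaryOpen(PETSC_COMM_WORLD, "vec.dat", FILE_MODE_READ, &viewer);
VecLoad(y, viewer);
PetscViewerDestroy(&viewer);
]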
Have you rebuilt PETSc libraries with mpich2 - when you made this change?
If so - send us the relevant logs at petsc-maint at mcs.anl.gov
[configure.log, make_log and log for 'make test']
Satish
On Sat, 25 Nov 2006, Ben Tay wrote:
Hi,
I'm trying to use MPICH2 for the mpi during compilation
On Tue, 28 Nov 2006, Matthew Knepley wrote:
On 11/28/06, Ben Tay zonexo at gmail.com wrote:
hi,
I'm using win xp with visual fortran and visual c++. i had compiled in
cygwin w/o any problems and most examples worked. however ex16f90 in the mat
directory gave the error of unresolved
We haven't tried gcc+compaq fortran. I don't think it will work.
You'll need the C compiler to install PETSc libraries. But once the
libraries are installed - you might be able to copy/use them on a
different machine with only the fortran compiler [provided everything
else PETSc is configured
There are already tools to get this dump
xwd | xpr -device ps foo.ps
[now when the cursor changes - click on the window to be dumped]
xv
[use the grab option, and then save as tiff/jpg/ps format]
Perhaps there are other tools that can be used. Hence there are
no extra tools in PETSc to do this.
#5 0xbfff9890 in ?? ()
#6 0x406873a8 in __dtors_list_end () from /opt/intel_fc_80/lib/libifcore.so.5
#7 0x0002 in ?? ()
#8 0x in ?? ()
(gdb)
This all makes me think this is an INTEL compiler bug, and has nothing to
do with my code.
Any ideas?
Randy
Satish Balay wrote:
Looks
try your other suggestions as well. However, this code has worked
flawlessly until now, with a model much much larger than I've used in the
past.
Satish Balay wrote:
- Not sure what SIGUSR1 means in this context.
- The stack doesn't show any PETSc/user code. Was
this code
asterix:/home/balay/spetsc/src/snes/examples/tutorials> mpiexec -np 2 valgrind -q --tool=memcheck ./ex5
Number of Newton iterations = 4
asterix:/home/balay/spetsc/src/snes/examples/tutorials>
The above is with MPICH2 and valgrind doesn't find any problems.
So I'll suggest installing another
On Wed, 31 May 2006, Matt Funk wrote:
Hi,
I need to build PETSc on a Scyld machine. Basically I need to have MPI
support
in PETSc but I need to switch from using mpicc to gcc.
why use gcc over mpicc? [mpicc should internally be using gcc so it
should satisfy the gcc requirement]
I was
On Thu, 1 Jun 2006, Satish Balay wrote:
Looks like you are not using PETSc makefiles. This is the recommended
thing to do.
I intended to say 'The recommended method is to use PETSc makefiles'.
Satish
Why use petsc-2.1.6 instead of the latest 2.3.1?
What do you have for:
nm -Ao /home/leep/petsc-2.1.6/lib/libg/linux64/*.a |grep -i petscsetcommonblock
Satish
On Thu, 15 Jun 2006, Pilhwa Lee wrote:
Hi,
I'm in the stage of testing compilation of an example. I'm using PETSc
2.1.6. In the
yeah, the ftp server has been down since morning. Our systems folks are
trying to fix it.
Will let you know once it's up..
Satish
On Thu, 15 Jun 2006, Letian Wang wrote:
Hi,
Today I could not download PETSc from
ftp://ftp.mcs.anl.gov/pub/petsc/petsc-lite-2.3.1.tar.gz, neither could I
The ftp server is back up now. You can retry.
Satish
On Thu, 15 Jun 2006, Satish Balay wrote:
yeah, the ftp server has been down since morning. Our systems folks are
trying to fix it.
Will let you know once it's up..
Satish
On Thu, 15 Jun 2006, Letian Wang wrote:
Hi,
Today I
On Mon, 19 Jun 2006, Laslo Tibor Diosady wrote:
Hi,
I'm trying to create a distribution which can be built both with and without
petsc. The idea behind this is that when only running uni-processor cases
typically petsc will not be needed so that users do not need a copy of petsc
on their
The error says - you are passing in 'F' instead of 'F' to
adjust_boundary()
Satish
On Wed, 19 Jul 2006, Christopher Harden wrote:
Hello,
I'm having trouble passing the Vec and Mat data types to a C function.
Specifically,
In my header for example I'm using,
void assembly( Vec F,
The basic form of a PETSc makefile is as follows:
CFLAGS =
FFLAGS =
CPPFLAGS =
FPPFLAGS =
CLEANFILES =
include ${PETSC_DIR}/bmake/common/base
ex1: ex1.o chkopts
-${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB}
${RM} ex1.o
If you wish to
MatPreallocInitialize appears to be a typo [for
MatPreallocateInitialize]. Will fix this in petsc-dev.
Satish
On Mon, 30 Jan 2006, Barry Smith wrote:
Actually they are not really internal routines; though their
documentation is a bit short. You can see uses of them in
On Fri, 17 Feb 2006, billy at dem.uminho.pt wrote:
I am new to PETSc, so I don't really know how it works.
I added this line to my makefile:
include $(PETSC_DIR)/bmake/common/base
but I had a target named 'clean' defined. Now, when I run my makefile it says:
... ignoring old
PETSc is installed with MPI, LAPACK - and can be installed with PARMETIS
[which has metis.a]. So there is no reason to have a non-petsc
makefile.
And you can't use a different MPI than what PETSc is installed with.
The simplest makefile would be:
--
CFLAGS
$(EXECUTABLE) $(SOURCE_OBJ) $(METIS_LIB) ${PETSC_LIB}
--
If I use that it says:
make: *** No rule to make target `debug/main.o', needed by `all'. Stop.
Billy.
Quoting Satish Balay balay at mcs.anl.gov:
PETSc is installed with MPI
These external packages are not tested on windows.
/home/feap/Library/petsc-2.3.1-p5/externalpackages/ParMetis/cygwin-c-real-opt/include/parmetis.h:20:1:
warning: __cdecl redefined
You could try removing the following code from
On Tue, 21 Feb 2006, abdul-rahman at tu-harburg.de wrote:
Hi all,
There seem to be some changes to the CG type in PETSc 2.3.1-p7.
I can't have -ksp_cg_symmetric anymore - it simply says:
Option left: name:-ksp_cg_symmetric no value
petsc-2.3.0 changelog has the following entry:
On Wed, 22 Feb 2006, abdul-rahman at tu-harburg.de wrote:
Satish,
On Tue, 21 Feb 2006, Satish Balay wrote:
petsc-2.3.0 changelog has the following entry:
-ksp_cg_Hermitian and -ksp_cg_symmetric have been changed
to -ksp_cg_type Hermitian or symmetric
oops, thanks for pointing
Call KSPGetResidualNorm() after KSPSolve()
Satish
On Fri, 24 Feb 2006, billy at dem.uminho.pt wrote:
How do you retrieve the final residual of the iterative solver?
Billy.
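A minimal sketch of that call sequence - ksp, b and x are assumed to be
already set up:

PetscReal rnorm;
KSPSolve(ksp, b, x);
KSPGetResidualNorm(ksp, &rnorm);
PetscPrintf(PETSC_COMM_WORLD, "final residual norm: %g\n", (double)rnorm);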
On Fri, 24 Feb 2006, billy at dem.uminho.pt wrote:
Hello,
VecGetValues - Gets values from certain locations of a vector. Currently can
only get values on the same processor
How can I get a value of the vector belonging to a different processor?
I am trying to learn how to adapt
factors etc..]
Satish
On Fri, 11 Aug 2006, Margot Summer wrote:
Thanks. But can we find out this info inside the code (like the way we get
the number of iterations of ksp)? Also, for many subpc's, e.g. using
bjacobi, -ksp_view does not print out every block.
Margot
Satish Balay
On Tue, 15 Aug 2006, Yaron Kretchmer wrote:
Hi All
Is there a way of determining programmatically which commandline options are
used/not used within petsc?
You can run the code with the additional option '-options_left'
or add the following line of code - after PetscInitialize()
ierr =
an array of unused options or something similar)
Thanks
Yaron
On 8/15/06, Satish Balay balay at mcs.anl.gov wrote:
On Tue, 15 Aug 2006, Yaron Kretchmer wrote:
Hi All
Is there a way of determining programmatically which commandline options
are
used/not used within petsc
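[The code line above is truncated. As a hedged sketch, with the current API
the call would be the following - older releases took no argument:

ierr = PetscOptionsLeft(NULL);CHKERRQ(ierr);  /* prints options set but never queried */
]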
Yes - we limit the e-mail sizes on the mailing list - as we don't want
to flood all list participants with multi-megabyte emails.
Issues that require such interaction should be done at
petsc-maint at mcs.anl.gov not petsc-users at mcs.anl.gov.
Satish
On Tue, 15 Aug 2006, Matt Funk wrote:
Is
On Wed, 16 Aug 2006, Thomas Geenen wrote:
dear petsc users,
is there a way to prevent Petsc during the assembly phase from redistributing
matrix rows over CPUs?
The row distribution is done at matrix creation time - and you can set
the row distribution with MatSetSizes() [or
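[The reply is truncated here. A hedged sketch of fixing the row distribution
explicitly at creation time - m_local/n_local are hypothetical per-process
sizes that must sum to the global dimensions:

Mat A;
MatCreate(PETSC_COMM_WORLD, &A);
MatSetSizes(A, m_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE);
MatSetFromOptions(A);
]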
C:\pkg\cygwin\bin\python2.3.exe: *** unable to remap
C:\pkg\cygwin\bin\cygssl-0.9.7.dll
There are probably 2 ways to recover from this cygwin error.
1. reinstall cygwin from scratch
2. kill all cygwin processes [by rebooting]
- run 'ash' shell from cygwin bin dir [this should be done
This query is best sent to mpich2-maint at mcs.anl.gov
Satish
On Wed, 23 Aug 2006, Changyeol Lee wrote:
Hi, everyone!
I assembled a 4-node cluster consisting of 4 Intel processors. I used Fedora
Core 4, PETSc 2.3.1 and MPICH2-1.0.3.
There is no problem in installation of PETSc and
If you plan to use windows, we recommend mpich1, as this is what PETSc is
usually tested with [as far as installation is concerned].
http://www-unix.mcs.anl.gov/mpi/mpich1/mpich-nt/
Configure will automatically look for it - and use it.
The scalability depends upon the OS, MPI impl and
at mcs.anl.gov
[mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Satish Balay
Sent: Thursday, August 24, 2006 10:54 AM
To: petsc-users at mcs.anl.gov
Subject: Re: Intel Dual core machines
If you plan to use windows, we recommend mpich1, as this is what
PETSc is usually tested
This message is out of date. I'll fix it in petsc-dev.
The way to enable this feature is to rerun configure with the
additional option
--with-fortran-kernels=generic
You can use the additional option PETSC_ARCH with configure so that a new
set of configuration [with a new set of libraries] is created -
On Mon, 28 Aug 2006, Satish Balay wrote:
This message is out of date. I'll fix it in petsc-dev.
looks like this is already cleaned up in petsc-dev.
Satish
The option --with-blas-lapack-dir is useful if you already have
blaslapack libraries compiled and installed. If you've manually
downloaded fblaslapack.tar.gz - then use the option:
--download-f-blas-lapack=/home/petsc/fblaslapack.tar.gz
[with the correct path to the fblaslapack.tar.gz file]
Satish
Use the latest 2.3.1 release of PETSc.
Satish
On Mon, 3 Apr 2006, li pan wrote:
hi
but --download-f-blas-lapack only takes no, yes ..
boolean value.
pan
The option --with-blas-lapack-dir is useful if you
already have
blaslapack libraries compiled and installed. If you've
Please send replies to the list..
If you are not using 2.3.1 - then do the following:
cd petsc-2.3.0
mkdir externalpackages
cd externalpackages
tar -xzf ~/fblaslapack.tar.gz
cd ..
./config/configure.py --download-f-blas-lapack=1
Satish
On Mon, 3 Apr 2006, li pan wrote:
h, I'm not
already responded to this query in the previous thread.
Satish
On Mon, 3 Apr 2006, li pan wrote:
Dear all,
could anybody tell how to install petsc into a pc
without internet connection?
best
pan
That would be --with-precision=single. However PETSc currently doesn't
compile in this mode.
Satish
On Tue, 11 Apr 2006, abdul-rahman at tu-harburg.de wrote:
Dear all,
if I were to compute in single precision complex, should I configure
--with-precision=single or matsingle, or both?
What problems?
Send us the logs at petsc-maint at mcs.anl.gov - and we can take
a look at them.
Satish
On Thu, 13 Apr 2006, abdul-rahman at tu-harburg.de wrote:
Dear all,
I'd like to know if anyone has successfully built petsc 2.3.1-p12 on
Fedora Core 4 with the Intel compilers package
On Wed, 10 Jan 2007, Ben Tay wrote:
Hi,
I've some problems compiling PETSC on one of my school's servers.
send us configure.log at petsc-maint at mcs.anl.gov and we'll take a look
at the problem. It's probably easier to get this working than trying
the alternatives below..
I've already
Can you reproduce this with a PETSc example?
make test
Satish
On Wed, 10 Jan 2007, Paul T. Bauman wrote:
Hello,
Has anyone had any experience using PETSc and gfortran together? My code
compiles, but when I run it, it crashes with the following error (mac Tiger
10.4.8, power pc, latest
at the same step.. [i.e
using .o and .f90 in the same command]. Suggest creating .o files before
linking..
i.e
${FLINKER} -o a.out global.o main.o ${PETSC_LIB}
Satish
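A sketch of the corresponding two-step makefile rules - the FC/FFLAGS usage
here is an assumption; only the link line above comes from this thread:

global.o: global.f90
	${FC} -c ${FFLAGS} global.f90
main.o: main.f90
	${FC} -c ${FFLAGS} main.f90
a.out: global.o main.o
	-${FLINKER} -o a.out global.o main.o ${PETSC_LIB}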
On Thu, 11 Jan 2007, Ben Tay wrote:
Yes it ran successfully. I've attached the output.
thank you very much.
On 1/11/07, Satish
On Thu, 11 Jan 2007, Ben Tay wrote:
Hi,
regarding: -lirc -lgcc_s -lirc_s -ldl -o a.out global.o main.f90
is it the correct order? my make file is in that order.
No, it's the wrong order. The correct order was indicated in the previous e-mail.
${FLINKER} -o a.out global.o main.o
We don't have prebuilt binaries..
Suggest configuring with:
config/configure.py --with-cc='win32fe cl' --with-cxx='win32fe cl' --with-fc=0
--with-clanguage=cxx --download-c-blas-lapack=1
If you encounter problems - send us configure.log at
petsc-maint at mcs.anl.gov
Satish
On Fri, 12 Jan
You've built PETSc with the following.
--with-mpi-dir=/opt/mpich/intel/ --with-x=0
However you are comparing simple MPI test with a different MPI [from
/opt/mpich/myrinet/intel]
/usr/lsf6/bin/mpijob_gm /opt/mpich/myrinet/intel/bin/mpirun a.out
You are using different MPI implementations for
On Wed, 24 Jan 2007, Aron Ahmadia wrote:
Is there a good script lying around somewhere for setting the X11
connections up from the master/interactive node? This seems like it could
be a huge pain if you've got a bunch of worker nodes sitting in a private
network behind the master in classic
Ideally you'll have to benchmark the correct code to determine the
differences in performance between different architectures.
We have some 'sequential' performance numbers for a sample test case
in src/benchmarks/results/performance_cfd_2_10. Also anyone can run
this benchmark on the hardware
On Fri, 26 Jan 2007, Ben Tay wrote:
My school's server only has MKL installed on some nodes. Hence I
was told that if I need to use it, I'll have to link them
statically. So how does PETSc use MKL?
I guess the question is - how do you work around the problem [of
badly installed MKL] when
On Tue, 28 Nov 2006, Satish Balay wrote:
Actually we have the f90 interface working with Compaqf90 [6.0 compiler]. Use
the additional configure option
--with-f90-interface=win32
There was a bug which prevented this from working. You can get the
latest tarfile [p7] which has the fix
On Sun, 3 Dec 2006, Matthew Knepley wrote:
1. Are PetscScalar the same as real nos.? I am writing in double precision
or real(8). So are these 2 interchangeable? (same as PetscInt and integer).
Yes.
You can check these defines in include/finclude/petscdef.h
PetscInt - integer*4
nope - PETSc uses MPI for parallelism internally - so using the
additional compiler directive is not required. It might work - but it
is not appropriate. [we never tested this mode]
If you use this option - you'll need to understand what exactly it
means [in this context of MPI etc.. ] - and how
On Fri, 22 Dec 2006, Matthew Knepley wrote:
1. Is Intel MKL much faster than the downloaded BLAS/LAPACK? Or is it true
only for really large problem? Is ATLAS a good alternative too?
This depends heavily on the architecture and on the BLAS operations
used. I don't think it is much faster, but
The idea is to use a PETSc example makefile - and modify it
appropriately.
And with fortran codes - we require preprocessing [i.e source files
that call PETSc routines should be .F]
A minimal PETSc makefile is as follows:
CFLAGS =
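[The example is cut off here. Patterned on the C makefile shown earlier in
this digest, a minimal Fortran variant might look like the following - ex1f
is a hypothetical target:

CFLAGS =
FFLAGS =
CPPFLAGS =
FPPFLAGS =
CLEANFILES =
include ${PETSC_DIR}/bmake/common/base

ex1f: ex1f.o chkopts
	-${FLINKER} -o ex1f ex1f.o ${PETSC_KSP_LIB}
	${RM} ex1f.o
]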
)
if (myrank .eq. 0) then
print *, 'Hello World'
endif
call PetscFinalize(ierr)
stop
On Mar 2, 2007, at 11:38 AM, Satish Balay wrote:
On Fri, 2 Mar 2007, P. Aaron Lott wrote:
Thanks for this information. I think I set up things almost right
if you have installation problems - send us configure.log at petsc-maint at
mcs.anl.gov
Satish
On Sat, 3 Mar 2007, Ben Tay wrote:
Hi,
I tried to compile PETSc with compaq visual fortran in cygwin. My command is
$ ./config/configure.py --with-cc='win32fe cl' --with-fc='win32fe f90'
unsteady_ex compiled fine
before using petsc and the preprocessor, so I really don't know what the
problem could be.
Any ideas?
-Aaron
On Mar 2, 2007, at 12:16 PM, Satish Balay wrote:
/usr/local/mpich-1.2.5.2/bin/mpif90 -c -I. -g __unsteady_ex.F -o
unsteady_ex.o
to rearrange things in the makefile command
in order to do this or not. Do you have any ideas?
Thanks,
-Aaron
$(CMD) : $(SOBJS)
-${FLINKER} $(FLAGS) -o $(EXENAME) $(SOBJS)
On Mar 3, 2007, at 12:07 PM, Satish Balay wrote:
I can't spot any obvious issues here
On Fri, 16 Mar 2007, Thomas Geenen wrote:
OK, maybe I did not explain it clearly enough.
I already built petsc with support for hypre and mumps the more or
less default way. Yes, you have to do a lot of tweaking, but that
went OK.
The problem was that in this version Mumps would hang in
On Thu, 29 Mar 2007, Knut Erik Teigen wrote:
On Thu, 2007-03-29 at 03:42 -0400, Diego Nehab wrote:
So far, using only one process, everything is simple and
works (it took me longer to compile and test MPI and Petsc
than to write code that solves my problem :)).
When I move to
There are 2 aspects to performance.
- MPI performance [while message passing]
- sequential performance for the numerical stuff.
So it could be that the SMP box has better MPI performance. This can
be verified with -log_summary from both the runs [and looking at
VecScatter times]
However with
On Fri, 2 Feb 2007, Satish Balay wrote:
However with the sequential numerical codes - it primarily depends
upon the bandwidth between the CPU and the memory. On the SMP box -
depending upon how the memory subsystem is designed - the effective
memory bandwidth per cpu could be a small fraction
On Sat, 3 Feb 2007, Shi Jin wrote:
I do see that the cluster run is faster than the shared-memory
case. However, I am not sure how I can tell the reason for this
behavior is due to the memory subsystem. I don't know what evidence
in the log to look for.
There were too many linewraps in the
A couple of comments:
- with the dual core opteron - the memory bandwidth per core is now
reduced by half - so the performance suffers. However memory
bandwidth across CPUs is scalable. [6.4 GB/s per node or 3.2 GB/s
per core]
- Current generation Intel Core 2 duo appears to claim having
, Satish Balay wrote:
A couple of comments:
- with the dual core opteron - the memory bandwidth per core is now
reduced by half - so the performance suffers. However memory
bandwidth across CPUs is scalable. [6.4 GB/s per node or 3.2 GB/s
per core]
- Current generation Intel Core 2 duo
if there is a speed issue if one has to copy
data from the RAM of one CPU to another.
Thanks.
Shi
--- Satish Balay balay at mcs.anl.gov wrote:
A couple of comments:
- with the dual core opteron - the memory bandwidth
per core is now
reduced by half - so the performance suffers.
However
Just looking at 8 proc run [diffusion stage] we have:
MatMult: 79 sec
MatMultAdd : 2 sec
VecScatterBegin: 17 sec
VecScatterEnd : 51 sec
So basically the communication in MatMult/Add is represented by
VecScatters. Here out of 81 sec total - 68 seconds are used for
communication
Can you send the output from the following runs. You can do this with
src/ksp/ksp/examples/tutorials/ex2.c - to keep things simple.
petscmpirun -n 2 taskset -c 0,2 ./ex2 -log_summary | egrep '(MPI_Send|MPI_Barrier)'
petscmpirun -n 2 taskset -c 0,4 ./ex2 -log_summary | egrep '(MPI_Send|MPI_Barrier)'
- If you have build issues [involving sending configure.log] please use
petsc-maint at mcs.anl.gov address [not the mailing list]
- Looks like you were using the following configure options:
--with-cc=/scratch/g0306332/intel/cc/bin/icc
--with-fc=/lsftmp/g0306332/inter/fc/bin/ifort
Average time for MPI_Barrier(): 1.35899e-05
Average time for zero size MPI_Send(): 6.73532e-06
They all seem quite fast.
Shi
--- Shi Jin jinzishuai at yahoo.com wrote:
Yes. The results follow.
--- Satish Balay balay at mcs.anl.gov wrote:
Can you send the output from the following runs
On Mon, 12 Feb 2007, Ben Tay wrote:
Hi Satish,
I've installed superlu. I issued the command ./a.out -mat_type
superlu -ksp_type preonly -pc_type lu and it just hanged there.
Did you install superlu separately? Suggest installing with the PETSc
configure option '--download-superlu=1'.
Is it
On Fri, 16 Feb 2007, Shi Jin wrote:
I actually used MatGetLocalSize(A,m,n) in the code. They give me
m=4,n=3, as expected. I can also specify m=4,n=3 in
MatCreateMPIAIJ() which is exactly identical to the previous code.
If I specify anything else, I get error saying that they don't agree
On Fri, 16 Feb 2007, Shi Jin wrote:
We are storing the diagonal block and off-diagonal block
separately. However both blocks are on the same processor. i.e
each processor stores m*N values - in 2 submatrices m*n,
m*(N-n). To understand this better - check manpage for
MatCreateMPIAIJ().
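A hedged sketch of the call being described - the preallocation counts
d_nz/o_nz below are made-up estimates, and later releases rename this
routine MatCreateAIJ():

Mat A;
/* each process owns an m x N slice, stored as an m x n diagonal block
   plus an m x (N-n) off-diagonal block */
MatCreateMPIAIJ(PETSC_COMM_WORLD, m, n, M, N,
                5, PETSC_NULL,   /* d_nz: nonzeros per row, diagonal block */
                2, PETSC_NULL,   /* o_nz: nonzeros per row, off-diagonal block */
                &A);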
On Thu, 22 Feb 2007, Ben Tay wrote:
Hi,
I have been using PETSc with visual fortran/intel mkl/mpich installed. This
has the same configuration as the configuration file .dsw supplied by PETSc.
However, now using another of my school's computer, MKL and MPICH are not
installed.
Is there
Maximum memory PetscMalloc()ed 29246912 maximum size of entire process 0
The choice of wording here is a bit misleading. PETSc is using
getrusage(ru_maxrss) - which is resident set size. [so top should show
similar numbers for RSS]
This might include both code segment and data segments - and
On Wed, 28 Feb 2007, Satish Balay wrote:
Maximum memory PetscMalloc()ed 29246912 maximum size of entire process 0
The choice of wording here is a bit misleading. PETSc is using
getrusage(ru_maxrss) - which is resident set size. [so top should show
similar numbers for RSS]
Oops - my
On Wed, 28 Feb 2007, Shi Jin wrote:
Thank you very much.
This is very helpful.
So the mismatch in size only comes from MatLoad()?
I am actually not a big fan of loading the matrices
either. I used it just to do some test. There is no
need to change the implementation for me at all.
So
i wonder what can be done.
Don't use -static. Lots of system libraries can't be used in -static
mode - as you've discovered. So you should:
- make sure the remote machine has all the basic system libraries [as
.so available].
- build PETSc without shared-library options.
- Now compile an
Also - why can't you just install PETSc on this remote machine?
Satish
On Tue, 8 May 2007, Satish Balay wrote:
i wonder what can be done.
Don't use -static. Lots of system libraries can't be used in -static
mode - as you've discovered. So you should: