problem compiling PETSC on MacOS Leopard

2007-11-18 Thread Bernard Knaepen

problem compiling PETSC on MacOS Leopard

2007-11-18 Thread Barry Smith
Please direct these problems to petsc-maint instead of petsc-users. From the log file: Checking for program /Users/bknaepen/Unix/mpich2-106/bin/mpif90...found; Defined make macro FC to mpif90; Pushing language FC; sh: mpif90 -c -o conftest.o conftest.F; Executing: mpif90

problem compiling PETSC on MacOS Leopard

2007-11-18 Thread Satish Balay
Executing: mpif90 -c -o conftest.o conftest.F; sh: Possible ERROR while running compiler: ret = 256, error message = {ifort: error #10106: Fatal error in /opt/intel/fc/10.0.020/bin/fpp, terminated by segmentation violation}. ifort is giving a SEGV - hence configure failed. There must be some

Parallel ISCreateGeneral()

2007-11-18 Thread Tim Stitt
Hi all, Just wanted to know if the length of the index set for a call to ISCreateGeneral() in a parallel code is a global length, or the length of the local elements on each process? Thanks, Tim. -- Dr. Timothy Stitt timothy_dot_stitt_at_ichec.ie HPC Application Consultant - ICHEC

Parallel ISCreateGeneral()

2007-11-18 Thread Matthew Knepley
IS are not really parallel, so all the lengths, etc. only refer to local things. Matt On Nov 18, 2007 11:22 AM, Tim Stitt timothy.stitt at ichec.ie wrote: Hi all, Just wanted to know if the length of the index set for a call to ISCreateGeneral() in a parallel code is a global length,
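Matt's point means each rank passes only the indices it owns, so the length argument to ISCreateGeneral() is the local count. A minimal sketch (the PetscCopyMode argument and PetscCall macros follow recent PETSc releases and did not exist in the 2007-era API; the three-rows-per-rank layout is a hypothetical example):

```c
#include <petscis.h>

int main(int argc, char **argv)
{
    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    PetscMPIInt rank;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

    /* Hypothetical layout: every rank owns 3 consecutive rows.
     * nlocal is the LOCAL length, not the global problem size. */
    PetscInt nlocal = 3, idx[3];
    for (PetscInt i = 0; i < nlocal; i++) idx[i] = rank * nlocal + i;

    IS is;
    PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, nlocal, idx,
                              PETSC_COPY_VALUES, &is));
    PetscCall(ISView(is, PETSC_VIEWER_STDOUT_WORLD));
    PetscCall(ISDestroy(&is));
    PetscCall(PetscFinalize());
    return 0;
}
```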

Parallel ISCreateGeneral()

2007-11-18 Thread Matthew Knepley
On Nov 18, 2007 11:34 AM, Tim Stitt timothy.stitt at ichec.ie wrote: OK... so I should be using the aggregate length returned by the MatGetOwnershipRange() routine? If you are using it to permute a Mat, yes. Matt Thanks Matt for your help. Matthew Knepley wrote: IS are not really parallel,

Parallel ISCreateGeneral()

2007-11-18 Thread Tim Stitt
Matt, It is in the setup for the MatLUFactorSymbolic() and MatLUFactorNumeric() calls, which require index sets. I have distributed my rows across the processes and am now just a bit confused about the arguments to the ISCreateGeneral() routine to set up the IS sets used by the Factor routines in

Parallel ISCreateGeneral()

2007-11-18 Thread Matthew Knepley
On Nov 18, 2007 11:52 AM, Tim Stitt timothy.stitt at ichec.ie wrote: Matt, It is in the setup for the MatLUFactorSymbolic() and MatLUFactorNumeric() calls, which require index sets. I have distributed my rows across the processes and am now just a bit confused about the arguments to the

Parallel ISCreateGeneral()

2007-11-18 Thread Tim Stitt
Oh... ok, I am now officially confused. I have developed a serial code for getting the first k rows of an inverted sparse matrix... thanks to PETSc users'/developers' help this past week. In that code I was calling MatLUFactorSymbolic() and MatLUFactorNumeric() to factor the sparse matrix and then
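In practice the row/column ISs the factor routines expect are usually produced by MatGetOrdering() rather than assembled by hand with ISCreateGeneral(), which sidesteps the length question entirely. A sketch of the sequential factor-and-solve pattern the thread describes (signatures follow recent PETSc releases, including the MatGetFactor() step, and differ from the 2007-era API):

```c
#include <petscmat.h>

/* Sketch: factor a sequential AIJ matrix A and solve A x = b.
 * MatGetOrdering supplies the row/column permutation ISs that
 * MatLUFactorSymbolic expects; their lengths match the local
 * matrix size automatically. */
PetscErrorCode lu_solve(Mat A, Vec b, Vec x)
{
    IS            rowperm, colperm;
    Mat           F;
    MatFactorInfo info;

    PetscFunctionBeginUser;
    PetscCall(MatGetOrdering(A, MATORDERINGND, &rowperm, &colperm));
    PetscCall(MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &F));
    PetscCall(MatFactorInfoInitialize(&info));
    PetscCall(MatLUFactorSymbolic(F, A, rowperm, colperm, &info));
    PetscCall(MatLUFactorNumeric(F, A, &info));
    PetscCall(MatSolve(F, b, x));
    PetscCall(ISDestroy(&rowperm));
    PetscCall(ISDestroy(&colperm));
    PetscCall(MatDestroy(&F));
    PetscFunctionReturn(PETSC_SUCCESS);
}
```

To get the first k rows of the inverse, one would call MatSolve() k times with b set to successive columns of the identity.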

Dual core performance estimate

2007-11-18 Thread Ben Tay

Dual core performance estimate

2007-11-18 Thread Aron Ahmadia
Hi Ben, You're asking a question that is very specific to the program you're running. I think the general consensus on this list has been that for the more common uses of PETSc, getting dual-cores will not speed up your performance as much as dual-processors. For OS X, dual-cores are pretty

Dual core performance estimate

2007-11-18 Thread Gideon Simpson

Dual core performance estimate

2007-11-18 Thread Barry Smith
Gideon, On Sun, 18 Nov 2007, Gideon Simpson wrote: I asked the original question, and I have a follow up. Like it or not, multi-core CPUs have been thrust upon us by the manufacturers and many of us are more likely to have access to a shared memory, multi core/multi processor