Hi,

The link to mpich1 says this:

  MPICH.NT is no longer being developed. Please use MPICH2. MPICH.NT and
  MPICH2 can co-exist on the same machine, so it is not necessary to
  uninstall MPICH to install MPICH2. But applications must be re-compiled
  with the MPICH2 header files and libraries.

So, is it ok if I use MPICH2?

Thanks,
Julian.

> -----Original Message-----
> From: owner-petsc-users at mcs.anl.gov
> [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Satish Balay
> Sent: Thursday, August 24, 2006 10:54 AM
> To: petsc-users at mcs.anl.gov
> Subject: Re: Intel Dual core machines
>
> If you plan to use Windows, I recommend mpich1, as this is what
> PETSc is usually tested with [as far as installation is concerned].
>
> http://www-unix.mcs.anl.gov/mpi/mpich1/mpich-nt/
>
> Configure will automatically look for it - and use it.
>
> The scalability depends upon the OS, the MPI implementation, and the
> memory bandwidth numbers for this hardware. I don't know enough
> about the OS & MPI part - but the memory bandwidth part is easy
> to check based on the hardware you have. [The new Core Duo
> chips appear to have high memory bandwidth numbers - so I
> think it should scale well.]
>
> But you should be concerned about this only for performance measurements
> - not during development. [You can install MPI on a
> single-cpu machine and use PETSc on it - for development.]
>
> Satish
>
> On Thu, 24 Aug 2006, Julian wrote:
>
> > Hello,
> >
> > So far, I have been using PETSc on a single-processor Windows machine.
> > Now, I am planning on using it on an Intel Dual Core machine. Before I
> > start running the installation scripts, I wanted to confirm whether I
> > can use both processors on this new machine, just like how you would
> > use multiple processors on a supercomputer.
> > If yes, is there anything special that I need to do when
> > installing PETSc?
> > I'm guessing I would have to install some MPI software... Which one do
> > you recommend for Windows machines (I saw more than one Windows MPI
> > package on the PETSc website)?
> >
> > Thanks,
> > Julian.
> >
> >
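
P.S. To make sure I understand the "re-compile against MPICH2" part, this is
the kind of minimal test I was planning to build with the MPICH2 headers and
libraries and then launch on both cores. It is only a sketch; the file name
and the mpiexec invocation are my guesses from the MPICH2 docs, not something
PETSc requires:

    /* hello_mpi.c - each process reports its rank and the total
       number of processes, so "mpiexec -n 2 hello_mpi.exe" should
       print two lines if both cores are being used. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

If that runs with two ranks, I assume a PETSc application rebuilt the same
way (MPICH2 headers and libraries) would also see both processors.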
