ghtly/v1.3/) to verify if this has been fixed
--Nysal
On Thu, 2009-02-19 at 16:09 -0600, Jeff Pummill wrote:
I built a fresh version of LAMMPS v29Jan09 against Open MPI 1.3 which in
turn was built with GNU compilers v4.2.4 on an Ubuntu 8.04 x86_64 box.
This Open MPI build was able to generate usable binaries such as XHPL
and NPB, but the LAMMPS binary it generated was not usable.
I tried it with a
On Mar 7, 2008, at 7:44 AM, Jeff Pummill wrote:
> Just a quick question...
>
> Does Open MPI 1.2.5 support most or all of the MPI-2 directives and
> features?
>
> I have a user who specified MVAPICH2 as he needs some features like
> extra task spawning, but I am trying to standardize on Open MPI compiled
> against InfiniBand for my primary software
Is it possible that this could be a problem with /usr/lib64 as opposed
to /usr/lib?
Just a thought...
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Hsieh, Pei-Ying (MED US) wrote:
Hi, Edgar and Galen,
Thanks for the quick reply!
What puzzles me is that, on
I'm guessing he means the ASC FLASH code which simulates star explosions...
Brock?
Jeff F. Pummill
University of Arkansas
Doug Reeder wrote:
Brock,
Do you mean flash memory, like a USB memory stick? What kind of file
system is on the memory? Is there some filesystem limit you are
Krishna,
When you log in to the remote system, use ssh -X or ssh -Y, which will
forward the xterm back through the ssh connection.
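As a sketch, the forwarding can be requested per-session or pinned in the client config (the hostname below is a placeholder):

```shell
# Trusted X11 forwarding for one session (remote.example.edu is hypothetical)
ssh -Y user@remote.example.edu

# Or make it permanent in ~/.ssh/config for that host:
#   Host remote.example.edu
#       ForwardX11 yes
#       ForwardX11Trusted yes

# On the remote side, DISPLAY should be set before launching xterm:
echo $DISPLAY    # typically something like localhost:10.0
xterm &
```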
Jeff Pummill
University of Arkansas
Krishna Chaitanya wrote:
Hi,
I have been tracing the interactions between the PERUSE
and MPI library, on one
-np 4 --byslot ./cg.C.4
It appears that this does avoid oversubscribing any particular core, as I
am not exceeding my core count by running just the two jobs requiring 4
cores each.
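A minimal sketch of launching the two 4-core jobs described above, assuming the NPB cg.C.4 binary and Open MPI 1.2-era options:

```shell
# --byslot fills all slots on a node before moving to the next one,
# so each 4-process job stays packed onto available cores
mpirun -np 4 --byslot ./cg.C.4 &
mpirun -np 4 --byslot ./cg.C.4 &
wait    # wait for both background jobs to finish
```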
Thanks,
Jeff Pummill
George Bosilca wrote:
The cleaner way to define such an environment is by using
care of this detail for me?
Thanks!
Jeff Pummill
SLURM was really easy to build and install, plus it's a project of LLNL
and I love stuff that the Nat'l Labs architect.
The SLURM message board is also very active and quick to respond to
questions and problems.
Jeff F. Pummill
Bill Johnstone wrote:
Hello All.
We are starting to need
Jeff,
Count us in at the UofA. My initial impressions of Open MPI are very
good and I would be open to contributing to this effort as time allows.
Thanks!
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
to the command line submission
to ensure that it is using the IB network instead of TCP? Or possibly
disable the Gig-E with ^tcp to see if it still runs successfully?
I just want to be sure that Open MPI is actually USING the IB network
and mvapi.
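One way to check this, sketched under the assumption of an Open MPI 1.2-era build with the mvapi BTL (the application name is a placeholder):

```shell
# Allow only the InfiniBand (mvapi), shared-memory, and self transports;
# the job then fails outright rather than silently falling back to TCP
mpirun --mca btl mvapi,sm,self -np 4 ./my_mpi_app

# Alternatively, exclude just the TCP BTL:
mpirun --mca btl ^tcp -np 4 ./my_mpi_app
```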
Thanks!
Jeff Pummill
years old.
Would it be reasonable to expect Open MPI 1.2.3 to build and run in such
an environment?
Thanks!
Jeff Pummill
University of Arkansas
the long-term plan to get "srun -n X my_mpi_application"
model to work; it just hasn't bubbled up high enough in the priority
stack yet... :-\
On Jun 20, 2007, at 1:59 PM, Jeff Pummill wrote:
Just started working with the Open MPI / SLURM combo this morning. I can
successfully launch this job fro
Thanks guys!
Setting F77=gfortran did the trick.
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.edu
"A supercomputer is a device for turning compute-bound
problems into I/O-bound problems." -Seymour Cray
Greetings all,
I downloaded and configured v1.2.2 this morning on an Opteron cluster
using the following configure directives...
./configure --prefix=/share/apps CC=gcc CXX=g++ F77=g77 FC=gfortran
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64
Compilation seemed to go OK and there IS
your timings, Jeff, and what processor do
you exactly have?
Mine is a Pentium D at 2.8GHz.
Victor
--- Jeff Pummill <jpum...@uark.edu> wrote:
Victor,
Build the FT benchmark and build it as a class B
problem. This will run
in the 1-2 minute
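A sketch of building and running FT as a class B problem, assuming an NPB 3.x MPI source tree and a 4-rank build:

```shell
# From the top of the NPB MPI distribution (directory name is an assumption)
cd NPB3.2-MPI
make ft CLASS=B NPROCS=4

# The suite drops the binary in bin/, named after kernel, class, and ranks
mpirun -np 4 bin/ft.B.4
```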
--- Jeff Pummill <jpum...@uark.edu> wrote:
Perfect! Thanks Jeff!
The NAS Parallel Benchmark on a dual core AMD
machine now returns this...
[jpummil@localhost bin]$ mpirun -np 1 cg.A.1
NAS Parallel Benchmarks 3.2 -- CG Benchmark
CG Benchmark Completed.
.
Victor
--- Jeff Pummill <jpum...@uark.edu> wrote:
Victor,
Just on a hunch, look in your BIOS to see if
Hyperthreading is turned
on. If so, turn it off. We have seen some unusual
behavior on some of
our machines unless this is disabled.
I am interested in your progress, as I have just begun working with
Open MPI as well. I have used MPICH for