In message from Bill Broadley b...@cse.ucdavis.edu (Thu, 13 Aug 2009 17:09:24 -0700):
Tom Elken wrote:
> To add some details to what Christian says, the HPC Challenge version of STREAM uses dynamic arrays and is hard to optimize. I don't know what's best with current compiler versions, but you
In message from Bill Broadley b...@cse.ucdavis.edu (Thu, 13 Aug 2009 17:09:24 -0700):
Do I understand correctly that these results are for 4 cores and 4 OpenMP threads?
And what is the DDR3 RAM: DDR3/1066?
Mikhail
I tried open64-4.2.2 with those flags on a Nehalem single socket:
$ opencc
Mikhail Kuzminsky wrote:
> In message from Bill Broadley b...@cse.ucdavis.edu (Thu, 13 Aug 2009 17:09:24 -0700):
> Do I understand correctly that these results are for 4 cores and 4 OpenMP threads? And what is the DDR3 RAM: DDR3/1066?

4 cores and 8 OpenMP threads. 4 threads is slightly faster:
richard.wa...@comcast.net wrote:
> I have a head node that I am trying to get WOL set up on. It is a SuperMicro motherboard (X8DTi-F) with two built-in interfaces (eth0, eth1). I am told by SuperMicro support that both interfaces support WOL fully, but when I probe them with ethtool only
In message from Tom Elken tom.el...@qlogic.com (Fri, 14 Aug 2009 13:57:53 -0700):
> On Behalf Of Bill Broadley
>> I put DDR3-1333 in the machine, but the BIOS seems to want to run them at 1066,
> How many DIMMs per memory channel do you have?
> My understanding (which may be a few months old) is that if you have more than one DIMM per memory channel, DDR3-1333 DIMMs will run at
In message from Bill Broadley b...@cse.ucdavis.edu (Fri, 14 Aug 2009 16:13:21 -0700):
> Mikhail Kuzminsky wrote:
> Your results look excellent, so I wouldn't be surprised if they are running at 1333.

I have 12-18 GB/s on 4 threads of stream/ifort w/DDR3-1066 on a dual E5520 server. But it works
Hi all,
To run my parallel code, I first do the grid partitioning on the command line, and then hard-code the paths of the METIS partition files for the parallel run. This becomes very cumbersome when I need to run the code with different grids and with different -np values. Please tell me how to call METIS