On 14:24 Mon 12 Dec , Dave Love wrote:
> Andreas Schäfer <gent...@gmx.de> writes:
>
> >> Yes, as root, and there are N different systems to at least provide
> >> unprivileged read access on HPC systems, but that's a bit different, I
> >> think.
>
daemon to provide limited RW access to MSRs for
applications. I wouldn't be surprised if support for this was added to
LIKWID by RRZE.
Cheers
-Andreas
[1] https://github.com/RRZE-HPC/likwid
--
==========
Andreas Schäfer
HPC and Grid Computing
Department of Computer Science 3
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
+49 9131 85-27910
PGP/GPG key via keyserver
http://www.libg
On 14:26 Wed 26 Mar , Ross Boylan wrote:
> [Main part is at the bottom]
> On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> > If you have a complex workflow with varying computational loads, then
> > you might want to take a look at runtime systems which allow
Heya,
On 19:21 Wed 26 Mar , Gus Correa wrote:
> On 03/26/2014 05:26 PM, Ross Boylan wrote:
> > [Main part is at the bottom]
> > On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> >> On 09:08 Wed 26 Mar , Ross Boylan wrote:
> >>> Second,
Ross-
On 09:08 Wed 26 Mar , Ross Boylan wrote:
> On Wed, 2014-03-26 at 10:27 +, Jeff Squyres (jsquyres) wrote:
> > On Mar 26, 2014, at 1:31 AM, Andreas Schäfer <gent...@gmx.de> wrote:
> >
> > >> Even when "idle", MPI processes use all the CPU.
more
> hardware resources to the remaining HT that is left in each core
> (e.g., deeper queues).
Oh, I didn't know that. That's interesting! Do you have any links with
in-depth info on that?
Thanks!
-Andreas
("hyperthreading").
Cheers
-Andreas
__
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
integrate the client-side
into our library to allow users to keep their simulations running
despite node failures.
Thanks!
-Andreas
ability to dynamically
connect/disconnect nodes in a robust way, then we could build
fault-resilient apps on top of that.
Best
-Andreas
you advice with so little information. Otherwise I
might just say "use MPI and you're done".
In any case, this is probably not the right mailing list for these
questions, as this list is specifically about Open MPI, not MPI in
general.
Best
-Andreas
Hi,
On 00:05 Fri 24 Aug , Reuti wrote:
> Am 23.08.2012 um 23:28 schrieb Andreas Schäfer:
>
> > ...
> > checking for style of include used by make... GNU
> > checking how to create a ustar tar archive...
> > ATTENTION! pax archive volume change required.
Archive name >
=== 8< *snip* ==
On 17:55 Fri 01 Jun , Rayson Ho wrote:
> We posted an MPI quiz but so far no one on the Grid Engine list has
> the answer that Jeff was expecting:
>
> http://blogs.scalablelogic.com/
That link gives me an "error 503"?
datatype variable
is merely a handle; Open MPI has an internal data store for each
user-defined datatype. Same for requests, AFAIK.
Best
-Andreas
you using and how would you specify the GPU to use
sans MPI?
Best
-Andreas
s that he would like to use both: slots AND weights.
and 4x QDR
(active_width 4X, active_speed 10.0 Gbps), so I /should/ be able to
get about twice the throughput of what I'm currently seeing.
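That expectation can be sanity-checked with the usual InfiniBand link arithmetic. A short sketch (assuming the reported port parameters, and 8b/10b line encoding, which SDR/DDR/QDR links use):

```python
# Payload bandwidth estimate for a 4x QDR InfiniBand link.
# Assumptions: active_width 4X, active_speed 10.0 Gbps per lane (QDR),
# 8b/10b line encoding (8 data bits carried per 10 line bits).
lanes = 4
signal_gbps_per_lane = 10.0
encoding_efficiency = 8 / 10

data_rate_gbps = lanes * signal_gbps_per_lane * encoding_efficiency
data_rate_gbs = data_rate_gbps / 8  # convert bits/s to bytes/s

print(data_rate_gbps)  # 32.0 Gbit/s of payload bandwidth
print(data_rate_gbs)   # 4.0 GB/s, before any protocol overhead
```

Benchmarks won't reach the full 4 GB/s because of transport-level headers and MTU effects, but seeing only half of it suggests the link is training at a lower width or speed.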
ping-pong for >256K.
I'll try to find an Intel system to repeat the tests. Maybe it's AMD's
different memory subsystem/cache architecture which is slowing Open
MPI? Or are my systems just badly configured?
Best
-Andreas
--bind-to-core or --bind-to-socket on the cmd line?
> Otherwise, the processes are running unbound, which makes a significant
> difference to performance.
>
>
> On Jul 9, 2010, at 3:15 AM, Andreas Schäfer wrote:
>
> > Maybe I should add that for tests I ran the benchmarks
Maybe I should add that for tests I ran the benchmarks with two MPI
processes: for InfiniBand one process per node and for shared memory
both processes were located on one node.
10   16[  ] ==( 4X 2.5 Gbps Down / Polling )==>    [  ] "" ( )
10   17[  ] ==( 4X 2.5 Gbps Active / LinkUp )==> 52[  ] "faui36a HCA-1" ( )
10   18[  ] ==( 4X 2.5 Gbps Down / Polling )==>    [  ] "" ( )
tion time, would
it be an option to only ship the source necessary to build the
flex.exe? One could then add an additional build stage during which
flex.exe is compiled, just before it is required.
Just my $0.02
-Andreas
the vector). Since MPI datatypes can only describe objects with a
fixed size (and layout), you can't define an MPI_Datatype for
this. I'd suggest using Boost.MPI in this case
(http://www.boost.org/doc/libs/1_35_0/doc/html/mpi.html).
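A reasonable mental model for what a serialization layer like Boost.MPI does for variable-size containers: pack the container into a length-prefixed byte buffer and ship the buffer as plain bytes, then rebuild the container on the receiving side. A minimal sketch of that idea (the format here is illustrative, not Boost.MPI's actual wire format):

```python
import struct

def pack_vector(values):
    # Serialize a variable-length list of doubles into a buffer:
    # an 8-byte element count, followed by the raw doubles.
    return struct.pack("<Q", len(values)) + struct.pack(f"<{len(values)}d", *values)

def unpack_vector(buf):
    # The leading count tells the receiver how many doubles follow,
    # which is exactly the information a fixed-layout MPI_Datatype lacks.
    (n,) = struct.unpack_from("<Q", buf, 0)
    return list(struct.unpack_from(f"<{n}d", buf, 8))

v = [1.0, 2.5, -3.25]
assert unpack_vector(pack_vector(v)) == v
```

The cost of this convenience is an extra copy into and out of the intermediate buffer, which is why sending a raw pointer plus an element count can be faster when the size is known.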
Cheers
-Andreas
--
====
Andreas Schäfer
Cluster and Metacomputing Working Group
Friedrich-Schiller-Universität Jena, Germany
0049/3641-9-46376
PGP/GPG key via keyserver
I'm a bright... http://www.the-brights.net
(\___/)
(+'.'+)
would not need to delete, just add in front of MPICH.
> Would you please help me with that ?
I utterly hope I just did.
Most sincerely yours ;-)
-Andreas
I would suggest MPEG output (MPEG-4, or MPEG-2 if you really
must). But that's just what I prefer.
Cheers
-Andi
commands
directly into your own shell.
> - ...other [low-budget] suggestions?
Maybe a tad higher audio bitrate. And some people don't like the
.mov format, but that isn't really important.
Thanks!
-Andreas
On 12:28 Fri 30 May , Lee Amy wrote:
> 2008/5/29 Andreas Schäfer <gent...@gmx.de>:
> Thank you very much. If I do a shorter job it seems to run well. And the job
> doesn't repeatedly fail at the same time, but it will fail with these error
> messages. Anyway, I'm not using a sch
s like your application is
terminated by an external instance, maybe because your job exceeded
the wall clock time limit of your scheduling system. Does the job
repeatedly fail at the same time? Do shorter jobs finish successfully?
Just my 0.02 Euros (-8
Cheers
-Andreas
want to.
Cheers!
-Andi
It's not OMPI's fault but VASP's, since the segfault happens in one of
its functions. Maybe you should have a look there.
HTH
-Andi
AFAIK, boost::mpi will thus buffer all vectors to be sent. This might
not be as efficient as just feeding it a raw pointer and the number of
elements.
Cheers!
-Andreas
[1]
http://www.boost.org/doc/libs/1_35_0/doc/html/mpi/tutorial.html#mpi.point_to_point
on't provide us with self-sufficient
code; small excerpts mixed with comments won't cut it in most cases.
Cheers
-Andreas
titutes
a valid/complete MPI program).
Cheers!
-Andreas
?
You could do so,
> king regards, oeter
your majesty Oeter ;-)
Cheers
-Andreas
creative and colorful
way you like.
Cheers!
-Andreas
ect MPI apps to avoid this optimization -- a
> proper fix is coming in the v1.3 series.
Yo, I've just tried it with the current SVN and couldn't reproduce the
deadlock. Nice!
Cheers
-Andreas
0x0040ca04 in MPI::Comm::Send ()
#4 0x00409700 in main ()
Anyone got a clue?
one-dimensional in
memory.
Cheers
-Andreas
0
in your case. Thus, the other nodes cannot produce the same output as
node 0.
I've attached my reworked version (including some initialization
code for clarity). If you want me again to debug a program of yours,
send a floppy along with a pizza Hawaii (cartwheel size) to:
Andreas Schäfe
it - the basic approach would be to
> parallel the divide and conquer part - which would
> result in ALOT of network messages...
As already said, please read Powers' paper mentioned above. I could imagine
that even though this results in _many_ messages, the algorithm's
optimal runtime complexity wi