Yeah, I forgot that pure ANSI C doesn't really have namespaces, other than
by fully qualifying module and variable names. Bummer.
Makes writing large, maintainable middleware more difficult.
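The usual workaround, and the one Open MPI itself follows with its ompi_ and
opal_ prefixes, is to fold the "namespace" into every public identifier. A
minimal sketch below, with a made-up "warp_pool" module name purely for
illustration:

/* Prefix-based "namespacing" in plain C: the module name becomes part of
 * every public identifier, so nothing collides with other modules linked
 * into the same program.  The "warp_pool" name is hypothetical. */
#include <stddef.h>

typedef struct warp_pool {
    size_t capacity;
    size_t used;
} warp_pool_t;

/* Public functions carry the same prefix as the type. */
int warp_pool_init(warp_pool_t *pool, size_t capacity);

/* File-scope helpers need no prefix; 'static' already hides them. */
static size_t warp_round_up(size_t n, size_t align)
{
    return (n + align - 1) / align * align;
}

int warp_pool_init(warp_pool_t *pool, size_t capacity)
{
    if (pool == NULL) {
        return -1;
    }
    pool->capacity = warp_round_up(capacity, 64);
    pool->used = 0;
    return 0;
}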
-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Kenneth A.
Lloyd
Sent
Doesn't namespacing obviate the need for this convoluted identifier scheme?
See, for example, UML package import and include behaviors.
-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Dave Goodell
(dgoodell)
Sent: Wednesday, July 30, 2014 3:35 PM
To: Open
What about providing garbage collection for both POSIX and MPI threads? This
problem hints at several underlying layers of "programming faults".
-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Wednesday, July 16, 2014 8:59 AM
To: Open
Would you consider a user-defined process language library outside of
OpenMPI? Process functors could be defined by composition in this external
area, and maintenance of the language would simply be the user's
responsibility.
-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On
Vasily,
The problem you've identified with differing kernel versions is exacerbated
by also computing across hybrid, heterogeneous hardware architectures (e.g.
SMP & NUMA, different streaming processor architectures, or different shared
memory architectures).
======
Ken
+1 - and my family has been notified for the holidays.
-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Jeff Squyres
(jsquyres)
Sent: Friday, October 18, 2013 1:42 PM
To: Open MPI Developers List
Subject: [OMPI devel] Open MPI shirts and more
OMPI Developer
+1
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Rolf vandeVaart
Sent: Tuesday, October 08, 2013 3:05 PM
To: de...@open-mpi.org
Subject: [OMPI devel] RFC: Add GPU Direct RDMA support to openib btl
WHAT: Add GPU Direct RDMA support to openib btl
WHY: Better latency for small
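For context, a rough sketch of what this buys user code: with a CUDA-aware
build (which GPU Direct RDMA in the openib btl is meant to accelerate for
small messages), device pointers can be handed straight to MPI calls. This
assumes Open MPI was configured with CUDA support; error checking is omitted:

/* Minimal sketch: passing a GPU (device) buffer directly to MPI_Send /
 * MPI_Recv without staging through host memory in the application. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, n = 1024;
    double *dbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaMalloc((void **)&dbuf, n * sizeof(double));
    cudaMemset(dbuf, 0, n * sizeof(double));

    if (rank == 0) {
        /* Device pointer handed directly to MPI. */
        MPI_Send(dbuf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dbuf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d doubles into a device buffer\n", n);
    }

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}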
Thanks for making this patch available.
Ken Lloyd
-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Behalf Of George Bosilca
Sent: Monday, June 24, 2013 1:39 PM
To: Open MPI Developers
Subject: [OMPI devel] RFC MPI 2.2 Dist_graph addition
WHAT:
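For readers who haven't used the MPI 2.2 distributed graph topology interface
this RFC covers, a minimal sketch (a simple ring, with each rank declaring
its own neighbors via MPI_Dist_graph_create_adjacent):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One incoming neighbor (left) and one outgoing neighbor (right). */
    int src = (rank - 1 + size) % size;
    int dst = (rank + 1) % size;

    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   1, &src, MPI_UNWEIGHTED,
                                   1, &dst, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &ring);

    printf("rank %d: receives from %d, sends to %d\n", rank, src, dst);

    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}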
Is that because end-to-end checksums don't match?
-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Behalf Of Nathan Hjelm
Sent: Wednesday, February 27, 2013 10:54 AM
To: Open MPI Developers
Subject: [OMPI devel] RFC: Remove pml/csum
What: svn rm
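For context, the point of the csum PML was end-to-end payload verification
inside the library. A toy application-level illustration of the same concept
(not the pml/csum code itself; the hash below is invented for the example):

#include <mpi.h>
#include <stdint.h>
#include <stdio.h>

/* Trivial rolling hash, stand-in for a real checksum. */
static uint32_t simple_sum(const unsigned char *buf, int len)
{
    uint32_t sum = 0;
    for (int i = 0; i < len; i++) {
        sum = sum * 31 + buf[i];
    }
    return sum;
}

int main(int argc, char **argv)
{
    int rank, n = 4096;
    unsigned char data[4096];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < n; i++) data[i] = (unsigned char)i;
        uint32_t sum = simple_sum(data, n);
        MPI_Send(data, n, MPI_UNSIGNED_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&sum, 1, MPI_UINT32_T, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        uint32_t expected;
        MPI_Recv(data, n, MPI_UNSIGNED_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Recv(&expected, 1, MPI_UINT32_T, 0, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (simple_sum(data, n) != expected) {
            fprintf(stderr, "checksum mismatch: data corrupted in flight\n");
        }
    }

    MPI_Finalize();
    return 0;
}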
Paul,
Have you tried llvm with clang?
Ken
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Behalf Of Paul Hargrove
Sent: Thursday, January 17, 2013 4:58 PM
To: Open MPI Developers
Subject: Re: [OMPI devel] 1.6.4rc1 has been posted
On Thu, Jan 17, 2013 at 2:26 PM,
Ken,
I have no problem compiling OMPI trunk with llvm-gcc-4.2 (OS X 10.8).
Pavel (Pasha) Shamis
---
Computer Science Research Group
Computer Science and Math Division
Oak Ridge National Laboratory
On Jan 7, 2013, at 3:49 PM, Kenneth A. Lloyd <kenneth.ll...@wattsys.com>
wrote:
> H
Has anyone experienced any problems compiling OpenMPI 1.7 with the llvm
compiler and C front ends?
-- Ken
Ralph,
Indeed, some of us are using clang (and other llvm front ends) for JIT on
our hetero HPC clusters for amorphous problem spaces. Obviously, I don't
work for a National Lab. But I do mod/sim/vis for quantum, nano, and
meso-physics.
Just wanted you to be aware.
Ken
From:
I should note that we only virtualize the private cloud / management nodes
over our HPC. The HPC is not virtualized as such.
Ken
-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Behalf Of Kenneth A. Lloyd
Sent: Sunday, September 02, 2012 7:14 AM
This is a tricky issue, isn't it? With the differences between AMD & Intel,
the differences in how the base operating systems "touch" heaps (between
Linux & Windows), and the various virtual machine schemes, we have opted for
an "outside the main code path" solution to get deterministic results. But
that is as
I haven't used SGE or Oracle Grid Engine in ages, but apparently it is now
called Open Grid Engine
http://gridscheduler.sourceforge.net/
-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Behalf Of Rayson Ho
Sent: Friday, July 27, 2012 8:25 AM
To:
Also, which version of MVAPICH2 did you use?
I've been poring over Rolf's OpenMPI CUDA RDMA 3 (using CUDA 4.1 r2) vis-à-vis
MVAPICH-GPU on a small 3-node cluster. These are wickedly interesting.
Ken
-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Oliver,
Thank you for this summary insight. This substantially affects the
structural design of software implementations, which points to a new
analysis "opportunity" in our software.
Ken Lloyd
-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
Jeff,
I'm a researcher / developer, but a very small player in the OpenMPI
landscape. I'd say go ahead with the commit. Some stuff is just too old to
maintain.
Ken Lloyd
Watt Systems Technologies Inc.
WARP - Watt Advanced Research Platforms
-Original Message-
From: