Hi Josh --
Thanks for your interest in Chapel as an alternative to TBB, Rust, Julia,
and the like.
> Someone in the OpenVDB forum wanted to know if there's a recent
> comparison with Intel TBB. It seems there is not.
Offhand, I'm not aware of a recent Chapel-vs-TBB comparison (and wouldn't
trust anything older because our performance has been improving by leaps
and bounds over the past few years).
Here on the core development team, we tend to focus our efforts on work
that others could not easily accomplish themselves, like modifications to
the compiler and language. By comparison, benchmarking Chapel vs.
parallel programming model 'xyz' is relatively easy for anyone in the
open-source/academic programming communities to do without special
knowledge, so we tend to leave that to others.
Practically speaking, given that the space of {programming models x
benchmarks} that someone _might_ be interested in seeing results for is
vast, attempting to satisfy that desire would be a bottomless pit,
particularly given my team's finite resources. Better for us to focus on
making Chapel perform better than to try to flesh out this huge
cross-product for the community (besides which, if we published the
numbers, you'd likely take them with a healthy grain of salt anyway).
Internally, our focus is increasingly turning to distributed memory
execution, since that's Chapel's primary reason for being, so we tend to
focus more on comparisons with other distributed memory models like MPI
than with shared memory models like TBB (see the distributed-memory
sketch after the three efforts below). That said, there are some ongoing
shared memory comparison efforts that you might be interested in:
One comparison that we have been focused on in recent years is the
Computer Language Benchmarks Game (formerly "the language shootout"),
hosted at http://benchmarksgame.alioth.debian.org/. We like it because
it's a good example of a neutrally-managed system that does cross-model
comparisons where we'd only need to do the work for Chapel, not everyone
else. Unfortunately, it does not look like TBB has an entry on the
official website. (Nor do we, yet. We've been working on fixing up a few
final issues before submitting our codes, but the codes are available in
our repo if people want to do their own comparisons.)
A second case that we're focused on currently is the LCALS loop kernel
suite from Livermore. I'm not seeing a TBB version of this offhand
either. In practice, our comparisons have been against OpenMP (which is
the dominant shared memory parallel programming model in our community).
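For concreteness, here's a minimal sketch of my own (not an actual LCALS
kernel) showing the kind of loop these suites time; Chapel's 'forall'
plays roughly the role of an OpenMP 'parallel for', using all of a
node's cores by default:

    // a daxpy-style loop kernel; 'forall' runs the iterations in
    // parallel across the current node's cores
    config const n = 1000000;
    const a = 2.0;
    var x, y: [1..n] real;

    forall i in 1..n do
      y[i] += a * x[i];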
The Intel PRK suite (https://github.com/ParRes/Kernels) is a third set of
benchmarks (for shared and/or distributed memory) that we've recently
started looking at, though I'm not seeing a TBB entry there. (And
frankly, neither of these last two cases is set up to support automated
cross-language comparisons as well as the first is.)
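To illustrate the shared- vs. distributed-memory point from earlier,
here's a sketch (mine, using the standard Block distribution) of how the
same kind of kernel scales out across compute nodes; the loop itself
doesn't change, only the domain the arrays are declared over:

    use BlockDist;

    config const n = 1000000;
    const D = {1..n} dmapped Block(boundingBox={1..n});
    const a = 2.0;
    var x, y: [D] real;   // elements are spread across the locales

    forall i in D do      // each iteration runs where its data lives
      y[i] += a * x[i];

This is part of why MPI, rather than TBB, tends to be the more natural
point of comparison for us.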
Anyway, if you are aware of a language comparison system similar to the
Computer Language Benchmarks Game that includes Intel TBB and that you
think we should participate in, please let us know. Our resources are
fairly limited, so we might not get to it quickly, but I'm always
interested in hearing about such comparison sites, and ports like these
make for good summer intern projects.
Reorganizing your mail:
> Does the Chapel team have a justification for only benchmarking against
> itself...
We don't benchmark just against ourselves, but we do focus on the
programming models used most heavily by our community (MPI and OpenMP
being key examples). If, in this question, you're referring to the fact
that you don't see other programming models in our nightly performance
graphs:
http://chapel.sourceforge.net/perf/
...the historical reason is that committing others' benchmark codes to
our repository is a nontrivial legal hassle (not to mention a source of
SCM bloat), combined with the fact that our automated performance testing
only runs code that lives in our repo. We've recently been brainstorming
ways to work around these issues (say, by pulling benchmark suites down
and overlaying them dynamically), but haven't gotten that up and working
yet. If you're interested in contributing to that effort, let us know.
> Second, I just found out where to report issues, and it's nothing as
> nice as GitHub or JIRA.
We've recently launched a Chapel JIRA site, though we're definitely still
easing into it. Specifically, for the time being, we're using it
primarily for internal issue tracking purposes as the team gets familiar
with it and establishes best practices for its use. It is
publicly-readable, though, and the intention is to increasingly reflect
bugs reported to the (public) mailing lists as JIRA issues:
https://chapel.atlassian.net/projects/CHAPEL/issues/
Specifically, we have not yet created a portal for the community to open
their own JIRA issues and have been kicking around different approaches
for separating signal from noise.
Beefing up our support for JIRA as our primary issue tracker has been
called out as a priority for the current release cycle. You can see this
(and other current priorities) in the priorities deck from our version
1.12 release notes:
http://chapel.cray.com/releaseNotes/1.12/10-Priorities.pdf
http://chapel.cray.com/download.html#releaseNotes
> Does the Chapel team have a justification for ... using disparate SF
> lists riddled with spam?
We've been looking for a replacement for SF for our mailing lists, but
haven't found anything that's felt satisfying enough to justify the effort
of switching horses and migrating our archives. If you have a proposal,
please let us know -- maybe you're aware of a hosting site that we're not.
Note that there's no need to be subscribed to chapel-bugs to mail to it,
so hopefully the spam only affects us and not those trying to report bugs.
> As someone wanting something more mature than Julia, Rust, or Nim to
> port the sparse data structure behind much of 3D animation simulation,
Sounds interesting -- let us know if you end up pursuing it. I'll mention
that our built-in sparse domain/array implementation is functional but
needs a lot more performance-tuning work, so if your goal is to find
something that's performance-competitive with TBB today, I'd guess that
you wouldn't want to use Chapel's sparse domains/arrays at present.
Building your own sparse data structures from dense ones would likely
yield more competitive performance, but would lose a lot of the
productivity features that Chapel was designed for.
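To give a flavor of what I mean, here's a minimal sketch of the built-in
sparse domains/arrays (toy indices, obviously, rather than your VDB-style
structure):

    config const n = 10;
    const D = {1..n, 1..n};      // dense parent domain
    var S: sparse subdomain(D);  // sparse subset of D, initially empty
    S += (1, 1);                 // add stored indices explicitly
    S += (n, n);
    var A: [S] real;             // stores values only for indices in S
    A[1, 1] = 3.14;
    writeln(A[2, 2]);            // unstored indices read back as 0.0

The intended productivity win is that S and A support the same
global-view operations as dense domains and arrays; the current gap is
in how efficiently the implementation stores and iterates over them.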
Back on your first question, I'll note that I'm not aware offhand of a
good suite of shared memory sparse array/matrix benchmarks that would be
appropriate for Chapel vs. TBB or OpenMP comparisons, so if you are, that
would be of particular interest to me. I think part of the reason our
sparse implementation hasn't received more optimization attention is that
we haven't written such benchmarks in Chapel ourselves.
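As a strawman, even something as simple as a sparse matrix-vector
multiply kernel would be a start. A naive sketch (I've kept the loop over
nonzeroes serial since concurrent updates to y[i] would race under a
'forall'; a tuned version would parallelize across rows):

    config const n = 100;
    const D = {1..n, 1..n};
    var S: sparse subdomain(D);
    S += (1, 2);                 // a couple of illustrative nonzeroes
    S += (2, 1);
    var A: [S] real;
    A[1, 2] = 1.0;
    A[2, 1] = 2.0;
    var x: [1..n] real = 1.0;
    var y: [1..n] real;

    for (i, j) in S do           // serial over the stored indices
      y[i] += A[i, j] * x[j];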
Improving sparse performance and capabilities is also a priority over the
next few release cycles (under the umbrella "array/domain improvements" in
the deck I pointed you to above), so if you're interested in seeing Chapel
step up its sparse game here, we'd be curious what real-world use cases
like yours look like in Chapel, where the pain points are, and where our
performance is lagging the competition.
-Brad