Rebecca
I’ll try to write up some short notes on the resource partitioner module and
get them to you before the end of the week
JB
From: hpx-users-boun...@stellar.cct.lsu.edu
On Behalf Of Rebecca Stobaugh
Sent: 01 February 2020 22:58
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users]
Kor,
please try this and let me know how you get on. Note that annotations for all
kinds of tasks dispatched using the hpx::launch::sync policy are broken; async is
fine
https://github.com/STEllAR-GROUP/hpx/pull/4311
JB
-Original Message-
From: hpx-users-boun...@stellar.cct.lsu.edu
On
I didn't see this message but was thinking about gsoc too
https://github.com/STEllAR-GROUP/hpx/issues/4274
I volunteer as tribute
JB
From: hpx-users-boun...@stellar.cct.lsu.edu
on behalf of Patrick Diehl
Sent: 10 December 2019 23:17:27
To:
Martin
Sorry for the late reply, I forgot to check the users list as most of the
discussions take place on IRC (which I also forget to check these days...).
Anyway
When using hpx in distributed mode, you have two options - compile hpx with
NETWORKING on and use either the MPI parcelport
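A sketch of the corresponding configure step (option names as used by recent HPX releases — check the CMake documentation for your version; the source path is a placeholder):

```shell
# Build HPX for distributed runs: networking on, MPI parcelport enabled.
cmake -DHPX_WITH_NETWORKING=ON \
      -DHPX_WITH_PARCELPORT_MPI=ON \
      /path/to/hpx/source
```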
>I use the FetchContent module to integrate HPX into my project, which means my
>project is the main project. I think some paths in APEX's CMakeLists.hpx need
>to be adjusted, which I have done locally. Once I have the whole build working
>I can provide a patch or pull request with my edits.
I forgot to include the main text of the call - just FYI
Call for Papers
PAW-ATM 2019:
Parallel Applications Workshop,
Alternatives To MPI+X
Held in conjunction with SC 19, Denver, CO
Dear List
Please be aware that the "Parallel Applications Workshop, Alternatives To MPI"
is open for submissions. It is a workshop held in conjunction with SC19 in
Denver Nov 2019
https://sourceryinstitute.github.io/PAW/
Yours
JB
--
Dr. John Biddiscombe, email: biddisco
: Making a Case for More Schedules.
14th International Workshop on OpenMP, IWOMP 2018, Barcelona, Spain, September
26–28, 2018, Proceedings
-Original Message-
From: "Biddiscombe, John A." <biddi...@cscs.ch>
To: "hpx-users@stellar.cct.lsu.edu"
Ray Kim
Sounds like it could be an interesting project for next year’s GSoC. To write a
decent proposal, you would need to spend a bit of time looking at the existing
schedulers and how they interact with task creation and destruction, context
switching and stack allocation, because unfortunately,
One project that came up in conversation today is this ...
https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2018-Project-Ideas#conflict-range-based-locks
conflict/range based locks. The project would not require a great deal of
modification of hpx internals and could be developed initially as a
Thomas et al
>
Something I have in mind, which would be a good fit with clear direction and
specification would be this here:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0796r1.pdf
<
I like this doc. I heard Michael and others talk about this work, but I had not
read the proposal
>I think I prefer the mailing list for communication.
Just a heads up. I, for one, only check the mailing list every now and again and
usually don't see messages until a few days after they're posted.
However, I leave an IRC window open on my office desktop 24/7, so I usually keep
up to date fairly well.
Regards Hartmut
---
http://stellar.cct.lsu.edu
https://github.com/STEllAR-GROUP/hpx
From:
hpx-users-boun...@stellar.cct.lsu.edu
On Behalf Of Biddiscombe, John A.
Sent: Tu
Ahmed
Good to hear from you again.
>I was a participant in GSoC'18 in HPX but I failed the program.
I still feel bad about that.
>However, I liked the project and the contribution to HPX. I wanted to continue
>contributing to HPX so I want someone to guide me.
Nice.
>What part should I
I added the HPX serialization paper to http://stellar-group.org/publications/
but I do not know how to edit http://stellar.cct.lsu.edu/publications/
Could someone please either do it for me, or give me access?
Thanks
JB
From: hpx-users-boun...@stellar.cct.lsu.edu
Alanas
We haven’t had anyone else interested in the all-to-all communications project
yet.
The description on the web page dates back to last year - but we’ve had some
new ideas since then (or at least I have had), not so sure about the other
potential mentors.
We added a libfabric parcelport
makes good progress on the first tasks.
JB
From: Marcin Copik [mailto:mco...@gmail.com]
Sent: 15 February 2018 18:02
To: Ashish Jha <amiableashis...@gmail.com>; hpx-users@stellar.cct.lsu.edu
Cc: Patrick Diehl <patrickdie...@gmail.com>; Biddiscombe, John A.
<biddi...@cscs.ch>
just as a "by the way" - some time back, someone proposed a boost histogram
class that I played with (no idea if it was accepted into boost), but I added
it to the libfabric parcelport on my branch so that I could dump out histograms
of the parcel sizes that were passing through the
I noticed in the IRC chat that Steven had the error
"RuntimeError: partitioner::add_resource: Creation of 2 threads requested by
the resource partitioner, but only 1 provided on the command-line."
In my experience, this always happens when you try to run an application
compiled with
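As a concrete illustration of the fix (the flag is HPX's standard command-line option; the application name is a placeholder), the error above goes away once the command line provides at least as many OS threads as the resource partitioner requests:

```shell
# The partitioner asked for 2 threads but the runtime only got 1;
# request enough OS worker threads explicitly:
./my_hpx_app --hpx:threads=2
# or let HPX use all available cores:
./my_hpx_app --hpx:threads=all
```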
>
By all means, no! Quite the opposite. I noticed that for just a single call of
async my queue backend received multiple insert operations, especially when
used on the default pool.
From that I concluded that the thread_data* pushed to the Pending_Queueing
queue do not all originate from
Kilian
>
Is the base class thread_queue safe to use with all worker threads accessing
the same object?
<
Yes, but be aware that when you run your code on a KNL machine (for example),
you may have 272 threads all trying to take work from the same queue. One
of the reasons we maintain
Chris
First question: which version of hpx are you using?
The verbs stuff was working around the January time frame, and then we started
adding libfabric support and in the process broke a few things - they might not
have been on the master branch at the time, but there has been a lot of messing
The suggestion on that bug is
set(CUDA_LIBRARIES PUBLIC ${CUDA_LIBRARIES})
On 15.03.17, 23:48, "hpx-users-boun...@stellar.cct.lsu.edu on behalf of Hartmut
Kaiser" wrote:
> On 2017 M03 15, Wed 17:07:54
Just fyi
https://gitlab.kitware.com/cmake/cmake/issues/16602
setting CMP0023 is what I do locally until CMake's FindCUDA is fixed
JB
On 15.03.17, 23:25, "hpx-users-boun...@stellar.cct.lsu.edu on behalf of
Alexander Neundorf"
>
Sad that intel TBB isn't open source anymore.
<
What do you know that I have missed?
https://www.threadingbuildingblocks.org/Licensing
and
https://github.com/01org/tbb
the catch from our point of view is that we only release code under the boost
license, so we can’t include code that
http://pasteboard.co/GcUDlW4fi.png
In this image, we create 6 GPU tasks that insert commands onto a cuda stream.
They return a future that will be set ready when the GPU stream event comes in
(some time later). When the GPU tasks start, the coloured bars look ok, but the
tasks themselves take
I’m forwarding this email to the list just to keep others informed of the
discussion
From: Biddiscombe, John A.
Sent: 01 March 2017 09:32
To: 'SATYADEV CHANDAK'
Subject: RE: Help for GSoC Project on Concurrent data structures support
Satyadev
How familiar are you with concurrent containers
I am seeing this error on a Cray KNL machine compiled with the intel compiler in
release mode (but not in debug mode). (The problem does not change/go away when
I modify the avx flags passed to the compiler, which can apparently change
alignment.)
A parcel is serialized with size 0x8d bytes and
PS. You may find the settings on this page useful; they are what I used after
the CSCS cray was upgraded a few weeks ago
https://github.com/biddisco/biddisco.github.io/wiki/daint
Don’t look at the settings below the “older stuff” heading as they may be
obsolete.
JB
From:
On cray machines you need to add a link flag -dynamic
From:
>
on behalf of Justs Zarins
Reply-To:
Kilian
Just as a follow-up on this: I found my code bombing out during startup
with the “lock held whilst suspending” message too, and I reverted a recent
commit on hpx master (the merge of the cv race fix) and the lock error went away.
You might have been unlucky to get a checkout of hpx during a
cmake_minimum_required(VERSION 2.8.12 FATAL_ERROR)
is what we specify in the top-level CMakeLists; if the user is using
2.8.11 it should flag an error.
The PUBLIC interfaces were added in cmake 3.0, so we should bump our
minimum version to 3.
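A sketch of the bump being suggested (CMake syntax only; the project and target names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example CXX)

add_library(mylib src.cpp)
# The keyworded signature (PUBLIC/PRIVATE/INTERFACE) is the form
# governed by policy CMP0023.
target_link_libraries(mylib PUBLIC some_dependency)
```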
JB
On 26/04/16 22:10,