Re: [hpx-users] Help Contributing to HPX Documentation

2020-02-04 Thread Biddiscombe, John A.
Rebecca

I’ll try to write down some short info for the resource partitioner module and 
get it to you before the end of the week

JB

From: hpx-users-boun...@stellar.cct.lsu.edu 
 On Behalf Of Rebecca Stobaugh
Sent: 01 February 2020 22:58
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Help Contributing to HPX Documentation

Hello,

I am the technical writer for HPX's documentation, and I am currently working 
on developing documentation for HPX modules, as many of these modules have 
little to no text explaining their function. If you have any suggestions for 
what should be included for a particular module or would like to generate a 
short summary yourself, please contact me. I would also appreciate any 
references or further reading for a particular module you could send my way. I 
am not from a computer science background, so I'm relying a lot on outside 
research to generate content. The modules most in need of documentation are 
"affinity," "resource_partitioner" and "static_reinit"; however, the majority 
of the modules could use further work. You can find the full list of modules 
here: 
https://stellar-group.github.io/hpx/docs/sphinx/latest/html/libs/all_modules.html

Have a nice day,
Rebecca Stobaugh
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] annotated_function icw dataflow does not seem to work

2020-01-17 Thread Biddiscombe, John A.
Kor,

Please try this and let me know how you get on. Note that annotations for all
kinds of tasks dispatched using the hpx::launch::sync policy are broken; async is
fine.

https://github.com/STEllAR-GROUP/hpx/pull/4311

JB


-Original Message-
From: hpx-users-boun...@stellar.cct.lsu.edu 
 On Behalf Of Jong, K. de (Kor)
Sent: 16 January 2020 17:33
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] annotated_function icw dataflow does not seem to work

Hi,

I am using hpx::util::annotated_function around the lambda passed to 
hpx::dataflow. I think the name used to show up in my Vampir plots. Now 
(version 1.4.0) I see lots of hpx::parallel::execution::parallel_executor::post 
again. Have things related to the annotation of actions changed a bit maybe, 
just before releasing 1.4.0? Does anyone know of a work-around to get the names 
to show up again in Vampir?

BTW, names used when annotating lambdas passed to hpx::async do show up in
Vampir.
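
For reference, the pattern is roughly this (a simplified sketch; the headers and
task names are just illustrative):

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>
#include <hpx/util/annotated_function.hpp>

int main()
{
    // annotation on a task launched with hpx::async: this name does show up
    hpx::future<int> a = hpx::async(
        hpx::util::annotated_function([]() { return 40; }, "produce"));

    // annotation on the continuation passed to hpx::dataflow: with 1.4.0 this
    // name no longer shows up in the Vampir trace
    hpx::future<int> b = hpx::dataflow(
        hpx::util::annotated_function(
            [](hpx::future<int> f) { return f.get() + 2; }, "consume"),
        std::move(a));

    return b.get() == 42 ? 0 : 1;
}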

Thanks,
Kor


___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Fwd: [GSoC Mentors] Announcing GSoC 2020 - Org apps open January 14-Feb 5

2019-12-14 Thread Biddiscombe, John A.
I didn't see this message but was thinking about gsoc too

https://github.com/STEllAR-GROUP/hpx/issues/4274

I volunteer as tribute


JB


From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Patrick Diehl 

Sent: 10 December 2019 23:17:27
To: stellar-intern...@stellar.cct.lsu.edu; hpx-de...@stellar.cct.lsu.edu; 
hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Fwd: [GSoC Mentors] Announcing GSoC 2020 - Org apps open 
January 14-Feb 5

Hi,

we should apply for GSoC again. Please let us know if you are interested
in becoming a mentor or have a project idea.

Best,

Patrick


 Forwarded Message 
Subject: [GSoC Mentors] Announcing GSoC 2020 - Org apps open January
14-Feb 5
Date: Mon, 9 Dec 2019 18:30:32 -0800 (PST)
From: 'sttaylor' via Google Summer of Code Mentors List
Reply-To: sttaylor
To: Google Summer of Code Mentors List




Hello Mentors and Org Admins,

We are pleased to announce Google Summer of Code 2020,
the 16th consecutive year of the program.

The program announcement, timeline, marketing materials (slide deck, flyers),
FAQs, and short videos with tips for mentors and students are all available.
You'll also notice the 2019 program has now been archived.

Organizations-- If you would like to apply for the 2020 program please
start thinking about the projects you would like students to work on and
also reach out to your community members to ask if they would like to be
mentors for the program. Having a thorough and well thought out list of
Project Ideas is the most important part of your application.

Encourage other open source orgs to apply-- Please also consider
encouraging other open source projects to apply and be a first time GSoC
org - we’re hoping to accept more orgs in 2020 than ever before with a
good number of them being first time orgs. When orgs apply they can put
your org down as a reference.

Please note that Org applications will open on January 14th.

This year we have shortened the community bonding period to 3 weeks per
feedback in the 2019 surveys from students and mentors.

We are looking forward to another exciting year of GSoC!

For any questions about the programs please email us at
gsoc-supp...@google.com

Best,

Stephanie Taylor

GSoC Program Lead



___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Request for Experience

2019-12-14 Thread Biddiscombe, John A.
Martin


Sorry for the late reply, I forget to check the users list as most of the 
discussions take place on IRC (which I also forget to check these days...). 
Anyway


When using hpx in distributed mode, you have two options - compile hpx with
NETWORKING on and use either the MPI parcelport (which we assume you have
compiled and installed on your system as usual), or use the new, still slightly
experimental, libfabric parcelport. There is a libfabric provider for PSM2 which
is the one used on omnipath OPA networks.

see here https://github.com/ofiwg/libfabric/wiki/Provider-Feature-Matrix-master 
for capabilities.

HPX currently runs on the sockets and GNI providers and uses endpoint type
FI_EP_RDM and makes use of FI_SEND, FI_RECV, FI_RMA, and the secondary capability
FI_SOURCE, plus a few others I can't remember off the top of my head.

Looking at the chart, PSM2 provider supports all the things we need, so it 
ought to be possible to run the libfabric network layer on an omnipath machine.
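
For reference, the capability set described above amounts to roughly this
fi_getinfo() query (an untested sketch; the provider name and the API version
constant are just illustrative):

#include <rdma/fabric.h>
#include <cstdio>
#include <cstring>

int main()
{
    fi_info* hints = fi_allocinfo();
    hints->ep_attr->type = FI_EP_RDM;               // reliable datagram endpoint
    hints->caps = FI_MSG | FI_RMA | FI_SOURCE;      // FI_MSG covers FI_SEND/FI_RECV
    hints->fabric_attr->prov_name = strdup("psm2"); // ask for the PSM2 provider

    fi_info* info = nullptr;
    int ret = fi_getinfo(FI_VERSION(1, 5), nullptr, nullptr, 0, hints, &info);
    if (ret == 0 && info)
        std::printf("psm2 provider matches the requested capabilities\n");
    else
        std::printf("fi_getinfo failed: %d\n", ret);

    if (info) fi_freeinfo(info);
    fi_freeinfo(hints);
    return ret;
}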


However - currently the master branch of HPX doesn't support this and the stuff
you'd need is in another branch that needs a bit of work to merge in. I have it
on my todo list to get the network running on summit (infiniband verbs - no
FI_SOURCE = problem), but I'm not sure when I'll be able to start work on it.


You should probably just use the MPI parcelport in HPX for now - but if you
were interested in getting the libfabric stuff running for improved distributed
performance, it ought to be straightforward to get working since all the stuff
we need appears to be supported. It would probably need a bit of tweaking and
experimenting to get running, though - is there any way I can get access to
your machine to log in and try a build/test?


If you're more interested in simply using mpi in your existing code and not
using hpx as a distributed tasking layer, then just set
HPX_WITH_NETWORKING=OFF and use hpx for tasks on a node and your existing
mpi between nodes.


HTH


JB




From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Ohlerich, Martin 

Sent: 10 December 2019 10:58:27
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Request for Experience


Dear Colleagues,


my name is Martin Ohlerich. I'm working at the Leibniz Supercomputing Centre
near Munich (LRZ), and am currently testing the capabilities of HPX. On our Linux
cluster with an InfiniBand network, the tests were so far successful. On
SuperMUC-NG we have an Intel OPA network, which seems to have some peculiarities
(which we also observed when trying to employ GPI (GASPI)). Is there any
experience with this kind of network for HPX in the community?

I would welcome any hint on where to find information about the startup mechanism
and debugging possibilities. So far I have tried the easy approach of installing
HPX via Spack. On SNG that might not be the correct way to go.


Many thanks in advance! Also in the name of our users!

Best regards,

Martin Ohlerich
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Link error building HPX with support for APEX

2019-06-04 Thread Biddiscombe, John A.
>I use the FetchContent module to integrate HPX into my project, which means my 
>project is the main project. I think some paths in APEX' CMakeLists.hpx need 
>to be adjusted, which I have done locally. Once I have the whole build working 
>I can provide a patch or pull request with my edits.

Please do submit a PR (to the APEX repo though if it needs fixes) - I thought I 
had fixed all those ages ago, but I guess some new ones have crept in.

>>>
Second, my build of HPX with support for APEX almost succeeds, but not 
completely (repro-case at bottom of this message):
<<<

My local projects that build using HPX are also failing with similar
problems related to hpx_cache and also hpx/config.hpp include directory
problems.
Can you confirm whether you are using the master branch or a 1.3 release - the
cache stuff was not present in the 1.3 release.

Thanks

JB

/usr/bin/ld: cannot find -lhpx_cache
collect2: error: ld returned 1 exit status
_deps/hpx-build/src/CMakeFiles/hpx.dir/build.make:2769: recipe for target 
'lib/libhpx.so.1.3.0' failed
make[2]: *** [lib/libhpx.so.1.3.0] Error 1
CMakeFiles/Makefile2:1112: recipe for target
'_deps/hpx-build/src/CMakeFiles/hpx.dir/all' failed

Building with HPX_WITH_APEX=OFF succeeds, building with HPX_WITH_APEX=ON fails
for lack of hpx_cache. I cannot find any reference to the hpx_cache library in
the HPX and APEX sources. It is mentioned in three files in HPX's build
directory, though:
./CMakeFiles/Export/lib/cmake/HPX/HPXTargets.cmake
./lib/cmake/HPX/HPXTargets.cmake
./src/CMakeFiles/hpx.dir/link.txt

The CMake scripts contain this snippet:
set_target_properties(apex PROPERTIES
    INTERFACE_COMPILE_OPTIONS "-std=c++17"
    INTERFACE_LINK_LIBRARIES "hpx_cache"
)

Does anyone know how I can make my build succeed?

Thanks!

Kor



The simplest CMakeLists.txt with which I can recreate the issue is this one:

# dummy/CMakeLists.txt
cmake_minimum_required(VERSION 3.12)
project(dummy LANGUAGES CXX)

include(FetchContent)

set(HPX_WITH_EXAMPLES OFF CACHE BOOL "")
set(HPX_WITH_TESTS OFF CACHE BOOL "")
set(HPX_WITH_APEX ON CACHE BOOL "")

FetchContent_Declare(hpx
 GIT_REPOSITORY https://github.com/STEllAR-GROUP/hpx
 GIT_TAG 1.3.0
)

FetchContent_GetProperties(hpx)

if(NOT hpx_POPULATED)

  FetchContent_Populate(hpx)

  if(HPX_WITH_APEX)
    # I think something like this should be done in
    # APEX' CMakeLists.hpx
    include_directories(
      ${hpx_SOURCE_DIR}/libs/preprocessor/include
      ${hpx_SOURCE_DIR}/apex/src/apex
      ${hpx_SOURCE_DIR}/apex/src/contrib)
  endif()

  add_subdirectory(${hpx_SOURCE_DIR} ${hpx_BINARY_DIR})

endif()
# / dummy/CMakeLists.txt

The project can be built (until the above link error) like this:

mkdir dummy/build
cd dummy/build
cmake ..
make
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] PAW-ATM : Parallel Applications Workshop, Alternatives To MPI

2019-04-29 Thread Biddiscombe, John A.
I forgot to include the main text of the call - just FYI



Call for Papers

 PAW-ATM 2019:

Parallel Applications Workshop,
Alternatives To MPI+X

   Held in conjunction with SC 19, Denver, CO

   


Summary

Supercomputers are becoming increasingly complex due to the prevalence
of hierarchy and heterogeneity in emerging node and system
architectures.  As a result of these trends, users of conventional
programming models for scalable high-performance applications
increasingly find themselves writing applications using a mix of
distinct programming models - such as Fortran90, C, C++, MPI, OpenMP,
and CUDA - which are also often becoming more complex and
detail-oriented themselves. These trends negatively impact the costs
of developing, porting, and maintaining HPC applications.

Meanwhile, new programming models and languages are being developed
that strive to improve upon the status quo. This is accomplished by
unifying the expression of parallelism and locality across the system,
raising the level of abstraction, making use of modern language design
features, and/or leveraging the respective strengths of programmers,
compilers, runtimes, and operating systems.  These alternatives may
take the form of parallel programming languages (e.g., Chapel, Fortran
2018, Julia, UPC), frameworks for large-scale data processing and
analytics (e.g., Spark, Tensorflow, Dask), or libraries and embedded
DSLs that extend existing languages (e.g., Legion, COMPSs, SHMEM, HPX,
Charm++, UPC++, Coarray C++, Global Arrays).

The PAW-ATM workshop is designed to explore the expression of
applications in scalable parallel programming models that serve as an
alternative to the status quo.  It is designed to bring together
applications experts and proponents of high-level programming models
to present concrete and practical examples of using such alternative
models and to illustrate the benefits of high-level approaches to
scalable programming.


Scope and Aims

The PAW-ATM workshop is designed as a forum for exhibiting studies of
parallel applications developed using high-level parallel programming
models serving as alternatives to MPI+X-based programming. We
encourage the submission of papers and talks that describe practical
distributed-memory applications written using alternatives to MPI+X,
and include characterizations of scalability and performance,
expressiveness and programmability, as well as any downsides or areas
for improvement in such models. Our hope is to create a forum in which
architects, language designers, and users can present, learn about,
and discuss the state of the art in alternative scalable programming
models while also wrestling with how to increase their effectiveness
and adoption.  Beyond well-established HPC scientific simulations, we
also encourage submissions exploring artificial intelligence, big data
analytics, machine learning, and other emerging application areas.



Topics include, but are not limited to:

* Novel application development using high-level parallel programming
  languages and frameworks.

* Examples that demonstrate performance, compiler optimization, error
  checking, and reduced software complexity.

* Applications from artificial intelligence, data analytics,
  bioinformatics, and other novel areas.

* Performance evaluation of applications developed using alternatives
  to MPI+X and comparisons to standard programming models.

* Novel algorithms enabled by high-level parallel abstractions.

* Experience with the use of new compiler and runtime environments.

* Libraries using or supporting alternatives to MPI+X.

* Benefits of hardware abstraction and data locality on algorithm
  implementation.


Submissions

Submissions are solicited in two categories:

1) Full-length papers presenting novel research results:

  * Full-length papers will be published in the workshop proceedings.
Submitted papers must describe original work that has not appeared
in, nor is under consideration for, another conference or
journal. Papers shall be eight (8) pages minimum and not exceed
ten (10) including text, appendices, and figures.  Appendix pages
related to the reproducibility initiative dependencies, namely the
Artifact Description (AD) and Artifact Evaluation (AE), are not
included in the page count.

2) Extended abstracts summarizing preliminary/published results:

  * Extended abstracts will be evaluated separately and will not be
included in the published proceedings; they are intended to
propose timely communications of novel work that will be formally
submitted elsewhere at a later stage, and/or of already published
work that would be of interest to the PAW-ATM audience in terms of
topic and timeliness.  Extended abstracts shall not exceed four
(4) 

[hpx-users] PAW-ATM : Parallel Applications Workshop, Alternatives To MPI

2019-04-29 Thread Biddiscombe, John A.
Dear List

Please be aware that the "Parallel Applications Workshop, Alternatives To MPI" 
is open for submissions. It is a workshop held in conjunction with SC19 in 
Denver Nov 2019
https://sourceryinstitute.github.io/PAW/

Yours

JB


--
Dr. John Biddiscombe, email: biddisco @.at.@ cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre  | Tel:  +41 (91) 610.82.07
Via Trevano 131, 6900 Lugano, Switzerland   | Fax:  +41 (91) 610.82.82

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] More scheduling algorithms

2018-11-07 Thread Biddiscombe, John A.
Ray Kim

Thanks - I understand now - I should have realized from the abstract I read of 
one of the papers that it was for loop scheduling.

In that case - yes, certainly it would be an interesting project - it would fit 
in nicely with some cleanup/fixes that we are planning on the schedulers anyway 
and provide a good way of testing other strategies for task 
assignment/creation. I’ll read the paper you cited - the lead author is one of
the professors we work with at CSCS, so although I’ve never met her, I know of
her work.

So the answer to your original question is yes - this would make an excellent 
gsoc project for next year. If you need help to get started, just ask. Join IRC
if you plan on chatting ‘live’ with other devs. Usually about 3 of us are online
during EU office hours, and a couple during USA office hours.

JB



From: hpx-users-boun...@stellar.cct.lsu.edu 
 On Behalf Of ???
Sent: 07 November 2018 09:29
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] More scheduling algorithms




Hi John,

It seems to me that the scheduling algorithms you're thinking of are task
scheduling algorithms, am I right?

The scheduling algorithms I listed are chunk scheduling algorithms (or loop 
scheduling).

They try to determine how many chunks they should bundle in order to achieve 
maximum performance.

In the HPX context the job would be to add executors alongside guided, auto, and
dynamic (hpx/parallel/executors).

There was a recent experimental review paper on the subject [1]; I recommend
that one.

By the way, if there is a task scheduling algorithm you can suggest, I think I
would be more than happy to look into it.
Ray Kim

Sogang University



[1] Ciorba, Florina, Iwainsky, Christian, and Buder, Patrick (2018). OpenMP Loop
Scheduling Revisited: Making a Case for More Schedules. 14th International
Workshop on OpenMP, IWOMP 2018, Barcelona, Spain, September 26-28, 2018,
Proceedings.





-Original Message-
From: "Biddiscombe, John A."mailto:biddi...@cscs.ch>>
To: 
"hpx-users@stellar.cct.lsu.edu<mailto:hpx-users@stellar.cct.lsu.edu>"mailto:hpx-users@stellar.cct.lsu.edu>>;
Cc:
Sent: 2018-11-07 (수) 16:45:51
Subject: Re: [hpx-users] More scheduling algorithms


Ray Kim



Sounds like it could be an interesting project for next year’s gsoc. To write a
decent proposal, you should spend a bit of time looking at the existing schedulers and
how they interact with the task creation and destruction, context switching and 
stack allocation, because unfortunately, that’s where most of the overheads in 
our current scheduling lie.



I am not familiar with all of the scheduling algorithms that you listed below - 
I suspect that they make use of cost functions to determine which tasks should 
be executed next. Are there specific use cases where certain scheduling 
algorithms are more applicable than others? If there is a paper you can suggest 
I read that compares some of the trade offs, it’d be nice to have a look at it. 
(I quickly googled, but don’t have time to read all the stuff I found, so maybe 
you could suggest a good one).



JB



From: hpx-users-boun...@stellar.cct.lsu.edu On Behalf Of ???
Sent: 07 November 2018 04:44
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] More scheduling algorithms



Hi everyone,

I'm a student from Korea researching scheduling algorithms.

I noticed that HPX only has a limited number of scheduling algorithms.

What do you think if I propose to add more scheduling algorithms to HPX for 
GSoC 2019?

Notably the Factoring, Adaptive Factoring, Tapering, Trapezoid, and Quadratic
schedules.

Ray Kim





___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] More scheduling algorithms

2018-11-06 Thread Biddiscombe, John A.
Ray Kim

Sounds like it could be an interesting project for next year’s gsoc. To write a
decent proposal, you should spend a bit of time looking at the existing schedulers and
how they interact with the task creation and destruction, context switching and 
stack allocation, because unfortunately, that’s where most of the overheads in 
our current scheduling lie.

I am not familiar with all of the scheduling algorithms that you listed below - 
I suspect that they make use of cost functions to determine which tasks should 
be executed next. Are there specific use cases where certain scheduling 
algorithms are more applicable than others? If there is a paper you can suggest 
I read that compares some of the trade offs, it’d be nice to have a look at it. 
(I quickly googled, but don’t have time to read all the stuff I found, so maybe 
you could suggest a good one).

JB

From: hpx-users-boun...@stellar.cct.lsu.edu 
 On Behalf Of ???
Sent: 07 November 2018 04:44
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] More scheduling algorithms

Hi everyone,
I'm a student from Korea researching scheduling algorithms.
I noticed that HPX only has a limited number of scheduling algorithms.
What do you think if I propose to add more scheduling algorithms to HPX for 
GSoC 2019?
Notably the Factoring, Adaptive Factoring, Tapering, Trapezoid, and Quadratic
schedules.
Ray Kim


___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Contributing to HPX

2018-11-06 Thread Biddiscombe, John A.
One project that came up in conversation today is this ...
https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2018-Project-Ideas#conflict-range-based-locks
conflict/range based locks. The project would not require a great deal of 
modification of hpx internals and could be developed initially as a stand-alone 
test/application that uses HPX, and once something is working, it could be 
gradually migrated into hpx to form a new kind of synchronization mechanism.
If anyone is interested in this, either now or as a potential gsoc project for
next year, and wants to get started on it, then please feel free to read the
description and get in touch with me. I would happily have a video call to
explain a bit more what would be the goals of the project
JB

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Contributing to HPX

2018-10-31 Thread Biddiscombe, John A.
Thomas et al

>
Something I have in mind, which would be a good fit with clear direction and 
specification would be this here:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0796r1.pdf
<

I like this doc. I heard Michael and others talk about this work, but I had not 
read the proposal until now, thanks for bringing it up. The resource 
partitioner should transition to this quite nicely.
One area where we really need to work is on allocators for the resources. I had 
planned on writing up a doc about this, but now I think I should contact the 
authors of this and exchange some ideas with them. There’s a lot missing from 
p0796 ...

JB
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Contributing to HPX

2018-10-28 Thread Biddiscombe, John A.
>I think I prefer the mailing list for communication. 

Just a heads up. I for one, only check the mailing list every now and again and 
usually don't see messages until a few days after they're posted. 
However, I leave and IRC window open on my office desktop 24/7 so usually keep 
up to date with questions and answers.

I'd recommend IRC over email. Unless of course the spam problems we've had 
recently are still ongoing, and perhaps it's time to consider 
slack/discourse/other - not sure how everyone feels about that though

JB
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Contributing to HPX

2018-10-24 Thread Biddiscombe, John A.
Ahmed

I'm a little bit frightened by the idea of you working on hwloc+hpx - the 
reason is the same as the one that caused problems in gsoc.

* we don't have a well defined plan on what we want to do with hwloc yet. I 
would like to clean it up and make it better, but only because it is missing a 
couple of functions that I would like - but I don't really know hwloc that well 
either. (same as I didn't really know fflib)

If you are working on a problem and the tools you have don't quite solve the
problem, then you can tweak the tools to work - for this reason I am looking at
hwloc and asking myself - how do I find out which socket on the node is
'closest' to the gpu? Should I create a thread pool for GPU management on socket
0, or socket 1 (or 2, 3, 4, ...)? I will be working on a machine with 6 gpus and
2 processors. It is probably safe to assume that gpus 0,1,2 are closest to
socket 0, and 3,4,5 to socket 1 - but maybe it's 0,2,4 and 1,3,5 - I'd like to
be able to query hwloc and get this info and build it into our resource
partitioner in such a way that the programmer can just say "which gpus should I
use from this socket" or vice versa.
So I know what I want - but I don't really have a plan yet, and usually one
knocks up a bit of code, gets something that half works and is good enough,
then tries to make it fit into the existing framework and realizes that
something else needs to be tweaked to make it fit nicely with the rest of HPX.
This requires a deal of understanding of the hpx internals and how it all fits 
together.

So in summary, it's better for you to find a project that already interests you
(part of your coursework?) and use HPX for that project - this gives you time 
to learn how hpx works and use it a bit - then extend it gradually as your 
knowledge grows. hwloc/topology is quite a low level part of hpx and does 
require a bit of help. If you were here in the office next door to me, it'd be 
no problem - cos you could get help any time, but working on your own is going 
to be tough.

I'm not saying don't do it (I did suggest it after all), but it will be hard to
get going.

I will try to think of some other less low level things we could use help with. 
(Ideas Thomas?)

JB


From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Ahmed Ali 
[ahmed.al...@eng-st.cu.edu.eg]
Sent: 24 October 2018 14:12
To: hartmut.kai...@gmail.com
Cc: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] Contributing to HPX

John, Hartmut,

Well, I can go with that hwloc part. Where should I start? I think I should
start by reading about hwloc from the open-mpi documentation. Is there any
documentation about the HPX interface with hwloc?

Best,
Ahmed Samir

On Tue, Oct 23, 2018 at 1:46 PM Hartmut Kaiser <hartmut.kai...@gmail.com> wrote:

John, Ahmed,

The hwloc integration sounds like a good idea to me. But anything else would be
fine as well.

Regards Hartmut
---
http://stellar.cct.lsu.edu
https://github.com/STEllAR-GROUP/hpx

From: hpx-users-boun...@stellar.cct.lsu.edu On Behalf Of Biddiscombe, John A.
Sent: Tuesday, October 23, 2018 3:43 AM
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] Contributing to HPX

Ahmed

Good to hear from you again.

>I was a participant in GSoC'18 in HPX but I've failed the program.

I still feel bad about that.

>However, I liked the project and the contribution to HPX. I wanted to continue 
>contributing to HPX so I want someone to guide me.
Nice.

>What part should I work on now? and where should I start?
Well, there is the obvious option of continuing the parcelport work, but I 
suspect you want to do something else since we didn’t help you enough first 
time around. I’d certainly like to carry on with that and get it up and 
running. It’s on my list, but I have a full plate already unfortunately, so it 
has to wait if I’m doing it myself.

There should still be a list of ‘projects’ that were compiled for GSoC that you 
could look through and if something looks interesting, have a go at it.

If you have a project that you are already working on for your studies or 
hobbies - why not try involving that somehow. Experience tells me that if 
someone has a problem to solve and they use a library like HPX to solve it, 
then they get much better results than if they just make up a project and try 
to implement it. The real advantage to this is that you can USE hpx to work on 
a project without having to understand all of hpx under the hood - then when 
you find that a feature you need in hpx doesn’t exist, then you atsrt poking 
around to find a way of implementing it or adding support.

One thing that is catching my eye at the moment is our interface between hwloc 
(https://www.o

Re: [hpx-users] Contributing to HPX

2018-10-23 Thread Biddiscombe, John A.
Ahmed

Good to hear from you again.

>I was a participant in GSoC'18 in HPX but I've failed the program.

I still feel bad about that.

>However, I liked the project and the contribution to HPX. I wanted to continue 
>contributing to HPX so I want someone to guide me.
Nice.

>What part should I work on now? and where should I start?

Well, there is the obvious option of continuing the parcelport work, but I 
suspect you want to do something else since we didn’t help you enough first 
time around. I’d certainly like to carry on with that and get it up and 
running. It’s on my list, but I have a full plate already unfortunately, so it 
has to wait if I’m doing it myself.

There should still be a list of ‘projects’ that were compiled for GSoC that you 
could look through and if something looks interesting, have a go at it.

If you have a project that you are already working on for your studies or 
hobbies - why not try involving that somehow. Experience tells me that if 
someone has a problem to solve and they use a library like HPX to solve it, 
then they get much better results than if they just make up a project and try 
to implement it. The real advantage to this is that you can USE hpx to work on 
a project without having to understand all of hpx under the hood - then when 
you find that a feature you need in hpx doesn’t exist, then you start poking
around to find a way of implementing it or adding support.

One thing that is catching my eye at the moment is our interface between hwloc 
(https://www.open-mpi.org/projects/hwloc/) and hpx::topology classes. the code 
in there is not very well maintained and could do with a thorough cleaning up. 
I’d like to be able to create a topology object from hwloc and query not just 
numa domains (which we have implemented in the resource partitioner), but also 
PCI bus connections to GPUs and network cards. If I have a 2 socket machine and 
only 1 gpu - which socket is closest to the gpu etc. That hasn’t been 
implemented and it would be very useful.
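
For illustration, the kind of query I mean looks something like this with raw
hwloc (an untested sketch against the hwloc 2.x API; the PCI bus id is made up):

#include <hwloc.h>
#include <cstdio>

int main()
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    // keep PCI/GPU objects in the topology (they are filtered out by default)
    hwloc_topology_set_io_types_filter(topo, HWLOC_TYPE_FILTER_KEEP_IMPORTANT);
    hwloc_topology_load(topo);

    // look up a GPU by its PCI bus id (made-up id, for illustration only)
    hwloc_obj_t gpu = hwloc_get_pcidev_by_busidstring(topo, "0000:02:00.0");
    if (gpu)
    {
        // walk up from the PCI device to the package (socket) that hosts it
        hwloc_obj_t obj = hwloc_get_non_io_ancestor_obj(topo, gpu);
        while (obj && obj->type != HWLOC_OBJ_PACKAGE)
            obj = obj->parent;
        if (obj)
            std::printf("GPU is attached to socket %u\n", obj->os_index);
    }
    hwloc_topology_destroy(topo);
    return 0;
}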

Hartmut, Thomas - do you have anything in mind that Ahmed could work on?

JB



___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX Citations

2018-04-09 Thread Biddiscombe, John A.
I added the HPX serialization paper to http://stellar-group.org/publications/ 
but I do not know how to edit http://stellar.cct.lsu.edu/publications/
Could someone please either do it for me, or give me access?
Thanks
JB

From: hpx-users-boun...@stellar.cct.lsu.edu 
 On Behalf Of Adrian Serio
Sent: 04 April 2018 21:31
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] HPX Citations


HPX Team,

In response to the suggestion that we should aggregate publications which 
mention HPX, I have created new pages on the stellar-group.org and 
stellar.cct.lsu.edu web sites titled "Citations". Whenever you come across a 
new article, publication or social media post which mentions the STE||AR Group 
or our work please let us know so we can upload a citation.

Web Pages:

  *   http://stellar-group.org/citations/
  *   http://stellar.cct.lsu.edu/citations/

Thanks for your help!

--

Adrian Serio

Scientific Program Coordinator

2118 Digital Media Center

225.578.8506
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSoC 2018

2018-03-17 Thread Biddiscombe, John A.
Alanas

We haven’t had anyone else interested in the all-to-all communications project 
yet.
The description on the web page dates back to last year - but we’ve had some 
new ideas since then (or at least I have had), not so sure about the other 
potential mentors.

We added a libfabric parcelport last year that gives very high performance 
using rdma methods - it's quite low level, working right off the network layer
and integrating into the HPX parcelport framework.

A chap at ETHZ has developed a library for all-to-all communication that would
fit very nicely with our libfabric implementation - it uses tree methods to
minimize the total number of messages sent and also supports offloading of some
operations to the network - if the network has logic capability (modern
networks have this capability in the switches). If the network doesn't, then an
abstraction can be created to handle it.
There’s a paper and code here
https://spcl.inf.ethz.ch/Research/Parallel_Programming/FFlib/

This would require pretty good knowledge of both the libfabric layer and also
the library mentioned, so it would be quite a hard project - but if you took it 
on and were able to get even 1 of the algorithms (allGather/AllReduce/etc) 
working inside HPX, then it would be a fantastic contribution. I have not had 
time to work on it myself, but it’s been high on my to-do list for some time.

Have a skim through that material and look at the libfabric web pages - if 
you’re willing to try, please let us know - there’s not much time till the
project deadline.

JB

From: hpx-users-boun...@stellar.cct.lsu.edu 
 On Behalf Of Alanas Plascinskas
Sent: 17 March 2018 00:46
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] GSoC 2018

Hey,

I wanted to ask about the availability of your projects for GSoC 2018.

I would mainly be interested in these projects: All to All Communications, 
Conflict (Range-Based) Locks and Newtonian Physics Sandbox, but I'm open to any
suggestions as well.

Have any of these received any serious proposals and if so do you have any 
other projects that would be important for you and are still open?

A little bit about my background (the first part of the form that you ask to 
fill):


  *   Name: Alanas Plascinskas
  *   College/University: University of Toronto (Year Abroad), University of 
Edinburgh (Home)
  *   Course/Major: Computer Science & Mathematics
  *   Degree Program: BSc
  *   Email: alanas.plascins...@gmail.com
  *   Homepage: https://www.linkedin.com/in/alanaspla/
  *   Availability:

 *   Available for the whole duration of the project (mid-May to 
late-August)
 *   What are your intended start and end dates? 14 May to 14 August
 *   What other factors affect your availability (exams, courses, moving, 
work, etc.)? I will be available throughout without any major disruptions
Best,
Alanas
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSoC : "HPX Backend for OpenCV"

2018-02-15 Thread Biddiscombe, John A.
Just an update on the project from my ‘wish list’

Task #1: have a look at OpenCV and how the threading backends are implemented
Task #2: Get a good understanding of the HPX threading framework. Make sure you 
appreciate the difference between kernel level threads and lightweight HPX 
threads. Reading from a webcam or other device might make low level calls that 
need to be done on a kernel thread rather than an hpx worker/task thread.
Task #3: Implement an HPX backend to replace OpenMP/TBB in OpenCV and test out
standard OpenCV algorithms (a rough sketch of the idea follows the task list).
Task #4: make sure that the threading implementation doesn’t violate the terms
of #2 and that HPX threads are not blocking in wait states whilst polling for
camera/image data
Task #5: understand that HPX supports thread pools and we can create a custom
thread pool for any OpenCV camera-poll work so that the problems I just alluded
to do not happen
Task #6: Put together a simple Qt demo with a GUI that displays images in a nice
GUI window and overlays some image processing data etc. I have an application I
wrote for monitoring wildlife in my garden that can be adapted for this - it
tracks movement by doing pixel comparisons after applying a bunch of filters to
the webcam images - when movement is detected, it starts recording so I can see
what happened whilst I was asleep etc. (infra-red cameras for night vision).
Task #7: Understand that Qt GUI threads can’t always be run on HPX task threads
and/or that Qt GUIs are not generally thread safe and we may need a special
thread for GUI updates (or careful use of Qt synchronization), so a thread pool
dedicated for that might/would be a good idea.

Task #0 - learn as much as you can about all of the above before gsoc starts
and write a decent proposal “in your own words” that describes how you plan to
do the above (and insert extra tasks, because I wrote this fast and skipped
lots of details).
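
To make Task #3 a bit more concrete, here is a rough, untested sketch of how an
HPX-based splitter for OpenCV's parallel loop bodies might look (the function
name is made up, this is not OpenCV's actual backend plugin interface, and the
header paths are from memory):

#include <hpx/include/parallel_for_loop.hpp>
#include <opencv2/core.hpp>
#include <algorithm>

// Split the OpenCV range into stripes and run each stripe as an HPX task.
void hpx_parallel_for_(const cv::Range& range,
    const cv::ParallelLoopBody& body, int nstripes)
{
    int len = range.end - range.start;
    if (len <= 0)
        return;
    nstripes = std::max(1, std::min(nstripes, len));
    int chunk = (len + nstripes - 1) / nstripes;

    hpx::parallel::for_loop(hpx::parallel::execution::par, 0, nstripes,
        [&](int s)
        {
            int lo = range.start + s * chunk;
            int hi = std::min(lo + chunk, range.end);
            if (lo < hi)
                body(cv::Range(lo, hi));   // invoke the OpenCV loop body
        });
}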

Feel free to ask questions on IRC or here - I’ll update the task list above if 
I think of more - and note that #1-#5 are the main essentials and #6-#7 are 
bonus work for anyone who makes good progress on the first tasks.

JB




From: Marcin Copik [mailto:mco...@gmail.com]
Sent: 15 February 2018 18:02
To: Ashish Jha <amiableashis...@gmail.com>; hpx-users@stellar.cct.lsu.edu
Cc: Patrick Diehl <patrickdie...@gmail.com>; Biddiscombe, John A. 
<biddi...@cscs.ch>
Subject: Re: GSoC : "HPX Backend for OpenCV"

Dear Ashish,

cc'ing this message to the mailing list to let the other mentors and the
community chime in and propose their suggestions.

The project idea gives a link to the existing implementation of parallel task 
processing in OpenCV. It's a good starting point since understanding how OpenCV 
currently handles the parallelization is necessary to propose an integration 
scheme for HPX. You may also want to take a look at the guide to writing a 
successful proposal[1].

Best regards,
Marcin Copik

[1] - https://github.com/STEllAR-GROUP/hpx/wiki/Hints-for-Successful-Proposals
On Wed, 14 Feb 2018 at 16:38, Ashish Jha <amiableashis...@gmail.com> wrote:
Respected Sir,
I am Ashish Jha, 3rd-year computer science undergrad at NIT Rourkela.

I have worked on an OpenMP project and have recently been introduced to OpenCV. I
would love to work on the HPX Backend for OpenCV project.

Can you please guide me where to start?
I have exams until 27th February, so I can devote up to 1 or 2 hours to the
project; after that, I can work on the project full time.

Attached is my CV.

--
Thanking you,
Ashish Kumar Jha
3rd year CSE Undergrad,
National Institute of Technology, Rourkela.
Contact: +91-9430682550
Facebook <https://www.facebook.com/amiable.ashish> |
LinkedIn <https://www.linkedin.com/in/ashish-jha-6bb7b3101>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSOC Generic Histogram performance counter project

2018-02-05 Thread Biddiscombe, John A.
just as a "by the way" - some time back, someone proposed a boost histogram 
class, that I played with (no idea if it was accepted into boost), but I added 
it to the libfabric parcelport on my branch so that I could dump out histograms 
of the parcel sizes that were passing through the parcelport.

Would be very nice to see something like this available off the shelf in 
hpx/perf counters
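
The usage was roughly along these lines (a rough sketch against the
Boost.Histogram interface as it later shipped in Boost >= 1.70; the bin count
and range are arbitrary):

#include <boost/histogram.hpp>
#include <iostream>

int main()
{
    namespace bh = boost::histogram;

    // 16 regular bins covering parcel sizes from 0 to 64 KB (arbitrary choice)
    auto h = bh::make_histogram(
        bh::axis::regular<>(16, 0.0, 65536.0, "parcel size"));

    // fill with the size of each parcel as it passes through the parcelport
    for (double size : {128.0, 512.0, 4096.0, 4096.0, 60000.0})
        h(size);

    // dump the bins and their counts
    for (auto&& b : bh::indexed(h))
        std::cout << "[" << b.bin().lower() << ", " << b.bin().upper()
                  << ") : " << *b << "\n";
    return 0;
}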

JB

-Original Message-
From: hpx-users-boun...@stellar.cct.lsu.edu 
[mailto:hpx-users-boun...@stellar.cct.lsu.edu] On Behalf Of Hartmut Kaiser
Sent: 04 February 2018 19:22
To: 'Tanmay Tirpankar' 
Cc: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] GSOC Generic Histogram performance counter project

Hey Tanmay,

> I was going through your GSOC 2018 project ideas and I was thinking of
> working on these 2 projects:
> 1) Add More Arithmetic Performance Counters
> 2) Create Generic Histogram Performance Counter
> I don't have a lot of experience in parallel computing but I would like to
> learn more about it through this project.

Cool! I think the first (the arithmetic counters) have been implemented by now, 
but the generic histogram counter is a nice, small, and self-contained project.

I'm cc'ing this response to hpx-users@stellar.cct.lsu.edu and I'd suggest that 
all follow-on discussions are done over there. Feel free to post your questions 
on that list as this may yield response from other people as well.

Alternatively, our main communication platform is IRC, feel free to hop on at 
#ste||ar on freenode.net (see here for more communication possibilities: 
https://github.com/STEllAR-GROUP/hpx/blob/master/.github/SUPPORT.md)

Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu


___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] Resource partitioner error

2017-12-24 Thread Biddiscombe, John A.
I noticed in the IRC chat that Steven had the error
"RuntimeError: partitioner::add_resource: Creation of 2 threads requested by 
the resource partitioner, but only 1 provided on the command-line."

in my experience, this always happens when you try to run an application 
compiled with
HPX_WITH_MAX_CPU_COUNT=64
HPX_WITH_MORE_THAN_64_THREADS=ON
on a machine that has more than 64 cores.

If that turns out to be the problem, please file an issue so we remember to
fix it.

JB

--
John Biddiscombe, email: biddisco @.at.@ cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre  | Tel:  +41 (91) 610.82.07
Via Trevano 131, 6900 Lugano, Switzerland   | Fax:  +41 (91) 610.82.82

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Custom Scheduler & Priority Queue Backend

2017-10-26 Thread Biddiscombe, John A.
>
By all means, no! Quite the opposite. I noticed that for just a single call of
async my queue backend received multiple insert operations, especially when
used on the default pool.
From that I concluded that the thread_data* pushed to the Pending_Queueing
queue do not all originate from user-code calls of async (and the like). That
means there are thread_data elements that hold coroutines with parameters that
I have no control over. Is that correct?
How could I differentiate between thread_data elements created by user-level 
async calls and thread_data elements created by the system?
<

Aha, now I understand what you mean. Sadly the tasks generated internally by
the system are not distinguishable from user tasks. The way to do it, would be 
to create a custom executor that handled async/then etc (I'm doing that now and 
it's a huge PITA). Then you will only get the tasks that are posted to your own 
executor and not see the ones generated by the system.

A custom pool, with your scheduler, should not see system generated tasks; you
could leave just one thread for the default pool and use the rest in yours.

We do not yet support it, but the plan is to allow oversubscription - so that 
you could have 9 threads on an 8 core machine, a single one in the default 
pool, then 8 in yours. Then there'd be a bit of overhead with the OS swapping 
those threads out, but it ought to work  well enough. I have not tried doing 
this yet, but it is planned.
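
Something along these lines is what I mean (a rough, untested sketch; the exact
resource partitioner API, header paths and init call differ a little between HPX
versions, and a user-defined scheduler would be plugged in through the
pool-creation overload rather than the built-in policy shown here):

#include <hpx/hpx_init.hpp>
#include <hpx/include/resource_partitioner.hpp>

int hpx_main(int argc, char* argv[])
{
    // work submitted to the "custom" pool (e.g. via a pool executor) goes here
    return hpx::finalize();
}

int main(int argc, char* argv[])
{
    hpx::resource::partitioner rp(argc, argv);

    // built-in policy shown here; a user-defined scheduler would be attached
    // through the pool-creation overload instead
    rp.create_thread_pool("custom",
        hpx::resource::scheduling_policy::local_priority_fifo);

    // leave the first core in the default pool, give everything else to "custom"
    bool first_core = true;
    for (auto const& d : rp.numa_domains())
    {
        for (auto const& c : d.cores())
        {
            if (first_core) { first_core = false; continue; }
            for (auto const& p : c.pus())
                rp.add_resource(p, "custom");
        }
    }

    return hpx::init();
}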


>
For all intents and purposes the extra parameter can be handled like it was 
just another parameter to be passed to the action. If need be, I can package 
all actions to have an additional first parameter, which is not passed on to 
the packaged action. My problem is: Can I access the parameters of the function 
represented by a thread_data element via that thread_data element?
<

Not really. I tried that before I started on a custom executor.


Re: priorities and early ray termination - I will ponder this some more.


JB
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Custom Scheduler & Priority Queue Backend

2017-10-26 Thread Biddiscombe, John A.
Kilian

>
Is the base class thread_queue safe to use with all worker threads accessing 
the same object?
<

Yes but be aware that when you run your code on a KNL machine (for example), 
then you may have 272 threads all trying to take work from the same queue. One 
of the reasons we maintain multiple queues is because the effects of contention 
start to become noticeable/significant when this kind of scale is used. This 
problem is only going to get worse as we get more cores per chip as standard,
with atomics used internally to control access.

>
For the correct use of my priority queue backend one of course needs a priority 
associated with each scheduled thread.
However, when used for the thread_queue PendingQueueing, the queue backend is 
only handed a
hpx::thread::thread_data*

Is it possible -and if yes, how would you go about it- to use the information 
in thread_data to
1.) identify whether a scheduled thread represents a user provided function
(e.g. actions started with async and
such) or if it emerged otherwise (without the user being in charge of the
coroutine's parameters)
<

You want to directly insert tasks into the queues without going via async? If 
you go down this path - then you break the nice clean C++ programming model 
we're working so hard to create!
If you did that, just add a flag to the thread data and set it to 0 by default
- set it to 1 for your tasks.

>
2.) access a user provided priority for the thread

For 2.) I thought about just using the enum thread_data::thread_priority as an 
integer, but this would require the creation of one executor for each desired 
priority value...
I'd rather pass the priority as an argument in hpx::async and the like (just 
like policies, executors, target components etc.) but can I unpack a parameter 
through the
thread_data* ?
<
The obvious way is to make thread_priority an int and use one executor per
priority. You could make a table of executors at start, with 256 of them used
via lookup. They are fairly low cost so it wouldn't be a big deal. If you need
65536 or more, then I guess that's not such a smart idea.

Adding an extra async parameter is going to require quite a lot of work to make 
sure all the overloads do the right thing. Realistically - how many priorities 
do you think you'll need? I have done some quite serious linear algebra using 
high and normal priority threads only and the shared_priority_scheduler is 
working pretty well - more important than the thread priority is the 
dependencies you create between tasks.

Can you describe where the priorities are coming from that makes you want them
so precisely?

JB

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] verbs support?

2017-05-09 Thread Biddiscombe, John A.
Chris

First question : Which version of hpx are you using?

the verbs stuff was working around the January time frame and then we started 
adding libfabric support and in the process broke a few things - they might not 
have been on master branch at the time, but there has been a lot of messing 
around and it's entirely possible verbs did get broken.

In short - if you're not using the master branch from git - then try that 
before anything else.

I will build it after clicking send and test it on my infiniband machine to
make sure it's working

JB


From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of ct clmsn 
[ct.cl...@gmail.com]
Sent: 09 May 2017 17:42
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] verbs support?

hpx users,

I've attempted to do a shared and static build of hpx with the verbs parcelport
and have had a handful of build issues. I'll have a patch prepared to handle
issues related to static-verbs compilation dependencies (libnl and libnuma) 
very soon.

there is one issue that shows up in both the verbs static and shared build.

in the verbs shared build, an error occurs when linking libhpx.so (~ 83%). the 
error claims parcelport_verbs.cpp.o has several undefined references to 
code/symbols found in the rdma_controller's source.

In the verbs static build, similar errors are thrown when linking the
component_storage. Specifically, the linking errors are related to
../lib/libhpx.a's invocation_count_registry not being able to find symbols for
boost::regex::transform and boost::regex::transform_primary (a boost regex
issue). On the verbs/rdma side of things, ../lib/libhpx.a's
parcelport_verbs.cpp can't find any of the rdma_controller symbols.

any advice or speculation as to the best way for fixing these issues would be 
appreciated (I'm using cmake 3.6.1). i suspect the issue is related to the rdma 
controller. rdma should be built and listed before the parcelport_verbs object 
code during linking.

thanks in advance!

chris
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] linking/cmake problem

2017-03-15 Thread Biddiscombe, John A.
The suggestion on that bug is 

set(CUDA_LIBRARIES PUBLIC ${CUDA_LIBRARIES})



On 15.03.17, 23:48, "hpx-users-boun...@stellar.cct.lsu.edu on behalf of Hartmut 
Kaiser"  wrote:


> On 2017 M03 15, Wed 17:07:54 CET Hartmut Kaiser wrote:
> > What version of HPX are you using. I thought we did fix this a long time
> ago
> > :/
> 
> master from github.
> It is handled in src/CMakeLists.txt, but as far I can see not in
> examples/qt/
> and tools/, both directories are disabled by default, so you only hit the
> problem once you enable those options.

Ok, so we can fix it for those directories in the same way as we handled it
for the others.

Thanks!
Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu



___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users




Re: [hpx-users] linking/cmake problem

2017-03-15 Thread Biddiscombe, John A.
Just fyi
https://gitlab.kitware.com/cmake/cmake/issues/16602

Setting CMP0023 is what I do locally until CMake's FindCUDA is fixed.

JB

On 15.03.17, 23:25, "hpx-users-boun...@stellar.cct.lsu.edu on behalf of 
Alexander Neundorf"  wrote:

On 2017 M03 15, Wed 17:07:54 CET Hartmut Kaiser wrote:
> What version of HPX are you using. I thought we did fix this a long time 
ago
> :/

master from github.
It is handled in src/CMakeLists.txt, but as far I can see not in 
examples/qt/ 
and tools/, both directories are disabled by default, so you only hit the 
problem once you enable those options.

Alex

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users




Re: [hpx-users] [gsoc17] Getting started Concurrent Data structure Support project

2017-03-13 Thread Biddiscombe, John A.

>
Sad that intel TBB isn't open source anymore.
<

What do you know that I have missed?

https://www.threadingbuildingblocks.org/Licensing
and
https://github.com/01org/tbb

the catch from our point of view is that we only release code under the boost 
license, so we can’t include code that requires us to use another license as 
well.

JB


___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] Apex/HPX task trace question

2017-03-06 Thread Biddiscombe, John A.
http://pasteboard.co/GcUDlW4fi.png

In this image, we create 6 GPU tasks that insert commands onto a cuda stream. 
They return a future that will be set ready when the GPU stream event comes in 
(some time later). When the GPU tasks start, the coloured bars look ok, but the
tasks themselves take very little time and return a future almost immediately;
then they start bouncing around and producing millions of tiny strips in the
task trace.
This is whilst there is no work to be done and the idle queues are looking for
work and waiting for the GPU futures to be set ready. Then they all abruptly
stop and "worker thread" (= idle) kicks in. Why does it look like this? Why do
we see all these small bars of tasks that are not 'real' during this time, when
the CPUs are just waiting for cuda streams to set futures ready? What's going on in
apex/hpx to cause this effect?

NB. The GPU futures are set ready, but that does not happen until a point in 
time way off to the right of this graph.

Any insights welcome

Thanks

JB



--
John Biddiscombe, email: biddisco @.at.@ cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre  | Tel:  +41 (91) 610.82.07
Via Trevano 131, 6900 Lugano, Switzerland   | Fax:  +41 (91) 610.82.82

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] FW: Help for GSoC Project on Concurrent data structures support

2017-03-01 Thread Biddiscombe, John A.
I’m forwarding this email to the list just to keep other informed of the 
discussion

From: Biddiscombe, John A.
Sent: 01 March 2017 09:32
To: 'SATYADEV CHANDAK'
Subject: RE: Help for GSoC Project on Concurrent data structures support

Satyadev

How familiar are you with concurrent containers? If you know the basics, then 
I’d suggest taking a look at the concurrent data structures branch of hpx

https://github.com/STEllAR-GROUP/hpx/tree/concurrent_data_structures/hpx/concurrent

and examining the small amount of work that has been started. also, see

https://github.com/STEllAR-GROUP/hpx/blob/master/plugins/parcelport/verbs/unordered_map.hpp
https://github.com/STEllAR-GROUP/hpx/blob/master/plugins/parcelport/verbs/readers_writers_mutex.hpp
(newer versions of stuff in the concurrent branch)

Those are what I have been working on more recently. They are nasty because 
they just use a normal map with a lock around it, with the bonus that the lock 
is a readers/writers lock, so you can concurrently have N readers or 1 writer 
(great if the map is only modified rarely but accessed frequently). The ability 
for the user to take a lock before iterating is also included. The 
readers/writers mutex is hand-rolled from some code off the web, but we really 
should use std::shared_lock for the readers and an upgrade-to-exclusive lock 
for the writer.
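
As a rough illustration of that direction, here is a minimal sketch of a 
readers/writers-protected map using C++17's std::shared_mutex (the wrapper name 
and interface are invented for illustration; this is not what is in the 
branch):

    #include <map>
    #include <mutex>
    #include <shared_mutex>

    // Many concurrent readers or a single writer; illustrative only.
    template <typename Key, typename Value>
    class guarded_map
    {
        mutable std::shared_mutex mtx_;
        std::map<Key, Value> map_;

    public:
        void set(Key const& k, Value v)
        {
            std::unique_lock<std::shared_mutex> wl(mtx_);  // exclusive: one writer
            map_[k] = std::move(v);
        }

        bool get(Key const& k, Value& out) const
        {
            std::shared_lock<std::shared_mutex> rl(mtx_);  // shared: many readers
            auto it = map_.find(k);
            if (it == map_.end())
                return false;
            out = it->second;
            return true;
        }
    };

Note that std::shared_mutex has no atomic reader-to-writer upgrade, so the 
"upgrade lock to exclusive" part would still need something like Boost's 
upgrade_lock, or a release-and-reacquire.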

You should have a look at the concurrent vector and think about how to improve 
it. We really need vector and map (set is less important) to be properly 
concurrent if possible, and they need unit tests and integration into the main 
branch of HPX.

Have a look at the material and reading about concurrent structures on the web. 
Then you need a plan of
where to start
what to do
how to do it
what are the objectives

that kind of thing

I’m a bit busy today and on vacation for a few days after this, but send more 
questions if you have them and I’ll answer when I can

yours

JB

--
John Biddiscombe,email:biddisco @.at.@ cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre  | Tel:  +41 (91) 610.82.07
Via Trevano 131, 6900 Lugano, Switzerland   | Fax:  +41 (91) 610.82.82



From: SATYADEV CHANDAK [mailto:satyadev17chan...@gmail.com]
Sent: 01 March 2017 05:43
To: Biddiscombe, John A.
Subject: Help for GSoC Project on Concurrent data structures support

Hi Sir.
I am willing to work on the Concurrent data structures support project by the 
STE||AR group for GSoC. Can you please give me some tasks that will help me in 
writing a better proposal for the project? Thanks in advance :)

Thanks and Regards
Satyadev Chandak.
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] archive data bstream data chunk size mismatch: HPX(serialization_error)

2017-01-09 Thread Biddiscombe, John A.
I am seeing this error on a Cray KNL machine, compiled with the Intel compiler 
in release mode (but not in debug mode). (The problem does not change or go 
away when I modify the AVX flags passed to the compiler, which apparently can 
affect alignment.)

A parcel is serialized with a size of 0x8d bytes and transmitted, but when it 
is deserialized, the receiver tries to read 0x96 bytes from the stream and 
throws the exception "{what}: archive data bstream is too short: 
HPX(serialization_error)"

Ideas on how to track down the problem are welcome.

JB





-Original Message-
From: hpx-users-boun...@stellar.cct.lsu.edu 
[mailto:hpx-users-boun...@stellar.cct.lsu.edu] On Behalf Of Thomas Heller
Sent: 12 December 2016 10:27
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] archive data bstream data chunk size mismatch: 
HPX(serialization_error)

On Montag, 12. Dezember 2016 10:12:11 CET Tim Biedert wrote:
> Thank you for the feedback!   Unfortunately, this hasn't solved the
> deserialization error.  However, I'm sure this has made the code more 
> robust! ;-)
> 
> What do you suggest to localize the deserialization error?  Is it 
> possible to determine which action would have been called with the 
> deserialized data?

Yes, it is. Start your application with --hpx:attach-debugger=exception.

Eventually you'll get a message that an exception has occurred, along with the 
PID and the host where the exception occurred. Once you have attached your 
debugger, for example gdb, you have to figure out which thread threw the 
exception (info threads). The thread that sits in the sleep function is the one 
you are looking for. Switch to it and inspect the backtrace.

Does this help?

> 
> Best,
> 
> Tim
> 
> On 12/09/2016 04:20 PM, Hartmut Kaiser wrote:
> > Tim,
> > 
> > Sorry, this should have been:
> >  hpx::apply_cb(
> >      recipient,
> >      [this](boost::system::error_code const&,
> >          hpx::parcelset::parcel const&) mutable
> >      { delete this; },
> >      this->image);
> > 
> > I.e. the callback needs to conform to this prototype:
> >  void callback(
> >      boost::system::error_code const&,
> >      hpx::parcelset::parcel const&);
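
Putting the two pieces together, a self-contained sketch; the action, payload 
and class names are invented for illustration, while the apply_cb overload and 
callback signature are the ones shown above:

    #include <hpx/hpx_main.hpp>
    #include <hpx/hpx.hpp>
    #include <boost/system/error_code.hpp>
    #include <iostream>
    #include <vector>

    // Stand-in for the image payload: any serializable type works.
    void consume_image(std::vector<char> const& image)
    {
        std::cout << "received " << image.size() << " bytes\n";
    }
    HPX_PLAIN_ACTION(consume_image, consume_image_action);

    // A sender that keeps its buffer alive until the parcel layer reports
    // that the data has been taken over, then deletes itself.
    struct image_sender
    {
        std::vector<char> image = std::vector<char>(1024, 'x');

        void send(hpx::id_type const& recipient)
        {
            hpx::apply_cb<consume_image_action>(
                recipient,
                [this](boost::system::error_code const&,
                    hpx::parcelset::parcel const&) mutable
                { delete this; },      // now safe to release the send buffer
                image);
        }
    };

    int main()
    {
        (new image_sender())->send(hpx::find_here());
        return 0;
    }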
> > 
> > HTH
> > Regards Hartmut
> > ---
> > http://boost-spirit.com
> > http://stellar.cct.lsu.edu
> > 
> >> -Original Message-
> >> From: Hartmut Kaiser [mailto:hartmut.kai...@gmail.com]
> >> Sent: Friday, December 9, 2016 9:09 AM
> >> To: 'hpx-users@stellar.cct.lsu.edu' 
> >> Subject: RE: [hpx-users] archive data bstream data chunk size mismatch:
> >> HPX(serialization_error)
> >> 
> >> Tim,
> >> 
> >>> could the following two lines cause the issue?
> >>> 
> >>> hpx::apply(recipient, this->image);
> >>>
> >>> delete this;
> >>> 
> >>> Basically, we're invoking a fire and forget action with a member 
> >>> variable (which will be serialized) as parameter. Afterwards, the
> >>> instance is directly deleted.   I guess the serialization/parameter
> >>> transmission of the action does not happen right away and we're 
> >>> deleting the send buffer too early.
> >>> 
> >>> How can we know - without using async() and waiting for the future 
> >>> - that the "fire part" of fire and forget has been completed and 
> >>> we can delete the send buffer?
> >> 
> >> You might want to do this instead:
> >>  hpx::apply_cb(
> >>      recipient,
> >>      [this]() mutable { delete this; },
> >>      this->image);
> >> 
> >> instead.
> >> 
> >> hpx::apply_cb calls the supplied function whenever it's safe to 
> >> release the data.
> >> 
> >> Regards Hartmut
> >> ---
> >> http://boost-spirit.com
> >> http://stellar.cct.lsu.edu
> > 
> > ___
> > hpx-users mailing list
> > hpx-users@stellar.cct.lsu.edu
> > https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX build fail

2016-12-29 Thread Biddiscombe, John A.
PS. You may find the settings on this page useful; they are what I used after 
the CSCS Cray was upgraded a few weeks ago:

https://github.com/biddisco/biddisco.github.io/wiki/daint

Don’t look at the settings below the “older stuff” heading as they may be 
obsolete.

JB

From:  on behalf of cscs
Reply-To: "hpx-users@stellar.cct.lsu.edu"
Date: Thursday 29 December 2016 at 17:59
To: "hpx-users@stellar.cct.lsu.edu"
Subject: Re: [hpx-users] HPX build fail

On Cray machines you need to add the link flag -dynamic.


From:  on behalf of Justs Zarins
Reply-To: "hpx-users@stellar.cct.lsu.edu"
Date: Thursday 29 December 2016 at 16:40
To: "hpx-users@stellar.cct.lsu.edu"
Subject: Re: [hpx-users] HPX build fail

Hi Hartmut,

I’m attaching my build script, that has most of the relevant information.
I’m using gcc 6.2.0

Regards,
Justs

From: Hartmut Kaiser
Reply-To: "hartmut.kai...@gmail.com"
Date: Thursday, December 29, 2016 at 3:30 PM
To: "hpx-users@stellar.cct.lsu.edu"
Cc: Justs Zarins
Subject: RE: [hpx-users] HPX build fail

Hey Justs,

Could you please give us a bit more information about how you built things? 
What compiler, what Boost version, what command lines, etc.?

Thanks!
Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu

From: hpx-users-boun...@stellar.cct.lsu.edu [mailto:hpx-users-boun...@stellar.cct.lsu.edu] On Behalf Of Justs Zarins
Sent: Thursday, December 29, 2016 9:10 AM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] HPX build fail

Hello,

I’m trying to build HPX on a Cray XC40 machine. The build is failing when 
linking:

[100%] Built target iostreams_component
Linking CXX executable ../bin/hpx_runtime
/usr/bin/ld: attempted static link of dynamic object `../lib/libhpx.so.0.9.99'
collect2: error: ld returned 1 exit status
make[2]: *** [bin/hpx_runtime] Error 1
make[1]: *** [runtime/CMakeFiles/hpx_runtime_exe.dir/all] Error 2


Has cmake generated incorrect Makefiles?

I’ve tried to set -DHPX_WITH_STATIC_LINKING=ON but this results in a different 
linking error:

[100%] Building CXX object 
runtime/CMakeFiles/hpx_runtime_exe.dir/hpx_runtime.cpp.o
Linking CXX executable ../bin/hpx_runtime
../lib/libhpx.a(runtime_support_server.cpp.o): In function 
`hpx::util::plugin::dll::LoadLibrary(hpx::error_code&, bool) [clone 
.constprop.2162]':
runtime_support_server.cpp:(.text+0x147a): warning: Using 'dlopen' in 
statically linked applications requires at runtime the shared libraries from 
the glibc version used for linking
/usr/bin/ld: attempted static link of dynamic object 
`/lus/scratch/jzarins/lib/tcmalloc/lib/libtcmalloc_minimal.so'
collect2: error: ld returned 1 exit status
make[2]: *** [bin/hpx_runtime] Error 1
make[1]: *** [runtime/CMakeFiles/hpx_runtime_exe.dir/all] Error 2
make: *** [all] Error 2


Any pointers?

Regards,
Justs



___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX build fail

2016-12-29 Thread Biddiscombe, John A.
On Cray machines you need to add the link flag -dynamic.


From:  on behalf of Justs Zarins
Reply-To: "hpx-users@stellar.cct.lsu.edu"
Date: Thursday 29 December 2016 at 16:40
To: "hpx-users@stellar.cct.lsu.edu"
Subject: Re: [hpx-users] HPX build fail

Hi Hartmut,

I’m attaching my build script, that has most of the relevant information.
I’m using gcc 6.2.0

Regards,
Justs

From: Hartmut Kaiser
Reply-To: "hartmut.kai...@gmail.com"
Date: Thursday, December 29, 2016 at 3:30 PM
To: "hpx-users@stellar.cct.lsu.edu"
Cc: Justs Zarins
Subject: RE: [hpx-users] HPX build fail

Hey Justs,

Could you please give us a bit more information about how you built things? 
What compiler, what Boost version, what command lines, etc.?

Thanks!
Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu

From: hpx-users-boun...@stellar.cct.lsu.edu [mailto:hpx-users-boun...@stellar.cct.lsu.edu] On Behalf Of Justs Zarins
Sent: Thursday, December 29, 2016 9:10 AM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] HPX build fail

Hello,

I’m trying to build HPX on a Cray XC40 machine. The build is failing when 
linking:

[100%] Built target iostreams_component
Linking CXX executable ../bin/hpx_runtime
/usr/bin/ld: attempted static link of dynamic object `../lib/libhpx.so.0.9.99'
collect2: error: ld returned 1 exit status
make[2]: *** [bin/hpx_runtime] Error 1
make[1]: *** [runtime/CMakeFiles/hpx_runtime_exe.dir/all] Error 2


Has cmake generated incorrect Makefiles?

I’ve tried to set -DHPX_WITH_STATIC_LINKING=ON but this results in a different 
linking error:

[100%] Building CXX object 
runtime/CMakeFiles/hpx_runtime_exe.dir/hpx_runtime.cpp.o
Linking CXX executable ../bin/hpx_runtime
../lib/libhpx.a(runtime_support_server.cpp.o): In function 
`hpx::util::plugin::dll::LoadLibrary(hpx::error_code&, bool) [clone 
.constprop.2162]':
runtime_support_server.cpp:(.text+0x147a): warning: Using 'dlopen' in 
statically linked applications requires at runtime the shared libraries from 
the glibc version used for linking
/usr/bin/ld: attempted static link of dynamic object 
`/lus/scratch/jzarins/lib/tcmalloc/lib/libtcmalloc_minimal.so'
collect2: error: ld returned 1 exit status
make[2]: *** [bin/hpx_runtime] Error 1
make[1]: *** [runtime/CMakeFiles/hpx_runtime_exe.dir/all] Error 2
make: *** [all] Error 2


Any pointers?

Regards,
Justs



___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] hpx::util::verify_no_locks

2016-10-18 Thread Biddiscombe, John A.
Kilian

Just as a follow-up on this: I found my code bombing out during startup with
the “lock held whilst suspending” message too. I reverted a recent commit on
HPX master (the merge of the cv race fix) and the lock error went away. You
might have been unlucky and got a checkout of HPX during a possibly temporary
buggy moment. (I actually did a reset --hard to the commit before the merge to
solve my issue.)

JB

On 13/10/16 17:30, "hpx-users-boun...@stellar.cct.lsu.edu on behalf of
Hartmut Kaiser"  wrote:

>Kilian,
>
>> I am not 100% certain that the suspending thread is not
>> holding any locks; trusting your check, something seems to
>> have slipped my attention.
>> However, I am not acquiring that many locks, and each of
>> them is taken with the std::lock_guard
>> l(mtx); syntax.
>> This means that all held locks must have been acquired in a
>> method still on the stack trace, since intermediately called
>> methods that have already finished execution cannot hold any
>> locks beyond their scope... right?
>
>Only if a lock was not released on scope exit.
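
A minimal sketch of the situation the check complains about (names invented; 
as far as I understand, only HPX's own lock types, e.g. 
hpx::lcos::local::spinlock, are registered with the lock detection, while a 
plain std::mutex is not tracked):

    #include <hpx/hpx.hpp>
    #include <mutex>

    hpx::lcos::local::spinlock mtx;

    int compute()
    {
        hpx::future<int> f = hpx::async([]() { return 42; });
        return f.get();    // may suspend the current HPX thread
    }

    int caller()
    {
        // the guard is still held while compute() suspends on the future,
        // so the check fires even though compute() itself takes no lock
        std::lock_guard<hpx::lcos::local::spinlock> l(mtx);
        return compute();
    }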
>
>> >> Is there a way to query the number and/or nature of the
>> >> locks currently held by the executing thread?
>> >
>> > Not at this point, at least not in the API. Do you need
>> > this functionality?
>> 
>> This conflict between my not observing any held locks and
>> the check firing is the motivation for wanting to query all
>> locks held in the current context.
>> I don't really need this for the functionality of the
>> application, but it would come in handy during debugging,
>> since I have apparently lost track of my locks somewhere.
>
>Would you mind creating a ticket describing your feature request?
>
>> Thank you for explaining the decision to check for any
>> locks before suspending. While I think the decision is
>> very plausible, it's still good that the check can be
>> disabled so flexibly.
>
>I agree, we use the disabling facilities inside HPX ourselves.
>
>> Thank you for making it clear that no two HPX threads can
>> have any influence on each other regarding locks. I am
>> therefore limiting my search for untracked locks to the
>> methods on the stack trace.
>> 
>> Btw, the HPX version we are using was compiled a few days
>> ago from the master branch.
>
>Ok. I'm still not 100% convinced that this isn't a problem caused by HPX
>itself. Please keep your eyes open and let us know. At the same time, I'd
>be happy to have a look at this if you were able to create a small
>reproducing test case.
>
>Regards Hartmut
>---
>http://boost-spirit.com
>http://stellar.cct.lsu.edu
>
>
>
>___
>hpx-users mailing list
>hpx-users@stellar.cct.lsu.edu
>https://mail.cct.lsu.edu/mailman/listinfo/hpx-users

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] compile error -lPUBLIC

2016-04-26 Thread Biddiscombe, John A.
cmake_minimum_required(VERSION 2.8.12 FATAL_ERROR)

is what we specify in the top-level CMakeLists.txt; if the user is using
2.8.11 it should flag an error.

The PUBLIC interface keywords were added in CMake 3.0, so we should bump our
minimum version to 3.

JB


On 26/04/16 22:10, "hpx-users-boun...@stellar.cct.lsu.edu on behalf of
Hartmut Kaiser"  wrote:

>
>> Really appreciate the assistance!
>> 
>> - compiling with cmake 2.8.11 on a CentOS7 system
>> 
>> - build steps
>> 
>> 1) created a subdirectory in the hpx source directory called 'build'
>> 2) cd build
>> 3) cmake -DCMAKE_INSTALL_PREFIX=~/opt ..
>> 4) make -j4
>> When grepping in the 'build' subdirectory for the word PUBLIC, there are
>> multiple references to -lPUBLIC. The library is usually listed between -
>> lhwloc and -ldl in all the 'link.txt' files generated by cmake.
>> 
>> The cmake log is the output generated when running `make VERBOSE=1 -j4`?
>
>The cmake logs is the output generated when you run cmake (step 3). The
>build logs are those generated when you run make (step 4).
>
>> Thanks!
>
>>From what I can see from the cmake docs for version 2.8.11
>>(https://cmake.org/cmake/help/v2.8.11/cmake.html), the
>>PUBLIC/PRIVATE/INTERFACE keywords are supported by that version. So I'm
>>still unsure what's going on. Would you mind running cmake again (after
>>deleting the CMakeCache.txt) with -DHPX_CMAKE_LOGLEVEL=Debug, and
>>sending the log as well?
>
>Regards Hartmut
>---
>http://boost-spirit.com
>http://stellar.cct.lsu.edu
>
>
>> ct
>> 
>> 
>> On Tue, Apr 26, 2016 at 2:23 PM, Hartmut Kaiser
>>
>> wrote:
>> Resending...
>> 
>> > I'm trying to compile HPX 0.9.11 and have been having issues when the
>> > cmake scripts attempt to link libhpx.so. ld complains that it can't
>> > find -lPUBLIC.
>> >
>> > Is -lPUBLIC an internal library or an external dependency? (If it's
>> > external, what package would be required to satisfy the requirement; if
>> > it's internal, should I just go ahead and try building 0.9.12?)
>> > thanks in advance for the assistance!
>> 
>> We don't refer to any library named 'PUBLIC'. No idea where this is
>> coming from.
>> Could you please provide us with full logs of your cmake and build steps
>> (including command line options for those)?
>> 
>> Also, as Lars mentioned, it could be caused by a cmake incompatibility
>> we're not aware of. What version of cmake do you use?
>> 
>> Thanks!
>> Regards Hartmut
>> ---
>> http://boost-spirit.com
>> http://stellar.cct.lsu.edu
>> 
>> 
>
>
>___
>hpx-users mailing list
>hpx-users@stellar.cct.lsu.edu
>https://mail.cct.lsu.edu/mailman/listinfo/hpx-users

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users