Kilian

>
Is the base class thread_queue safe to use with all worker threads accessing 
the same object?
<

Yes, but be aware that when you run your code on a KNL machine (for example), 
you may have 272 threads all trying to take work from the same queue. One of 
the reasons we maintain multiple queues is that the effects of contention 
become noticeable/significant at that kind of scale, even with atomics used 
internally to control access. This problem is only going to get worse as core 
counts per chip keep rising.

>
For the correct use of my priority queue backend one of course needs a priority 
associated with each scheduled thread.
However, when used for the thread_queue PendingQueueing, the queue backend is 
only handed a
hpx::thread::thread_data*

Is it possible (and if so, how would you go about it) to use the information 
in thread_data to
1.) identify whether a scheduled thread represents a user-provided function 
(e.g. actions started with async and such) or whether it emerged otherwise 
(without the user being in charge of the coroutine's parameters)
<

You want to insert tasks directly into the queues without going via async? If 
you go down that path, then you break the nice clean C++ programming model 
we're working so hard to create!
If you do it anyway, just add a flag to the thread data, set it to 0 by 
default, and set it to 1 for your tasks.

>
2.) access a user provided priority for the thread

For 2.) I thought about just using the enum thread_data::thread_priority as an 
integer, but this would require the creation of one executor for each desired 
priority value...
I'd rather pass the priority as an argument to hpx::async and the like (just 
like policies, executors, target components, etc.), but can I unpack a 
parameter through the
thread_data* ?
<
The obvious way is to make thread_priority an int and use one executor per 
priority. You could build a table of executors at startup, say 256 of them, 
accessed via lookup. They are fairly low cost, so it wouldn't be a big deal. If 
you need 65536 or more, then I guess that's not such a smart idea.

Adding an extra async parameter is going to require quite a lot of work to make 
sure all the overloads do the right thing. Realistically, how many priorities 
do you think you'll need? I have done some quite serious linear algebra using 
only high and normal priority threads, and the shared_priority_scheduler is 
working pretty well. More important than the thread priority are the 
dependencies you create between tasks.

Can you describe what the priorities are coming from that makes you want them 
so precisely?

JB

_______________________________________________
hpx-users mailing list
[email protected]
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
