Riccardo,
> I have some conceptual trouble with atomics (admittedly principally related
> to C++11); however, I would be curious how these issues are solved within HPX.
> Let's imagine I have a vector
> std::vector<double> a(100, 0.0);
> which I specifically want to be of "doubles". I DO NOT want to make it
> std::vector< std::atomic<double> >.
> I now wish to update a value of the vector atomically, i.e. to do something
> equivalent to
> #pragma omp atomic
> a[50] += 1.0;
> As I understand it, C++11 does not allow me to do
> reinterpret_cast<std::atomic<double>&>(a[50]) += 1.0;
> or, at the very minimum, the standard does not guarantee that the code will
> work portably.
> How could I do this effectively using HPX?
This is a question not directly related to HPX, but rather very much related to
C++ in general.
Also, please note that #pragma omp atomic doesn't have much in common with
the std::atomic<> data type. OMP atomic marks a section of code (an atomic
region) which as a whole should be executed atomically, while std::atomic<> is
a data type exposing atomic operations.
Further, std::atomic<double> will most likely not be lock-free, as today's
common hardware does not provide atomic primitives for floating point
operations.
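Whether that holds on your platform is easy to check; note also that C++11
std::atomic<double> does not provide operator+= or fetch_add in the first place
(those exist only for the integral and pointer specializations), so even a
simple atomic increment of a double needs a compare-exchange loop:

#include <atomic>
#include <iostream>

int main()
{
    std::atomic<double> d{0.0};

    // implementation-defined: false if the target has no suitable
    // atomic primitive for objects of this size
    std::cout << std::boolalpha << d.is_lock_free() << "\n";

    // the equivalent of 'd += 1.0', spelled out as a CAS loop
    double expected = d.load();
    while (!d.compare_exchange_weak(expected, expected + 1.0))
        ;   // on failure 'expected' is refreshed with the current value

    return 0;
}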
The way I'd implement it would be to use a mutex or a lightweight equivalent to
protect the code section in question. Note, however, that the locking
granularity in the first example below is fairly small (depending on the amount
of code to be protected), which will probably result in very poor performance.
I'm using the for_loops below just for illustration purposes; any other
concurrent execution of the code block in question would be equivalent.
// both are semantically equivalent to std::mutex,
// but work with HPX threads instead:
using mutex_type = hpx::lcos::local::spinlock;
// more heavy-weight: hpx::lcos::local::mutex

std::vector<double> v = { ... };
mutex_type m;

// run some operations on all elements concurrently
hpx::future<void> f1 = hpx::parallel::for_loop(
    hpx::parallel::par(hpx::parallel::task),
    0, v.size(),
    [&](std::size_t i)
    {
        std::lock_guard<mutex_type> l(m);
        v[i] += 50.0;
    });

hpx::future<void> f2 = hpx::parallel::for_loop(
    hpx::parallel::par(hpx::parallel::task),
    0, v.size(),
    [&](std::size_t i)
    {
        std::lock_guard<mutex_type> l(m);
        v[i] *= 2.0;
    });

hpx::wait_all(f1, f2);
If this method causes performance degradation because of the fine-grained
locking, I'd try to partition the arrays into segments such that either no
locking, or as little locking as possible, is required. For instance, while the
example above has one mutex for all elements of the vector, the following code
provides one mutex for each element (this is probably similar to what OMP would
do behind the scenes):
// both are semantically equivalent to std::mutex,
// but work with HPX threads instead:
using mutex_type = hpx::lcos::local::spinlock;
// more heavy-weight: hpx::lcos::local::mutex

std::vector<double> v = { ... };
std::vector<mutex_type> m(v.size());

// run some operations on all elements concurrently
hpx::future<void> f1 = hpx::parallel::for_loop(
    hpx::parallel::par(hpx::parallel::task),
    0, v.size(),
    [&](std::size_t i)
    {
        std::lock_guard<mutex_type> l(m[i]);
        v[i] += 50.0;
    });

hpx::future<void> f2 = hpx::parallel::for_loop(
    hpx::parallel::par(hpx::parallel::task),
    0, v.size(),
    [&](std::size_t i)
    {
        std::lock_guard<mutex_type> l(m[i]);
        v[i] *= 2.0;
    });

hpx::wait_all(f1, f2);
Anything in between would work as well, depending on your array sizes, the
complexity of the code inside the atomic regions, and your performance
requirements.
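For instance, a middle ground could be one mutex per fixed-size block of
elements; the block size below is just a made-up tuning parameter, the rest
mirrors the code above:

using mutex_type = hpx::lcos::local::spinlock;

std::size_t const block_size = 1024;    // tuning parameter, pick to taste
std::vector<double> v(100000, 0.0);
// one mutex per block of block_size consecutive elements
std::vector<mutex_type> m((v.size() + block_size - 1) / block_size);

hpx::future<void> f = hpx::parallel::for_loop(
    hpx::parallel::par(hpx::parallel::task),
    0, v.size(),
    [&](std::size_t i)
    {
        // lock only the block containing element i
        std::lock_guard<mutex_type> l(m[i / block_size]);
        v[i] += 50.0;
    });

f.wait();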
All in all, these solutions are not too satisfactory, but they are the best we
can do today. However, there are some standardization proposals currently under
way which might give us std::atomic_view<T> (see
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0019r0.html), with
explicit support for floating point types. That would help your case directly.
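Just to illustrate the direction that proposal is going (the exact name,
namespace, and interface are not settled yet, so treat this purely as
hypothetical syntax):

// hypothetical syntax, modeled loosely after P0019 (atomic_view):
std::vector<double> a(100, 0.0);

atomic_view<double> av(a[50]);    // non-owning atomic view of a plain double
av += 1.0;                        // atomic floating point add, no cast tricks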
HTH
Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu
> regards
> Riccardo
> --
> Riccardo Rossi
> PhD, Civil Engineer
>
> member of the Kratos Team: www.cimne.com/kratos
> Tenure Track Lecturer at Universitat Politècnica de Catalunya,
> BarcelonaTech (UPC)
> Full Research Professor at International Center for Numerical Methods in
> Engineering (CIMNE)
>
> C/ Gran Capità, s/n, Campus Nord UPC, Ed. C1, Despatx C9
> 08034 – Barcelona – Spain – www.cimne.com -
> T.(+34) 93 401 56 96 skype: rougered4