On Friday, 25 June 2021 at 19:52:23 UTC, seany wrote:
On Friday, 25 June 2021 at 19:30:16 UTC, jfondren wrote:
On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:
If I use `parallel(...)`, it runs.
If I use `prTaskPool.parallel(...)`, it hits the error at the line `auto prTaskPool = new TaskPool(threadCount);`.
Please help.
parallel() reuses a single taskPool that's only established once. Your …
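For readers skimming the thread, a minimal sketch of the two variants under discussion (the arrays and loop bodies are placeholders, not code from the MWP): `parallel(...)` is shorthand for `taskPool.parallel(...)` on a global pool that is created lazily, exactly once, while an explicit `TaskPool` is yours to construct and to shut down.
```
import std.parallelism;

void main() {
    enum threadCount = 2;

    // The free function parallel() forwards to taskPool.parallel():
    // the global taskPool is created lazily, once, on first use.
    foreach (i; parallel([1, 2, 3, 4])) {
        // ... work ...
    }

    // An explicit pool is separate: you construct it and you are
    // responsible for shutting it down, otherwise its non-daemon
    // worker threads keep the program alive.
    auto prTaskPool = new TaskPool(threadCount);
    scope (exit) prTaskPool.finish();

    foreach (i; prTaskPool.parallel([1, 2, 3, 4])) {
        // ... work ...
    }
}
```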
On Friday, 25 June 2021 at 16:37:44 UTC, seany wrote:
On Friday, 25 June 2021 at 16:37:06 UTC, seany wrote:
On Friday, 25 June 2021 at 15:50:37 UTC, seany wrote:
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
[...]
Try [this version](https://github.com/naturalmechanics/mwp/tree/nested-loops).
The goal is to parallelize `calculate…
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:
This particular location does not cause a segfault.
It is segfaulting down the line in a completely unrelated location... Wait, I will try to make an MWP.
[Here is the MWP](https://github.com/naturalmechanics/mwp).
On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
I reckon that there's some other memory error and that the parallelism is unrelated. With `@safe:` added, compilation fails:
```
source/AI.d(83,23): Error: cannot take address of local `rData` in `@safe` function `main`
source/analysisEngine.d(560,20): Error: cannot take address of local …
```
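For anyone reproducing this, a minimal sketch of this error class (only the variable name `rData` is taken from the message above; the rest is invented):
```
@safe void main() {
    int rData; // stands in for the local in AI.d; the real type is unknown
    // The next line reproduces the reported error:
    // int* p = &rData; // Error: cannot take address of local `rData` in `@safe` function `main`
    // @safe forbids it because the address could escape the stack frame.
    // Common fixes: pass rData by ref, store the data in GC-allocated
    // memory, or compile with -preview=dip1000 and keep the pointer `scope`.
}
```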
On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:
This particular location does not cause a segfault.
It is segfaulting down the line in a completely unrelated location... Wait, I will try to make an MWP.
[Here is the MWP](https://github.com/naturalmechanics/mwp).
Please compile with `dub build…
On Friday, 25 June 2021 at 15:08:38 UTC, Ali Çehreli wrote:
On 6/25/21 7:21 AM, seany wrote:
> The code without the parallel foreach works fine. No segfault.
That's very common.
What I meant is, is the code written in a way to work safely in a
parallel foreach loop? (i.e. Is the code "independent"?) (But I assume
it is because it's been the common the…
On Friday, 25 June 2021 at 14:13:14 UTC, jfondren wrote:
On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:
[...]
A self-contained and complete example would help a lot, but the likely
problem with this code is that you're accessing pnts[y][x] in the loop,
which makes the loop bodies no longer independent…
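A sketch of the independence rule (the array names follow the thread, but the computation is a stand-in): each parallel iteration should write only to a slot it exclusively owns, such as `pnts[y]`.
```
import std.parallelism;

void main() {
    auto fld = new double[][](1000, 8); // placeholder input
    int[][] pnts;
    pnts.length = fld.length;

    foreach (y, row; parallel(fld)) {
        int[] result;
        foreach (x, v; row)
            result ~= cast(int) v; // stand-in computation, reads row only
        pnts[y] = result;          // safe: each iteration owns index y
    }

    // By contrast, writing pnts[y][x] where y or x is derived from state
    // shared across iterations (or appending with pnts ~= ...) makes the
    // bodies depend on each other and can corrupt memory.
}
```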
On Friday, 25 June 2021 at 14:10:52 UTC, Ali Çehreli wrote:
On 6/25/21 6:53 AM, seany wrote:
> [...]
> workUnitSize)) {
> [...]
Performance is not guaranteed depending on many factors. For example,
inserting a writeln() call in the loop would make all threads compete
with each other for …
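To illustrate the contention and work-unit points, a sketch with made-up sizes (the loop body is a placeholder): `workUnitSize` is how many consecutive iterations a worker grabs at a time, and any `writeln()` inside the body would serialize the workers on stdout.
```
import std.parallelism;
import std.range : iota;

void main() {
    enum threadCount = 2;
    auto prTaskPool = new TaskPool(threadCount);
    scope (exit) prTaskPool.finish();

    enum workUnitSize = 100; // iterations handed to a worker per grab
    auto results = new long[](10_000);

    foreach (i; prTaskPool.parallel(iota(results.length), workUnitSize)) {
        results[i] = cast(long) i * i; // independent work, no I/O
    }
    // writeln(results[42]); -- fine *after* the loop; inside the body,
    // every thread would contend for the same stdout lock.
}
```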
On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:
I tried this:
```
int[][] pnts;
pnts.length = fld.length;

enum threadCount = 2;
auto prTaskPool = new TaskPool(threadCount);

scope (exit) {
    prTaskPool.finish();
}
```
On Thursday, 24 June 2021 at 21:19:19 UTC, Ali Çehreli wrote:
On 6/24/21 1:41 PM, seany wrote:
> Is there any way to control the number of CPU cores used in
> parallelization?
Yes. You have to create a task pool explicitly:
```
import std.parallelism;

void main() {
    enum threadCount = 2;
    auto myTaskPool = new TaskPool(threadCount);
    scope (exit) {
        myTaskPool.finish();
    }
    // ...
}
```
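If the goal is only to cap the implicit global pool rather than manage an explicit one, `std.parallelism` also provides the `defaultPoolThreads` property; a minimal sketch (the loop body is a placeholder):
```
import std.parallelism;

void main() {
    // Must be assigned before the global taskPool is first used;
    // afterwards the pool has already been built with the old size.
    defaultPoolThreads = 2; // worker threads, besides the current thread

    foreach (i; parallel([10, 20, 30, 40])) {
        // runs on the global pool, now limited to 2 workers
    }
}
```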
On Thursday, 24 June 2021 at 20:56:26 UTC, Ali Çehreli wrote:
On 6/24/21 1:33 PM, Bastiaan Veelo wrote:
> distributes the load across all cores (but one).
Last time I checked, the current thread would run tasks as well.
Ali
Indeed, thanks.
— Bastiaan.
On Thursday, 24 June 2021 at 21:05:28 UTC, Bastiaan Veelo wrote:
On Thursday, 24 June 2021 at 20:41:40 UTC, seany wrote:
Is there any way to control the number of CPU cores used in
parallelization?
E.g.: take 3 cores for the first parallel foreach, and then
for the second one, take 3 cores…
On Thursday, 24 June 2021 at 20:41:40 UTC, seany wrote:
On Thursday, 24 June 2021 at 20:33:00 UTC, Bastiaan Veelo wrote:
By the way, nesting parallel `foreach` does not make much
sense, as one level already distributes the load across all
cores (but one). Additional parallelisation will likely just
add overhead, and have a net negative effect.
— Bastiaan
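A sketch of Bastiaan's point (the array names echo the thread; the bodies are placeholders): parallelize the outer loop only, and leave the inner loop serial inside each task.
```
import std.parallelism;

void main() {
    auto array_of_a = new int[](100);
    auto array_of_b = new int[](100);

    foreach (i, a; parallel(array_of_a)) {  // outer loop: parallel
        foreach (j, b; array_of_b) {        // inner loop: plain foreach
            // ... work on (a, b) ...
        }
    }
    // A parallel() on the inner loop as well would mostly queue tiny
    // tasks onto already-busy workers: overhead without extra cores.
}
```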
On Thursday, 24 June 2021 at 18:23:01 UTC, seany wrote:
I have seen
[this](https://forum.dlang.org/thread/akhbvvjgeaspmjntz...@forum.dlang.org).
I can't call break from parallel foreach.
Okay, is there a way to easily call .stop() from such a case?
Yes there is, but it won't break the `foreach`…
On Thursday, 24 June 2021 at 19:46:52 UTC, Jerry wrote:
On Thursday, 24 June 2021 at 18:23:01 UTC, seany wrote:
[...]
Maybe I'm wrong here, but I don't think there is any way to do
that with parallel.
What I would do is negate someConditionCheck and instead only
do work when there is work to do…
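A sketch of Jerry's idiom (the name `someConditionCheck` comes from the thread; the flag, data, and condition are invented): share one atomic flag, set it when the stop condition fires, and have every iteration check it first. The loop still visits the remaining elements, but each one returns immediately.
```
import core.atomic : atomicLoad, atomicStore;
import std.parallelism;

shared bool stopRequested = false;

void main() {
    auto items = new int[](10_000);

    foreach (i, item; parallel(items)) {
        if (atomicLoad(stopRequested))
            continue; // can't break out, but skipping is nearly free

        bool someConditionCheck = item < 0; // placeholder condition
        if (someConditionCheck) {
            atomicStore(stopRequested, true);
            continue;
        }
        // ... normal work on item ...
    }
}
```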
I have seen
[this](https://forum.dlang.org/thread/akhbvvjgeaspmjntz...@forum.dlang.org).
I can't call break from parallel foreach.
Okay, is there a way to easily call .stop() from such a case?
Here is a case to consider:
outer: foreach (i, a; parallel(array_of_a)) {
    foreach (j, b; p…
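To make the constraint concrete, a minimal sketch of the pattern in the question (the data is invented): a labeled break that targets a parallel foreach does not unwind the loop; std.parallelism throws a ParallelForeachError at runtime if a loop body tries to break out, which is why the thread turns to flag-based early exit instead.
```
import std.parallelism;

void main() {
    outer: foreach (i, a; parallel([1, 2, 3, 4])) {
        foreach (j, b; [5, 6, 7]) {
            if (a + b > 9) {
                // break outer; // would throw ParallelForeachError at
                // runtime: a parallel foreach cannot be left early
                // via break, labeled break, return, or goto
            }
        }
    }
}
```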