Re: is there a way to: import something = app-xyz-classes-something; ?

2021-06-25 Thread frame via Digitalmars-d-learn

On Saturday, 26 June 2021 at 02:19:18 UTC, someone wrote:

What I like most about it is that it scales well for big and 
complex apps while being really flexible ... and neat :)


If you pass each file to the compiler as in your script, then 
the naming convention becomes irrelevant: the compiler does not 
need to do a file-system lookup anyway, and a module "foo_bar" 
could just as well live in the file xyz.d.


However, keeping the files organized is always a good idea.
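As a hypothetical illustration (the file and module names here are invented): a module's declared name can differ from its file name when every file is passed explicitly.

```d
// File: xyz.d -- the file name deliberately differs from the module name.
module foo_bar;
int answer() { return 42; }

// File: main.d
module main_app;
import foo_bar; // resolved because xyz.d is passed on the command line,
                // so no file-system lookup for "foo_bar.d" is needed
void main()
{
    import std.stdio : writeln;
    writeln(answer());
}

// Build: dmd xyz.d main.d
```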



Re: is there a way to: import something = app-xyz-classes-something; ?

2021-06-25 Thread someone via Digitalmars-d-learn
On Monday, 21 June 2021 at 13:29:59 UTC, Steven Schveighoffer 
wrote:


That being said, I strongly recommend just naming the file the 
same as the module name.


Although you made it clear that you do not favor this use case, 
I am really satisfied with your solution because, at least to me, 
it has some pros; consider the following type "namespace", where 
sm stands for stock manager, i.e. the app prefix (and fw for 
framework, i.e. the common library):


```d
import fw.code.common; /// framework; ie: app-independent common code

import sm.code.common; /// app-specific common code
import sm.types.common.currency; /// app-specific common types
import sm.types.common.equity;
import sm.types.specific.trade; /// app-specific specific types
import sm.types.specific.position;
import sm.types.specific.account;

/// eg: + whatever
/// eg:   has accounts[]
/// eg:   has positions[]
/// eg:   has trades[]
```


... compiled and linked with this build script:

```bash
#!/bin/bash

/usr/bin/dmd \
   "../common/code/fw-code-common.d" \
   "./code/sm-code-common.d" \
   "./types/sm-types-common-currency.d" \
   "./types/sm-types-common-equity.d" \
   "./types/sm-types-specific-trade.d" \
   "./types/sm-types-specific-position.d" \
   "./types/sm-types-specific-account.d" \
   -w -de \
   -run "./code/sm-code-app-demo.d" \
   ;
```

... and on each module:

```d
module sm.code.app.demo; /// this matching the main demo app

alias typeCurrencyFormat0 = sm.types.common.currency.gstrCurrencyFormat0;
alias typeCurrencyFormat2 = sm.types.common.currency.gstrCurrencyFormat2;
alias typeCurrencyFormat4 = sm.types.common.currency.gstrCurrencyFormat4;
alias typeCurrencyRange = sm.types.common.currency.gudtCurrencyRange;


alias typeStockID = sm.types.common.equity.typeStockID;

alias typeTrade = sm.types.specific.trade.gudtTrade;
alias typePosition = sm.types.specific.position.gudtPosition;
alias typeAccount = sm.types.specific.account.gudtAccount;
```

Naming directories/files/type namespaces and aliasing this way 
allows me to:


- move and rename source files as needed: I only need to update 
the build script when I do


- alias the types I will use in a given module once at the top of 
the source file and use those aliases all the way down to the 
bottom: a perfectly hierarchical, unambiguous namespace that 
keeps the code clean throughout; nothing in any source file needs 
renaming when its name/placement changes on the file system.


- although I am currently mimicking the file-system 
file/directory structure in the type namespace, I can build the 
namespace independently of the file system if I want to; say, a 
namespace using all lower-case letters while the file system uses 
mixed caps, and spaces instead of _ or -, or non-Latin glyphs, or 
whatever.


What I like most about it is that it scales well for big and 
complex apps while being really flexible ... and neat :)


Re: Are D classes proper reference types?

2021-06-25 Thread kinke via Digitalmars-d-learn
On Friday, 25 June 2021 at 17:05:41 UTC, Ola Fosheim Grøstad 
wrote:
Yes, if you don't want to support weak pointers. I think you 
need two counters if you want to enable the usage of weak 
pointers.


I cannot imagine how weak pointers would work without an ugly 
extra indirection layer. If we're on the same page, we're talking 
about embedding the reference counter *directly* in the class 
instance, and the class ref still pointing directly to the 
instance.


Weak pointers aren't in the language, so I don't see why they 
would matter here. I thought you were after replacing 
GC-allocated class instances by a simple RC scheme.


One reason to put it at a negative offset is that it makes it 
possible to make it fully compatible with shared_ptr.


In the modern C++ code I've looked at so far, shared_ptr is used 
very rarely (and unique_ptr everywhere), the main reason AFAIK 
being poor performance due to the extra indirection of 
shared_ptr. So replacing every D class ref by a shared_ptr analog 
for interop reasons seems very backwards to me.


Re: Are D classes proper reference types?

2021-06-25 Thread IGotD- via Digitalmars-d-learn
On Friday, 25 June 2021 at 20:22:24 UTC, Ola Fosheim Grøstad 
wrote:


Hm. Not sure if I follow, I think we are talking about stuffing 
bits into the counter and not the address?


Then I misunderstood. If it's a counter it should be fine.



But fat pointers are 16 bytes, so quite expensive.


Yes, that's a tradeoff, but one I'm willing to take. I'm thinking 
of even bigger managed pointers, of perhaps 32 bytes, which have 
more metadata, like the allocated size. Managed languages in 
general have fat pointers, we see them everywhere, and it is not 
a big deal.


If you are littering pointers, you should perhaps refactor your 
code: use an array if you have loads of objects of the same type. 
Another thing I'm not that satisfied with in D is that there is 
no built-in way of embedding member class instances into the host 
class as C++ does, which creates pointer littering and memory 
fragmentation.


Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 17:37:13 UTC, IGotD- wrote:
You cannot use the most significant bit as it will not work 
with some 32-bit systems. Linux with a 3G kernel position for 
example. Better to use the least significant bit as all 
allocated memory is guaranteed to be aligned. Regardless this 
requires compiler support for masking off this bit.


Hm. Not sure if I follow, I think we are talking about stuffing 
bits into the counter and not the address?


Now we're going into halfway fat-pointer support. Then we can 
just use fat pointers instead and have full freedom.


But fat pointers are 16 bytes, so quite expensive.



Re: How to call stop from parallel foreach

2021-06-25 Thread jfondren via Digitalmars-d-learn

On Friday, 25 June 2021 at 19:52:23 UTC, seany wrote:

On Friday, 25 June 2021 at 19:30:16 UTC, jfondren wrote:

On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:

If I use `parallel(...)` it runs.

If I use `prTaskPool.parallel(...)`, then it hits the error at 
the line `auto prTaskPool = new TaskPool(threadCount);`. 
Please help.


parallel() reuses a single taskPool that's only established 
once.


Your code creates two TaskPools per function invocation, and you 
call that function in a loop.

stracing your program might again reveal the error you're 
hitting.


I have removed one - same problem.
Yes, I do call it in a loop. How can I create a TaskPool in a 
function that itself will be called in a loop?


One option is to do as parallel does:
https://github.com/dlang/phobos/blob/master/std/parallelism.d#L3508
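That approach can be sketched roughly as follows (names like `getPool` are hypothetical, not from std.parallelism): cache one TaskPool and reuse it across calls, instead of constructing a new pool inside a function that is itself called in a loop.

```d
import std.parallelism : TaskPool;

// One pool, created lazily on first use and reused afterwards.
// (Module-level variables in D are thread-local by default, which is
// fine when the calling loop runs in a single thread.)
TaskPool sharedPool;

TaskPool getPool(uint threadCount)
{
    if (sharedPool is null)
        sharedPool = new TaskPool(threadCount);
    return sharedPool;
}

void processOnce(int[] data)
{
    auto pool = getPool(2);
    foreach (i, ref e; pool.parallel(data, 100))
    {
        e *= 2; // each iteration touches only its own element
    }
    // note: no pool.finish() here -- call it once, at program shutdown
}
```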


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 19:30:16 UTC, jfondren wrote:

On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:

If I use `parallel(...)` it runs.

If I use `prTaskPool.parallel(...)`, then it hits the error at 
the line `auto prTaskPool = new TaskPool(threadCount);`. 
Please help.


parallel() reuses a single taskPool that's only established 
once.


Your code creates two TaskPools per function invocation, and you 
call that function in a loop.

stracing your program might again reveal the error you're 
hitting.


I have removed one - same problem.
Yes, I do call it in a loop. How can I create a TaskPool in a 
function that itself will be called in a loop?


Re: How to call stop from parallel foreach

2021-06-25 Thread jfondren via Digitalmars-d-learn

On Friday, 25 June 2021 at 19:17:38 UTC, seany wrote:

If I use `parallel(...)` it runs.

If I use `prTaskPool.parallel(...)`, then it hits the error at 
the line `auto prTaskPool = new TaskPool(threadCount);`. 
Please help.


parallel() reuses a single taskPool that's only established once.

Your code creates two TaskPools per function invocation, and you 
call that function in a loop.

stracing your program might again reveal the error you're hitting.


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 16:37:44 UTC, seany wrote:

On Friday, 25 June 2021 at 16:37:06 UTC, seany wrote:

On Friday, 25 June 2021 at 15:50:37 UTC, seany wrote:

On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:

[...]


Try: [this version](https://github.com/naturalmechanics/mwp/tree/nested-loops)


The goal is to parallelize : 
`calculate_avgSweepDist_pairwise` at line `3836`. Notice 
there we have 6 nested loops. Thank you.


OK, I stopped the bus error and the segfault. It was indeed an 
index that was written wrong in the flattened version.


No, I don't have the segfault any more. But I get "error 
creating thread" from time to time. Not always.


But, even with the taskpool, it is not spreading to multiple 
cores.


PS: this is the error message : 
"core.thread.threadbase.ThreadError@src/core/thread/threadbase.d(1219): Error creating thread"


If I use `parallel(...)` it runs.

If I use `prTaskPool.parallel(...)`, then it hits the error at 
the line `auto prTaskPool = new TaskPool(threadCount);`. 
Please help.


Re: Are D classes proper reference types?

2021-06-25 Thread IGotD- via Digitalmars-d-learn

On Friday, 25 June 2021 at 17:37:13 UTC, IGotD- wrote:


You cannot use the most significant bit as it will not work 
with some 32-bit systems. Linux with a 3G kernel position for 
example. Better to use the least significant bit as all 
allocated memory is guaranteed to be aligned. Regardless this 
requires compiler support for masking off this bit.


Now we're going into halfway fat-pointer support. Then we can 
just use fat pointers instead and have full freedom.


One thing I have found out over the years is that if you want 
full versatility, put a pointer to your free function in your 
fat pointer. With this you have a generic method to free your 
object when it goes out of scope. You get the ability to use 
custom allocators and even to change allocators on the fly. If 
for some reason you don't want to free your object automatically, 
just put zero in that field, for example.


Re: Are D classes proper reference types?

2021-06-25 Thread IGotD- via Digitalmars-d-learn

On Friday, 25 June 2021 at 07:17:20 UTC, kinke wrote:
Wrt. manual non-heap allocations (stack/data segment/emplace 
etc.), you could e.g. reserve the most significant bit of the 
counter to denote such instances and prevent them from being 
free'd (and possibly finalization/destruction too; this would 
need some more thought I suppose).


You cannot use the most significant bit as it will not work with 
some 32-bit systems. Linux with a 3G kernel position for example. 
Better to use the least significant bit as all allocated memory 
is guaranteed to be aligned. Regardless this requires compiler 
support for masking off this bit.


Now we're going into halfway fat-pointer support. Then we can 
just use fat pointers instead and have full freedom.


Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 07:17:20 UTC, kinke wrote:
Wrt. manual non-heap allocations (stack/data segment/emplace 
etc.), you could e.g. reserve the most significant bit of the 
counter to denote such instances and prevent them from being 
free'd (and possibly finalization/destruction too; this would 
need some more thought I suppose).


Destruction is a bit tricky. If people rely on the destructor to 
run when the function returns then that cannot be moved to a 
reference counter. For instance if they have implemented some 
kind of locking mechanism or transaction mechanism with classes…


The most tricky one is emplace though as you have no way of 
releasing the memory without an extra function pointer.


Regarding using high bits in the counter: what you would want is 
a cheap increment/decrement, taking the hit instead when the 
object is released. So you might actually want to keep track of 
the allocation status in the lower 3 bits and do ±8 instead, but 
I am not sure how that affects different CPUs. The basic idea 
would be not to trigger destruction on 0, but when the result 
is/becomes negative.
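A purely illustrative sketch of that counter layout (nothing here is existing druntime machinery; names and flag meanings are made up):

```d
// Illustrative only: reference counter with status flags in the low 3 bits.
// With flags f (0..7), a single reference is encoded as just f, so
// increment/decrement are plain +/- 8 with no masking on the hot path;
// destruction triggers when the count goes negative, not when it hits 0.
enum long statusMask = 0b111; // e.g. "non-freeable", "emplaced", ...
enum long countStep = 8;      // one reference == 8

long initCounter(long flags)  // one reference, plus allocation-status flags
{
    return flags & statusMask;
}

void retain(ref long counter)
{
    counter += countStep;
}

bool release(ref long counter) // true when the last reference is gone
{
    counter -= countStep;
    // the flag bits survive in two's complement, so only on the slow
    // (destruction) path does the caller inspect (counter & statusMask)
    return counter < 0;
}
```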





Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 25 June 2021 at 07:01:31 UTC, kinke wrote:
Well AFAIK it's mandated by the language, so an RC scheme 
replacing such allocations by heap ones seems like a definite 
step backwards - it's a useful pattern, and as Stefan pointed 
out, definitely in use. You could still stack-allocate but 
accommodate for the counter prefix in the compiler.


Yes, but then I need to mark it as non-freeable.

It's certainly possible as it's a library thing; some existing 
code may assume the returned reference to point to the 
beginning of the passed memory though (where there'd be your 
counter). What you'd definitely need to adjust is 
`__traits(classInstanceSize)`, accommodating for the extra 
counter prefix.


Yes, as long as people don't make assumptions about where the 
class ends and overwrite some other object; but I suspect pointer 
arithmetic isn't all that common on classes in D code.


There's very likely existing code out there which doesn't use 
druntime's emplace[Initializer], but does it manually.


I guess, but the compiler could have a release note warning 
against this.


ctor. It's probably easier to have the compiler put it into 
static but writable memory, so that you can mess with the 
counter.


Another reason to add the ability to mark it as non-freeable.

All in all, I think a more interesting/feasible approach would 
be abusing the monitor field of extern(D) classes for the 
reference counter. It's the 2nd field (of pointer size) of each 
class instance, directly after the vptr (pointer to vtable). I 
think all monitor access goes through a druntime call, so you 
could hook into there, disallowing any regular monitor access, 
and put this (AFAIK, seldom-used) monitor field to some good 
use.


Yes, if you don't want to support weak pointers. I think you need 
two counters if you want to enable the usage of weak pointers.


One reason to put it at a negative offset is that it makes it 
possible to make it fully compatible with shared_ptr. And then 
you can also have additional fields such as a weak counter or a 
deallocation function pointer.


I don't think maintaining the D ABI is important, so one could 
add additional fields to the class. Maintaining core language 
semantics shouldn't require ABI support I think.


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 15:50:37 UTC, seany wrote:

On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:

On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:


This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated 
location... Wait I will try to make a MWP.


[Here is MWP](https://github.com/naturalmechanics/mwp).

Please compile with `dub build -b release --compiler=ldc2 `. 
Then to run, please use : `./tracker_ai --filename 
21010014-86.ptl `


With ldc2 this segfaults for me even if std.parallelism is 
removed
entirely. With DMD and std.parallelism removed it runs to 
completion.

With DMD and no changes it never seems to finish.

I reckon that there's some other memory error and that the 
parallelism

is unrelated.


Try: [this version](https://github.com/naturalmechanics/mwp/tree/nested-loops)


The goal is to parallelize : `calculate_avgSweepDist_pairwise` 
at line `3836`. Notice there we have 6 nested loops. Thank you.


OK, I stopped the bus error and the segfault. It was indeed an 
index that was written wrong in the flattened version.


No, I don't have the segfault any more. But I get "error 
creating thread" from time to time. Not always.


But, even with the taskpool, it is not spreading to multiple 
cores.


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 16:37:06 UTC, seany wrote:

On Friday, 25 June 2021 at 15:50:37 UTC, seany wrote:

On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:

[...]


Try: [this version](https://github.com/naturalmechanics/mwp/tree/nested-loops)


The goal is to parallelize : `calculate_avgSweepDist_pairwise` 
at line `3836`. Notice there we have 6 nested loops. Thank you.


OK, I stopped the bus error and the segfault. It was indeed an 
index that was written wrong in the flattened version.


No, I don't have the segfault any more. But I get "error 
creating thread" from time to time. Not always.


But, even with the taskpool, it is not spreading to multiple 
cores.


PS: this is the error message : 
"core.thread.threadbase.ThreadError@src/core/thread/threadbase.d(1219): Error creating thread"


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:

On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:


This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated 
location... Wait I will try to make a MWP.


[Here is MWP](https://github.com/naturalmechanics/mwp).

Please compile with `dub build -b release --compiler=ldc2 `. 
Then to run, please use : `./tracker_ai --filename 
21010014-86.ptl `


With ldc2 this segfaults for me even if std.parallelism is 
removed
entirely. With DMD and std.parallelism removed it runs to 
completion.

With DMD and no changes it never seems to finish.

I reckon that there's some other memory error and that the 
parallelism

is unrelated.


Try: [this version](https://github.com/naturalmechanics/mwp/tree/nested-loops)


The goal is to parallelize : `calculate_avgSweepDist_pairwise` at 
line `3836`. Notice there we have 6 nested loops. Thank you.


Re: How to call stop from parallel foreach

2021-06-25 Thread jfondren via Digitalmars-d-learn

On Friday, 25 June 2021 at 15:16:30 UTC, jfondren wrote:
I reckon that there's some other memory error and that the 
parallelism is unrelated.


@safe:

```
source/AI.d(83,23): Error: cannot take address of local `rData` 
in `@safe` function `main`
source/analysisEngine.d(560,20): Error: cannot take address of 
local `rd_flattened` in `@safe` function `add_missingPoints`
source/analysisEngine.d(344,5): Error: can only catch class 
objects derived from `Exception` in `@safe` code, not 
`core.exception.RangeError`

```

And then about half of the complaints are about @system dlib 
calls and about `void*` <-> `rawData[]*` casting.

The RangeError catch likely does nothing with your compile flags.

Even if you don't intend to use @safe in the end, I'd bet that the
segfaults are due to what it's complaining about.


Re: How to call stop from parallel foreach

2021-06-25 Thread jfondren via Digitalmars-d-learn

On Friday, 25 June 2021 at 14:44:13 UTC, seany wrote:


This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated 
location... Wait I will try to make a MWP.


[Here is MWP](https://github.com/naturalmechanics/mwp).

Please compile with `dub build -b release --compiler=ldc2 `. 
Then to run, please use : `./tracker_ai --filename 
21010014-86.ptl `


With ldc2 this segfaults for me even if std.parallelism is removed
entirely. With DMD and std.parallelism removed it runs to 
completion.

With DMD and no changes it never seems to finish.

I reckon that there's some other memory error and that the 
parallelism

is unrelated.


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 15:08:38 UTC, Ali Çehreli wrote:

On 6/25/21 7:21 AM, seany wrote:

> The code without the parallel foreach works fine. No segfault.

That's very common.

What I meant is, is the code written in a way to work safely in 
a parallel foreach loop? (i.e. Is the code "independent"?) (But 
I assume it is because it's been the common theme in this 
thread; so there must be something stranger going on.)


Ali


I have added a MWP. Did you have a chance to look at it?



Re: How to call stop from parallel foreach

2021-06-25 Thread Ali Çehreli via Digitalmars-d-learn

On 6/25/21 7:21 AM, seany wrote:

> The code without the parallel foreach works fine. No segfault.

That's very common.

What I meant is, is the code written in a way to work safely in a 
parallel foreach loop? (i.e. Is the code "independent"?) (But I assume 
it is because it's been the common theme in this thread; so there must 
be something stranger going on.)


Ali



Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 14:22:25 UTC, seany wrote:

On Friday, 25 June 2021 at 14:13:14 UTC, jfondren wrote:

On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:

[...]


A self-contained and complete example would help a lot, but 
the likely
problem with this code is that you're accessing pnts[y][x] in 
the
loop, which makes the loop bodies no longer independent 
because some
of them need to first allocate an int[] to replace the 
zero-length

pnts[y] that you're starting with.

Consider:

```
$ rdmd --eval 'int[][] p; p.length = 5; p.map!"a.length".writeln'
[0, 0, 0, 0, 0]
```


This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated 
location... Wait I will try to make a MWP.


[Here is MWP](https://github.com/naturalmechanics/mwp).

Please compile with `dub build -b release --compiler=ldc2 `. Then 
to run, please use : `./tracker_ai --filename 21010014-86.ptl `


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 14:13:14 UTC, jfondren wrote:

On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:

[...]


A self-contained and complete example would help a lot, but the 
likely
problem with this code is that you're accessing pnts[y][x] in 
the
loop, which makes the loop bodies no longer independent because 
some
of them need to first allocate an int[] to replace the 
zero-length

pnts[y] that you're starting with.

Consider:

```
$ rdmd --eval 'int[][] p; p.length = 5; p.map!"a.length".writeln'
[0, 0, 0, 0, 0]
```


This particular location does not cause segfault.
It is segfaulting down the line in a completely unrelated 
location... Wait I will try to make a MWP.


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 14:10:52 UTC, Ali Çehreli wrote:

On 6/25/21 6:53 AM, seany wrote:

>  [...]
workUnitSize)) {
> [...]

Performance is not guaranteed depending on many factors. For 
example, inserting a writeln() call in the loop would make all 
threads compete with each other for stdout. There can be many 
contention points some of which depending on your program 
logic. (And "Amdahl's Law" applies.)


Another reason: 1 can be a horrible value for workUnitSize. Try 
100, 1000, etc. and see whether it helps with performance.


> [...]
line...
> [...]

Do you still have two parallel loops? Are both with explicit 
TaskPool objects? If not, I wonder whether multiple threads are 
using the convenient 'parallel' function, stepping over each 
others' toes. (I am not sure about this because perhaps it's 
safe to do this; never tested.)


It is possible that the segfaults are caused by your code. The 
code you showed in your original post (myFunction0() and 
others), they all work on independent data structures, right?


Ali


The code without the parallel foreach works fine. No segfault.

In several instances I do have multiple nested loops, but in 
every case only the outer one is a parallel foreach.


All of them use an explicit TaskPool definition.






Re: How to call stop from parallel foreach

2021-06-25 Thread jfondren via Digitalmars-d-learn

On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:


I tried this:

```d
int[][] pnts;
pnts.length = fld.length;

enum threadCount = 2;
auto prTaskPool = new TaskPool(threadCount);

scope (exit) {
    prTaskPool.finish();
}

enum workUnitSize = 1;

foreach (i, fLine; prTaskPool.parallel(fld, workUnitSize)) {
    //
}
```



A self-contained and complete example would help a lot, but the 
likely

problem with this code is that you're accessing pnts[y][x] in the
loop, which makes the loop bodies no longer independent because 
some

of them need to first allocate an int[] to replace the zero-length
pnts[y] that you're starting with.

Consider:

```
$ rdmd --eval 'int[][] p; p.length = 5; p.map!"a.length".writeln'
[0, 0, 0, 0, 0]
```
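One way to make those loop bodies independent (a sketch with made-up sizes and a made-up body) is to allocate every inner array before entering the parallel loop, so no iteration ever allocates into the shared outer array:

```d
import std.parallelism : parallel;

void main()
{
    int[][] pnts;
    pnts.length = 5;

    // Allocate each row up front, so every parallel iteration only
    // ever writes into its own, already-allocated slot.
    foreach (ref row; pnts)
        row = new int[](10);

    foreach (y, ref row; parallel(pnts))
        foreach (x, ref cell; row)
            cell = cast(int)(y * 10 + x); // independent writes only
}
```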



Re: How to call stop from parallel foreach

2021-06-25 Thread Ali Çehreli via Digitalmars-d-learn

On 6/25/21 6:53 AM, seany wrote:

> I tried this .
>
>  int[][] pnts ;
>  pnts.length = fld.length;
>
>  enum threadCount = 2;
>  auto prTaskPool = new TaskPool(threadCount);
>
>  scope (exit) {
>  prTaskPool.finish();
>  }
>
>  enum workUnitSize = 1;
>
>  foreach(i, fLine; prTaskPool.parallel(fld, workUnitSize)) {
>//
>  }
>
>
> This is throwing random segfaults.
> CPU has 2 cores, but usage is not going above 37%

Performance is not guaranteed depending on many factors. For example, 
inserting a writeln() call in the loop would make all threads compete 
with each other for stdout. There can be many contention points some of 
which depending on your program logic. (And "Amdahl's Law" applies.)


Another reason: 1 can be a horrible value for workUnitSize. Try 100, 
1000, etc. and see whether it helps with performance.
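As a rough illustration of that advice (the array size and loop body are made up), the same loop with a larger work unit:

```d
import std.parallelism : TaskPool;

void main()
{
    auto pool = new TaskPool(2);
    scope (exit) pool.finish();

    auto data = new double[](1_000_000);

    // workUnitSize = 1 dispatches one element per task: maximal
    // scheduling overhead. A larger chunk amortizes that cost.
    enum workUnitSize = 10_000; // tune: try 100, 1000, 10_000, ...
    foreach (i, ref x; pool.parallel(data, workUnitSize))
        x = i * 0.5; // cheap body -- overhead dominates if chunks are tiny
}
```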


> Even much deeper down in program, much further down the line...
> And the location of segfault is random.

Do you still have two parallel loops? Are both with explicit TaskPool 
objects? If not, I wonder whether multiple threads are using the 
convenient 'parallel' function, stepping over each others' toes. (I am 
not sure about this because perhaps it's safe to do this; never tested.)


It is possible that the segfaults are caused by your code. The code you 
showed in your original post (myFunction0() and others), they all work 
on independent data structures, right?


Ali



Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Friday, 25 June 2021 at 13:53:17 UTC, seany wrote:

On Thursday, 24 June 2021 at 21:19:19 UTC, Ali Çehreli wrote:

[...]


I tried this:

```d
int[][] pnts;
pnts.length = fld.length;

enum threadCount = 2;
auto prTaskPool = new TaskPool(threadCount);

scope (exit) {
    prTaskPool.finish();
}

enum workUnitSize = 1;

foreach (i, fLine; prTaskPool.parallel(fld, workUnitSize)) {
    //
}
```


This is throwing random segfaults.
The CPU has 2 cores, but usage is not going above 37%.

Segfaults happen even much deeper down in the program, much 
further down the line... and the location of the segfault is 
random.


PS: As in [this thread](https://forum.dlang.org/thread/gomhpxzolddnodaey...@forum.dlang.org), I am running into bus errors too, sometimes way down the line after these foreach calls are completed...


Re: How to call stop from parallel foreach

2021-06-25 Thread seany via Digitalmars-d-learn

On Thursday, 24 June 2021 at 21:19:19 UTC, Ali Çehreli wrote:

On 6/24/21 1:41 PM, seany wrote:

> Is there any way to control the number of CPU cores used in
> parallelization ?

Yes. You have to create a task pool explicitly:

```d
import std.parallelism;

void main() {
  enum threadCount = 2;
  auto myTaskPool = new TaskPool(threadCount);
  scope (exit) {
    myTaskPool.finish();
  }

  enum workUnitSize = 1; // Or 42 or something else. :)
  foreach (e; myTaskPool.parallel([ 1, 2, 3 ], workUnitSize)) {
    // ...
  }
}
```

I've touched on a few parallelism concepts at this point in a 
presentation:


  https://www.youtube.com/watch?v=dRORNQIB2wA&t=1332s

Ali


I tried this:

```d
int[][] pnts;
pnts.length = fld.length;

enum threadCount = 2;
auto prTaskPool = new TaskPool(threadCount);

scope (exit) {
    prTaskPool.finish();
}

enum workUnitSize = 1;

foreach (i, fLine; prTaskPool.parallel(fld, workUnitSize)) {
    //
}
```


This is throwing random segfaults.
The CPU has 2 cores, but usage is not going above 37%.

Segfaults happen even much deeper down in the program, much 
further down the line... and the location of the segfault is 
random.




Re: How to recursively accept data from Python server ?

2021-06-25 Thread Utk via Digitalmars-d-learn

On Friday, 25 June 2021 at 05:46:54 UTC, Utk wrote:

On Friday, 25 June 2021 at 03:27:24 UTC, jfondren wrote:

On Friday, 25 June 2021 at 02:55:50 UTC, Utk wrote:

Please help me to resolve this issue.


Try stracing your program to see exactly what it's doing
with the socket, and try std.socket's lastSocketError


I tried using `lastSocketError()`; it gave an error saying 
*Connection reset by peer*. I'm confused about how the 
connection is getting reset.



Figured it out! The error was on the Python side, as I was not 
receiving the data from D, which was locally buffered.


Re: Are D classes proper reference types?

2021-06-25 Thread kinke via Digitalmars-d-learn
Wrt. manual non-heap allocations (stack/data segment/emplace 
etc.), you could e.g. reserve the most significant bit of the 
counter to denote such instances and prevent them from being 
free'd (and possibly finalization/destruction too; this would 
need some more thought I suppose).


Re: Are D classes proper reference types?

2021-06-25 Thread kinke via Digitalmars-d-learn
On Friday, 25 June 2021 at 06:09:17 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 24 June 2021 at 07:28:56 UTC, kinke wrote:
Yes, class *refs* are always pointers. *scope* classes are 
deprecated (I don't think I've ever seen one); with `scope c = 
new Object`, you can have the compiler allocate a class 
*instance* on the stack for you, but `c` is still a *ref*.


But the user code cannot depend on it being stack allocated? So 
I could replace the Object reference with a reference counting 
pointer and put the counter at a negative offset?


Well AFAIK it's mandated by the language, so an RC scheme 
replacing such allocations by heap ones seems like a definite 
step backwards - it's a useful pattern, and as Stefan pointed 
out, definitely in use. You could still stack-allocate but 
accommodate for the counter prefix in the compiler.


`emplace` doesn't allocate, you have to pass the memory 
explicitly.


This is more of a problem. I was thinking about arrays that 
provide an emplace method, then one could replace emplace with 
heap allocation. I guess it isn't really possible to make 
`emplace` with custom memory work gracefully with reference 
counting with ref count at negative offset.


It's certainly possible as it's a library thing; some existing 
code may assume the returned reference to point to the beginning 
of the passed memory though (where there'd be your counter). What 
you'd definitely need to adjust is `__traits(classInstanceSize)`, 
accommodating for the extra counter prefix.
There's very likely existing code out there which doesn't use 
druntime's emplace[Initializer], but does it manually.


A class *instance* can also live in the static data segment 
(`static immutable myStaticObject = new Object;`);


But it isn't required to? It certainly wouldn't work with 
reference counting if it is stored in read only memory...


Not required to AFAIK, but if it's not statically allocated, 
you'd need to allocate it at runtime via some module or CRT ctor. 
It's probably easier to have the compiler put it into static but 
writable memory, so that you can mess with the counter.


---

All in all, I think a more interesting/feasible approach would be 
abusing the monitor field of extern(D) classes for the reference 
counter. It's the 2nd field (of pointer size) of each class 
instance, directly after the vptr (pointer to vtable). I think 
all monitor access goes through a druntime call, so you could 
hook into there, disallowing any regular monitor access, and put 
this (AFAIK, seldom-used) monitor field to some good use.


Re: Are D classes proper reference types?

2021-06-25 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Thursday, 24 June 2021 at 07:28:56 UTC, kinke wrote:
Yes, class *refs* are always pointers. *scope* classes are 
deprecated (I don't think I've ever seen one); with `scope c = 
new Object`, you can have the compiler allocate a class 
*instance* on the stack for you, but `c` is still a *ref*.


But the user code cannot depend on it being stack allocated? So I 
could replace the Object reference with a reference counting 
pointer and put the counter at a negative offset?


`emplace` doesn't allocate, you have to pass the memory 
explicitly.


This is more of a problem. I was thinking about arrays that 
provide an emplace method, then one could replace emplace with 
heap allocation. I guess it isn't really possible to make 
`emplace` with custom memory work gracefully with reference 
counting with ref count at negative offset.


A class *instance* can also live in the static data segment 
(`static immutable myStaticObject = new Object;`);


But it isn't required to? It certainly wouldn't work with 
reference counting if it is stored in read only memory...


`extern(C++)` class instances can also live on the C++ 
heap/stack etc. etc.


Yes, that cannot be avoided.