Re: parallel threads stalls until all thread batches are finished.

2023-08-28 Thread Ali Çehreli via Digitalmars-d-learn

On 8/28/23 15:37, j...@bloow.edu wrote:

> Basically everything is hard coded to use totalCPU's

parallel() is a function that dispatches to a default TaskPool object, 
which uses totalCPUs. It's convenient but as you say, not all problems 
should use it.


In such cases, you would create your own TaskPool object and call 
.parallel on it:


  https://youtu.be/dRORNQIB2wA?t=1611s
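For the record, a minimal sketch of that approach (the pool size of 8 and the list of work items are only for illustration):

```d
import std.parallelism : TaskPool;
import std.stdio : writeln;

void main()
{
    // A custom pool with 8 workers instead of the default totalCPUs - 1.
    auto pool = new TaskPool(8);
    scope (exit) pool.finish(true); // block until all work is done

    auto items = ["a", "b", "c", "d"]; // placeholder work items

    // Work unit size of 1: each element is dispatched individually.
    foreach (item; pool.parallel(items, 1))
    {
        writeln("processing ", item);
    }
}
```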

Ali



Re: parallel threads stalls until all thread batches are finished.

2023-08-28 Thread Joe--- via Digitalmars-d-learn
On Monday, 28 August 2023 at 10:33:15 UTC, Christian Köstlin 
wrote:

On 26.08.23 05:39, j...@bloow.edu wrote:

On Friday, 25 August 2023 at 21:31:37 UTC, Ali Çehreli wrote:

On 8/25/23 14:27, j...@bloow.edu wrote:

> "A work unit is a set of consecutive elements of range to be
> processed by a worker thread between communication with any other
> thread. The number of elements processed per work unit is
> controlled by the workUnitSize parameter."
>
> So the question is how to rebalance these work units?

Ok, your question brings me back from summer hibernation. :)

This is what I do:

- Sort the tasks in decreasing time order; the ones that will 
take the most time should go first.


- Use a work unit size of 1.

The longest running task will start first. You can't get 
better than that. When I print some progress reporting, I see 
that most of the time N-1 tasks have finished and we are 
waiting for that one longest running task.


Ali
"back to sleep"



I do not know the amount of time they will run. They are files 
that are being downloaded, and I know neither the file size nor 
the download rate (in fact, the actual download happens 
externally).


While I could use a work unit size of 1, the problem then is that 
I would be downloading N files at once, and that will cause other 
problems if N is large (and sometimes it is).


There should be a "work unit size" and a "max simultaneous 
workers". Then I could set the work unit size to 1 and the max 
simultaneous workers to 8 to get 8 simultaneous downloads 
without stalling.


I think that's what is implemented atm ...
`taskPool` creates a `TaskPool` of size `defaultPoolThreads` 
(defaulting to totalCPUs - 1). The work unit size is only there 
to optimize for small workloads where task / thread switching 
would be a big performance problem (I guess). So in your case a 
work unit size of 1 should be good.


Did you try this already?

Kind regards,
Christian


Well, I have 32 cores, so with hyperthreading that would spawn 
64 - 1 threads. Not really a solution, as that is too many 
simultaneous downloads IMO.



"These properties get and set the number of worker threads in the 
TaskPool instance returned by taskPool. The default value is 
totalCPUs - 1. Calling the setter after the first call to 
taskPool does not change the number of worker threads in the 
instance returned by taskPool."


I guess I could try to see if I can change this, but I don't know 
what the "first call" is (and I'm using parallel to create it).
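Going by the quoted documentation, the property would have to be set before the implicit pool is first created; a hedged sketch (the worker count of 8 is an assumption):

```d
import std.parallelism : defaultPoolThreads, taskPool;
import std.stdio : writeln;

void main()
{
    // Per the quoted docs, this must run before the first use of taskPool;
    // setting it later has no effect on the already-created instance.
    defaultPoolThreads = 8;

    // The implicit pool created on this first use now has 8 workers.
    foreach (i; taskPool.parallel([1, 2, 3, 4], 1))
    {
        // each element is one work unit
    }
    writeln("workers: ", taskPool.size);
}
```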


It seems the code should simply be made more robust. Probably 
just a few lines of code to change or add, at most. Maybe the 
constructor and parallel should take an argument to set the 
worker count, defaulting to the total number of CPUs rather 
than it being hard coded.


I currently don't need or have 32+ downloads to test ATM so...


    this() @trusted
    {
        this(totalCPUs - 1);
    }

    /**
    Allows for custom number of worker threads.
    */
    this(size_t nWorkers) @trusted
    {


Basically everything is hard coded to use totalCPUs, and that is 
the ultimate problem. Not all tasks should use all CPUs.


What happens when we get 128 cores? Or even 32k at some point?

It shouldn't be a hard-coded value; it's really that simple, and 
that is where the problem originates, because someone didn't 
think ahead.





Re: aarch64 plans for D lang ?

2023-08-28 Thread BrianLinuxing via Digitalmars-d-learn

On Monday, 28 August 2023 at 15:38:34 UTC, Sergey wrote:

On Monday, 28 August 2023 at 15:14:52 UTC, BrianLinuxing wrote:

[...]



[...]


I have never worked with Pi boards, but the archive from the 
release should contain binaries and some internal libraries. 
Usually just unzipping and setting some environment 
variables/paths is enough.


Also, here is a bit outdated example (not mine): 
https://gist.github.com/shabunin/8e3af1725c1c45f225174e9c2ee1557a

Maybe it could be reused.

There is also a docker container 
(https://github.com/Reavershark/ldc2-raspberry-pi) where you 
can try to build software for Pi on your regular computer.


Moreover, you can try the same installation script as in your 
first message, but use `ldc` instead of `dmd` at the end. But I 
think using files from the GitHub Releases of the official LDC 
repo will be better and easier.


Thank you Sergey, that is the answer.

And it seems to work well :)


Re: aarch64 plans for D lang ?

2023-08-28 Thread Anonymouse via Digitalmars-d-learn

On Monday, 28 August 2023 at 15:14:52 UTC, BrianLinuxing wrote:

Thank you that looks good :)

But is it the full installer and all of the bits?


The official [`install.sh`](https://dlang.org/install.html) 
script will download ldc on ARM too, just as well as on x86. I 
use it on my Pi400.


Re: aarch64 plans for D lang ?

2023-08-28 Thread Sergey via Digitalmars-d-learn

On Monday, 28 August 2023 at 15:14:52 UTC, BrianLinuxing wrote:

On Monday, 28 August 2023 at 15:04:25 UTC, Sergey wrote:

On Monday, 28 August 2023 at 14:38:36 UTC, BrianLinuxing wrote:

Afternoon all,

I think D Lang has such potential :)


Both GDC and LDC should support Linux aarch64. LDC even has 
files in its Releases: 
https://github.com/ldc-developers/ldc/releases/tag/v1.34.0


Thank you that looks good :)

But is it the full installer and all of the bits?




Any helpful pointers would be useful, thanks :)


I have never worked with Pi boards, but the archive from the 
release should contain binaries and some internal libraries. 
Usually just unzipping and setting some environment 
variables/paths is enough.


Also, here is a bit outdated example (not mine): 
https://gist.github.com/shabunin/8e3af1725c1c45f225174e9c2ee1557a

Maybe it could be reused.

There is also a docker container 
(https://github.com/Reavershark/ldc2-raspberry-pi) where you can 
try to build software for Pi on your regular computer.


Moreover, you can try the same installation script as in your 
first message, but use `ldc` instead of `dmd` at the end. But I 
think using files from the GitHub Releases of the official LDC 
repo will be better and easier.
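Applied to the command from the first message, that suggestion would presumably look like the following (the activation path is an assumption; the installer prints the exact location):

```shell
# Same official installer as before, but selecting ldc instead of dmd
# (the compiler name is the positional argument to the script).
curl -fsS https://dlang.org/install.sh | bash -s ldc

# Afterwards, activate the compiler in the current shell.
source ~/dlang/ldc-*/activate
```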




Re: aarch64 plans for D lang ?

2023-08-28 Thread BrianLinuxing via Digitalmars-d-learn

On Monday, 28 August 2023 at 15:04:25 UTC, Sergey wrote:

On Monday, 28 August 2023 at 14:38:36 UTC, BrianLinuxing wrote:

Afternoon all,

I think D Lang has such potential :)


Both GDC and LDC should support Linux aarch64. LDC even has 
files in its Releases: 
https://github.com/ldc-developers/ldc/releases/tag/v1.34.0


Thank you that looks good :)

But is it the full installer and all of the bits?

As an aside, I found the build for Android instructions on the 
wiki, https://wiki.dlang.org/Build_D_for_Android


Is there a similar guide, etc. for basic *nix systems? I am happy 
to build it from scratch on the Raspberry Pi 400 (Google searches 
didn't reveal much).


Any helpful pointers would be useful, thanks :)



Re: aarch64 plans for D lang ?

2023-08-28 Thread Sergey via Digitalmars-d-learn

On Monday, 28 August 2023 at 14:38:36 UTC, BrianLinuxing wrote:

Afternoon all,

I think D Lang has such potential :)


Both GDC and LDC should support Linux aarch64. LDC even has 
files in its Releases: 
https://github.com/ldc-developers/ldc/releases/tag/v1.34.0





aarch64 plans for D lang ?

2023-08-28 Thread BrianLinuxing via Digitalmars-d-learn

Afternoon all,

I think D Lang has such potential :)

I wonder if there are any plans to implement it on aarch64? That 
would be useful in schools/colleges, SBC projects, etc.


Or to release aarch64 binaries, etc.

I just loaded up the installer on a Pi 400 (running Diet Pi 
(based on Debian bullseye)) and got:



"time curl -fsS https://dlang.org/install.sh | bash -s dmd
Downloading https://dlang.org/d-keyring.gpg
 100.0%
Downloading https://dlang.org/install.sh
 100.0%
gpg: keybox '/home/brian/.gnupg/pubring.kbx' created
gpg: /home/brian/.gnupg/trustdb.gpg: trustdb created
The latest version of this script was installed as 
~/dlang/install.sh.

It can be used to install further D compilers.
Run `~/dlang/install.sh --help` for usage information.

no DMD binaries available for aarch64

real    0m4.018s
user    0m0.496s
sys     0m0.462s"


Good luck, Brian


Re: parallel threads stalls until all thread batches are finished.

2023-08-28 Thread Christian Köstlin via Digitalmars-d-learn

On 26.08.23 05:39, j...@bloow.edu wrote:

On Friday, 25 August 2023 at 21:31:37 UTC, Ali Çehreli wrote:

On 8/25/23 14:27, j...@bloow.edu wrote:

> "A work unit is a set of consecutive elements of range to be
> processed by a worker thread between communication with any other
> thread. The number of elements processed per work unit is
> controlled by the workUnitSize parameter."
>
> So the question is how to rebalance these work units?

Ok, your question brings me back from summer hibernation. :)

This is what I do:

- Sort the tasks in decreasing time order; the ones that will take the 
most time should go first.


- Use a work unit size of 1.

The longest running task will start first. You can't get better than 
that. When I print some progress reporting, I see that most of the 
time N-1 tasks have finished and we are waiting for that one longest 
running task.


Ali
"back to sleep"



I do not know the amount of time they will run. They are files that are 
being downloaded, and I know neither the file size nor the download 
rate (in fact, the actual download happens externally).


While I could use a work unit size of 1, the problem then is that I 
would be downloading N files at once, and that will cause other 
problems if N is large (and sometimes it is).


There should be a "work unit size" and a "max simultaneous workers". 
Then I could set the work unit size to 1 and the max simultaneous 
workers to 8 to get 8 simultaneous downloads without stalling.


I think that's what is implemented atm ...
`taskPool` creates a `TaskPool` of size `defaultPoolThreads` (defaulting 
to totalCPUs - 1). The work unit size is only there to optimize for 
small workloads where task / thread switching would be a big performance 
problem (I guess). So in your case a work unit size of 1 should be good.


Did you try this already?

Kind regards,
Christian




Re: Pointer to environment.get

2023-08-28 Thread Basile B. via Digitalmars-d-learn

On Monday, 28 August 2023 at 10:20:14 UTC, Basile B. wrote:

On Monday, 28 August 2023 at 06:38:50 UTC, Vino wrote:

Hi All,

  The below code is not working, hence requesting your help.


Code:
```
import std.stdio;
import std.process: environment;
void main () {
   int* ext(string) = ("PATHEXT");
   writeln(*ext);
}
```


The problem is that "PATHEXT" is a runtime argument. If you 
really want to get a pointer to the function for that runtime 
argument, you can use a lambda:


```d
import std.stdio;
import std.process: environment;
void main () {

alias atGet = {return environment.get("PATHEXT");}; // really lazy


writeln(atGet); // pointer to the lambda
writeln((*atGet)());// call the lambda
}
```

There might be other ways, but less idiomatic ones (using a 
struct + opCall, a.k.a. a "functor").


To go further, the correct code for the syntax you wanted to use 
is actually


```d
alias Ext_T = string(const char[] a, string b); // define a function type

alias Ext_PT = Ext_T*; // define a function **pointer** type
Ext_PT ext = &environment.get;
```

But as you can see, that does not allow capturing the argument. 
Also, it only works as an AliasDeclaration RHS.
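A further alternative that does capture a runtime argument is a delegate; a minimal sketch (the variable names here are illustrative):

```d
import std.process : environment;
import std.stdio : writeln;

void main()
{
    string name = "PATH"; // runtime argument to capture

    // A delegate closes over `name`, which a plain function pointer cannot do.
    string delegate() getVar = () => environment.get(name);

    writeln(getVar()); // call it later, with the captured argument
}
```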


Re: Pointer to environment.get

2023-08-28 Thread Basile B. via Digitalmars-d-learn

On Monday, 28 August 2023 at 06:38:50 UTC, Vino wrote:

Hi All,

  The below code is not working, hence requesting your help.

Code:
```
import std.stdio;
import std.process: environment;
void main () {
   int* ext(string) = ("PATHEXT");
   writeln(*ext);
}
```


The problem is that "PATHEXT" is a runtime argument. If you really 
want to get a pointer to the function for that runtime argument, 
you can use a lambda:


```d
import std.stdio;
import std.process: environment;
void main () {

alias atGet = {return environment.get("PATHEXT");}; // really lazy


writeln(atGet); // pointer to the lambda
writeln((*atGet)());// call the lambda
}
```

There might be other ways, but less idiomatic ones (using a 
struct + opCall, a.k.a. a "functor").


Re: Function to get the current hostname for both Windows and Posix

2023-08-28 Thread Vino via Digitalmars-d-learn

On Sunday, 27 August 2023 at 21:33:57 UTC, Jonathan M Davis wrote:
On Sunday, August 27, 2023 10:02:35 AM MDT vino via 
Digitalmars-d-learn wrote:

Hi All,

  May I know whether there is a function to find the current 
hostname, both on Windows and Posix.

From,
Vino


It looks like std.socket's Socket.hostName will do the trick.

https://dlang.org/phobos/std_socket.html#.Socket.hostName

- Jonathan M Davis


Thank you.
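For reference, a minimal sketch of the suggested call:

```d
import std.socket : Socket;
import std.stdio : writeln;

void main()
{
    // Socket.hostName is a static property; it works on both Windows and Posix.
    writeln(Socket.hostName);
}
```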


Pointer to environment.get

2023-08-28 Thread Vino via Digitalmars-d-learn

Hi All,

  The below code is not working, hence requesting your help.

Code:
```
import std.stdio;
import std.process: environment;
void main () {
   int* ext(string) = ("PATHEXT");
   writeln(*ext);
}
```