Re: New update fix

2024-03-03 Thread Steven Schveighoffer via Digitalmars-d-learn

On Saturday, 2 March 2024 at 09:18:58 UTC, user1234 wrote:

On Saturday, 2 March 2024 at 08:41:40 UTC, Salih Dincer wrote:

SLM,

What exactly did this patch with the new update fix?


Nothing, it looks like what happened is that the issue was 
wrongly referenced by a dlang.org PR 
(https://github.com/dlang/dlang.org/pull/3701/commits/4e8db30f0bf3c330c3431e83fe8a75f843b40857).


Not wrongly referenced. The PR changed the spec to be clearer 
about the behavior. The behavior did not change.


The bug was closed as “fixed” incorrectly. I switched it to 
“wontfix”.


The change log generator must have picked it up because of that.

-Steve


Re: Question on shared memory concurrency

2024-03-03 Thread Richard (Rikki) Andrew Cattermole via Digitalmars-d-learn

A way to do this without spawning threads manually:

```d
import std.parallelism : TaskPool, parallel, taskPool, defaultPoolThreads;
import std.stdio : writeln;
import std.range : iota;

enum NSWEPT = 1_000_000;
enum NCPU = 4;

void main() {
import core.atomic : atomicLoad, atomicOp;

shared(uint) value;

defaultPoolThreads(NCPU);
TaskPool pool = taskPool();

foreach(_; pool.parallel(iota(NSWEPT))) {
atomicOp!"+="(value, 1);
}

writeln(pool.size);
writeln(atomicLoad(value));
}
```

Unfortunately I could only use the default task pool; creating a new one 
took too long on run.dlang.io.


I also had to decrease NSWEPT because anything larger would take too long.
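For reference, the explicit-pool variant he alludes to would look roughly like the sketch below. It uses the documented `std.parallelism` API (`new TaskPool(nWorkers)` and `finish`), but it is an illustration, not code from the post:

```d
import std.parallelism : TaskPool;
import std.range : iota;
import std.stdio : writeln;
import core.atomic : atomicLoad, atomicOp;

enum NSWEPT = 1_000_000;
enum NCPU = 4;

void main()
{
    shared uint value;

    // A dedicated pool with NCPU worker threads, instead of the
    // process-wide default pool returned by taskPool().
    auto pool = new TaskPool(NCPU);
    scope(exit) pool.finish(); // shut the workers down when done

    foreach (_; pool.parallel(iota(NSWEPT)))
        atomicOp!"+="(value, 1);

    writeln(atomicLoad(value)); // all NSWEPT increments land on one counter
}
```

The trade-off is the pool startup cost he mentions: spinning up a fresh `TaskPool` per run is what made this approach too slow on run.dlang.io.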


Question on shared memory concurrency

2024-03-03 Thread Andy Valencia via Digitalmars-d-learn
I tried a shared memory parallel increment.  Yes, it's basically 
a cache line thrasher, but I wanted to see what's involved in 
shared memory programming.  Even though I tried to follow all the 
rules to make the memory truly shared (not thread-local), it appears I 
failed, as the wait loop at the end only ever sees its own local 250 
million increments?


```d
import core.atomic : atomicFetchAdd;
import std.stdio : writeln;
import std.concurrency : spawn;
import core.time : msecs;
import core.thread : Thread;

const uint NSWEPT = 1_000_000_000;
const uint NCPU = 4;

void
doadd(ref shared(uint) val)
{
    for (uint count = 0; count < NSWEPT/NCPU; ++count) {
        atomicFetchAdd(val, 1);
    }
}

void
main()
{
    shared(uint) val = 0;

    for (int x = 0; x < NCPU-1; ++x) {
        spawn(&doadd, val);
    }
    doadd(val);
    while (val != NSWEPT) {
        Thread.sleep(1.msecs);
    }
}
```
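The failure mode here is that `spawn` copies its value arguments, so each thread ends up incrementing its own copy of `val`. One way to make all threads hit a single counter (a hedged sketch, not code from the thread) is to pass a pointer to the shared variable instead; `NSWEPT` is scaled down so the example finishes quickly:

```d
import core.atomic : atomicFetchAdd, atomicLoad;
import std.concurrency : spawn;
import core.thread : Thread;
import core.time : msecs;
import std.stdio : writeln;

enum NSWEPT = 1_000_000;   // reduced from the post so this runs fast
enum NCPU = 4;

// Each worker bumps the counter through a pointer, so every thread
// touches the same memory location rather than a per-thread copy.
void doadd(shared(uint)* val)
{
    foreach (count; 0 .. NSWEPT / NCPU)
        atomicFetchAdd(*val, 1);
}

void main()
{
    shared uint val = 0;

    foreach (x; 0 .. NCPU - 1)
        spawn(&doadd, &val);

    doadd(&val);

    // Spin until every worker's increments are visible.
    while (atomicLoad(val) != NSWEPT)
        Thread.sleep(1.msecs);

    writeln(atomicLoad(val)); // 1000000
}
```

A `shared(uint)*` is accepted by `spawn` because the pointee is `shared`; an unshared pointer would be rejected at compile time.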


Re: Why does disabling a struct's postblit increase its size in memory?

2024-03-03 Thread Paul Backus via Digitalmars-d-learn

On Saturday, 2 March 2024 at 19:29:47 UTC, Per Nordlöw wrote:

On Saturday, 2 March 2024 at 19:28:08 UTC, Per Nordlöw wrote:

On Saturday, 2 March 2024 at 19:11:42 UTC, kinke wrote:
Not according to run.dlang.io, for all available DMD 
versions. Perhaps your tested `S` was nested in some 
function/aggregate and so had an implicit context pointer.


Ahh. Yes. Indeed. My mistake. Thanks.


Thanks. Neither my web searches nor ChatGPT Plus could figure 
that out.


FYI, you can dump the layout of a struct, including hidden 
fields, by iterating over its `.tupleof` property:


```d
void main()
{
struct S
{
@disable this(this);
int n;
}

static foreach (field; S.tupleof)
pragma(msg,
typeof(field).stringof, " ",
__traits(identifier, field), " ",
"at ", field.offsetof
);
}
```

This example prints out

```
int n at 0LU
void* this at 8LU
```
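The `void* this` field is the hidden context pointer that kinke's diagnosis refers to: a non-`static` struct declared inside a function carries a frame pointer so it can reach the enclosing locals. A small sketch (illustrative names, not from the thread) showing that marking the nested struct `static` removes it:

```d
import std.stdio : writeln;

void main()
{
    // Non-static nested struct: gets a hidden context pointer
    // to main's stack frame.
    struct Nested { @disable this(this); int n; }

    // static drops the context pointer; only declared fields
    // contribute to the size.
    static struct Flat { @disable this(this); int n; }

    writeln(Nested.sizeof); // 16 on 64-bit: int + padding + void*
    writeln(Flat.sizeof);   // 4: just the int
}
```

Declaring the struct at module scope has the same effect as `static` here.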
```