Re: Toolchain with ldc and AArch64 OSX

2023-06-24 Thread max haughton via Digitalmars-d-learn

On Saturday, 24 June 2023 at 15:16:37 UTC, Cecil Ward wrote:
I have LDC running on an ARM Mac. If anyone else out there is 
an LDC or GDC user, could you knock up a quick shell program to 
compile and link a .d file to produce an executable? I found the 
linker, but these tools are all new to me and a bit of help 
would save me a lot of trial and error and frustration as I try 
to find docs. GDC would be great too.


I have managed to achieve this before on a Raspberry Pi (AArch64 
Debian Linux), where the compiler can link and generate an 
executable in an integrated fashion in a single command. The 
OSX tools seem rather different, however.


I’m going to try installing GDC on the Mac next, have got that 
running on the Pi too successfully.


I have ldc installed (from `brew`) on my (also ARM) Mac and it 
works fine. Or do you specifically want to work out which linker 
to invoke manually, and so on?


I'm not sure if gdc is currently easy to obtain on ARM Macs. I 
think it should work fine, but some packages hadn't enabled ARM 
support on macOS yet, last time *I* checked at least.


Re: How to get the body of a function/asm statement in hexadecimal

2023-01-29 Thread max haughton via Digitalmars-d-learn
On Sunday, 29 January 2023 at 21:45:11 UTC, Ruby the Roobster 
wrote:

I'm trying to do something like

```d
void main()
{
    import std.stdio;
    auto d = &c;   // take the address of c (a function pointer)
    *d.writeln;    // attempt to dereference and print it -- this is what fails
}

void c()
{
}
```

In an attempt to get the hexadecimal representation of the 
machine code of a function.  Of course, function pointers 
cannot be dereferenced.  What do?


Furthermore, I would like to be able to do the same for an 
`asm` statement.


The function pointer can be cast to a data pointer type. It is 
worth saying, however, that it is not trivial to find where the 
*end* of a function is. On x86 it's not even trivial to find the 
end of an instruction!


If you'd just like the bytes for inspection, you could use a tool 
like objdump. For more complicated situations you will need to 
use a hack to tell you where the end of a function is.
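
A minimal sketch of the cast approach; the 32-byte length is an 
arbitrary guess, since finding the real end of the function is 
the hard part:

```d
import std.stdio;

void c() {}

void main()
{
    // cast the function pointer to a data pointer and read the code bytes
    auto p = cast(const(ubyte)*) &c;
    foreach (b; p[0 .. 32])      // 32 is an arbitrary guess, NOT the real length
        writef("%02x ", b);
    writeln();
}
```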


Re: Solving optimization problems with D

2023-01-01 Thread max haughton via Digitalmars-d-learn

On Sunday, 1 January 2023 at 21:11:06 UTC, Ogi wrote:
I’ve read this [series of articles](https://www.gamedeveloper.com/design/decision-modeling-and-optimization-in-game-design-part-1-introduction) about using Excel Solver for all kinds of optimization problems. This is very neat, but of course, I would prefer to write models with code instead, preferably in D. I glanced at mir-optim but it requires knowledge of advanced math. Is there something more approachable for a layperson?


What do you want to optimize? Optimization in general requires 
reasonably advanced mathematics, whereas a single problem can be 
simplified.


Re: Can we ease WASM in D ?

2022-11-16 Thread max haughton via Digitalmars-d-learn
On Wednesday, 16 November 2022 at 23:00:40 UTC, Adam D Ruppe 
wrote:
On Wednesday, 16 November 2022 at 22:51:31 UTC, bioinfornatics 
wrote:

[...]


It would be pretty cool if you could just mark `@wasm` on a 
function and have it magically convert... the dcompute thing I 
*think* does something like this, but I'm not sure.



[...]


What I've done before (including the webassembly.arsdnet.net 
website) is have the server call the compiler as-needed when 
requesting the file, which makes it feel pretty transparent. 
You might want to do that too, it'd be on the file level 
instead of on the function level but it works.


This is what it does, albeit with a pretty wacky pipeline because 
GPUs be crazy.


Re: dmd as a library

2022-11-07 Thread max haughton via Digitalmars-d-learn

On Tuesday, 8 November 2022 at 00:05:18 UTC, vushu wrote:

Any where to find learning material for using dmd as a library?

thanks.


What do you want to do with it?


Re: how to benchmark pure functions?

2022-10-29 Thread max haughton via Digitalmars-d-learn

On Thursday, 27 October 2022 at 18:41:36 UTC, Dennis wrote:

On Thursday, 27 October 2022 at 17:17:01 UTC, ab wrote:
How can I prevent the compiler from removing the code I want 
to measure?


With many C compilers, you can use volatile assembly blocks for 
that. With LDC -O3, a regular assembly block also does the 
trick currently:


```D
void main()
{
import std.datetime.stopwatch;
import std.stdio: write, writeln, writef, writefln;
import std.conv : to;

void f0() {}
void f1()
{
foreach(i; 0..4_000_000)
{
// nothing, loop gets optimized out
}
}
void f2()
{
foreach(i; 0..4_000_000)
{
// defeat optimizations
asm @safe pure nothrow @nogc {}
}
}
auto r = benchmark!(f0, f1, f2)(1);
writeln(r[0]); // 4 μs
writeln(r[1]); // 4 μs
writeln(r[2]); // 1 ms
}
```


FYI, I recommend a volatile data dependency rather than 
injecting volatile asm into the code, i.e. don't modify the pure 
function; instead make sure its result is actually used in the 
eyes of the compiler.
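
A rough sketch of what I mean, with a made-up `pureWork` 
function: accumulate the results into a sink that is ultimately 
printed, so the work can't be treated as dead code:

```d
import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

int pureWork(int x) pure nothrow @nogc
{
    int acc;
    foreach (i; 0 .. 1_000)
        acc += x * i;
    return acc;
}

void main()
{
    long sink;
    auto r = benchmark!({
        foreach (i; 0 .. 10_000)
            sink += pureWork(i);   // result feeds a value that is later printed
    })(1);
    writeln(r[0], " (sink = ", sink, ")");  // printing the sink keeps the work alive
}
```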


Re: vectorization of a simple loop -- not in DMD?

2022-07-14 Thread max haughton via Digitalmars-d-learn

On Thursday, 14 July 2022 at 13:00:24 UTC, ryuukk_ wrote:
On Thursday, 14 July 2022 at 05:30:58 UTC, Siarhei Siamashka 
wrote:

On Tuesday, 12 July 2022 at 13:23:36 UTC, ryuukk_ wrote:
I wonder if DMD/LDC/GDC have built in tools to profile and 
track performance


Linux has a decent system wide profiler: 
https://perf.wiki.kernel.org/index.php/Main_Page
And there are other useful tools, such as callgrind. To take 
advantage of all these tools, DMD/LDC/GDC only need to provide 
debugging symbols in the generated binaries, which they 
already do. Profiling applications to identify performance 
bottlenecks is very easy nowadays.


I am not talking about linux, and i am not talking about 3rd 
party tools


I am talking about the developers of DMD/LDC/GDC, do they 
profile the compilers, do they provide ways to monitor/track 
performance? do they benchmark specific parts of the compilers?


I am not talking about the output of valgrind

Zig also has: https://ziglang.org/perf/ (very slow to load)

Having such thing is more useful than being able to plug 
valgrind god knows how into the compiler and try to decipher 
what does what and what results correspond to what internally, 
and what about a graph over time to catch regressions?


DMD is very fast at compiling code, so I guess Walter is doing 
enough work to monitor all of that


LDC on the other hand... they'd benefit a lot from having such a 
thing in place


Running valgrind on the compiler is completely trivial. Built-in 
profilers are often terrible. LDC, GDC, and dmd all have 
instrumenting profilers built in, of varying quality; gprof in 
particular is somewhat infamous.


dmd isn't particularly fast, it does a lot of unnecessary work. 
LDC is slow because LLVM is slow.


We need a graph over time, yes.


Re: vectorization of a simple loop -- not in DMD?

2022-07-11 Thread max haughton via Digitalmars-d-learn

On Monday, 11 July 2022 at 18:15:16 UTC, Ivan Kazmenko wrote:

Hi.

I'm looking at the compiler output of DMD (-O -release), LDC 
(-O -release), and GDC (-O3) for a simple array operation:


```
void add1(int[] a)
{
    foreach (i; 0 .. a.length)
        a[i] += 1;
}
```

Here are the outputs: https://godbolt.org/z/GcznbjEaf

From what I gather at the view linked above, DMD does not use 
XMM registers for speedup, and does not unroll the loop either. 
 Switching between 32bit and 64bit doesn't help either.  
However, I recall in the past it was capable of at least some 
of these optimizations.  So, how do I enable them for such a 
function?


Ivan Kazmenko.


How long ago is the past? The godbolt.org dmd is quite old.

The dmd backend is ancient, it isn't really capable of these 
kinds of loop optimizations.


Re: Is there any implementation of a 128bit integer?

2022-07-10 Thread max haughton via Digitalmars-d-learn

On Monday, 11 July 2022 at 00:19:23 UTC, Era Scarecrow wrote:

On Friday, 8 July 2022 at 15:32:44 UTC, Rob T wrote:

[...]


There was a discussion on this not long ago. Walter tried 
implementing it recently too, though I'm guessing he gave up.


[...]


Note that you're replying to a message containing a link to the 
implementation that is in the standard library today.


Re: Enforce not null at compile time?

2022-06-20 Thread max haughton via Digitalmars-d-learn

On Monday, 20 June 2022 at 17:48:48 UTC, Antonio wrote:
Is there any way to specify that a variable, member or 
parameter can't be null?


You can use an invariant if it's a member of an aggregate but be 
warned that these are only checked at the boundaries of public 
member functions.
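
A minimal sketch (made-up types) just to show where the check 
lives and when it runs:

```d
class Widget {}

struct Holder
{
    Widget w;

    // checked on entry/exit of public member functions (in non-release builds)
    invariant
    {
        assert(w !is null, "w must not be null");
    }

    this(Widget widget) { this.w = widget; }

    void use() { /* invariant runs before and after this call */ }
}

void main()
{
    auto h = Holder(new Widget);
    h.use();                 // fine
    // h.w = null; h.use();  // would trip the invariant at the call boundary
}
```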


Re: How to map machine instctions in memory and execute them? (Aka, how to create a loader)

2022-06-08 Thread max haughton via Digitalmars-d-learn

On Monday, 6 June 2022 at 15:13:45 UTC, rempas wrote:
I tried to find anything that will show code but I wasn't able 
to find anything expect for an answer on stackoverflow. I would 
find a lot of theory but no practical code that works. What I 
want to do is allocate memory (with execution mapping), add the 
machine instructions and then allocate another memory block for 
the data and finally, execute the block of memory that contains 
the code. So something like what the OS loader does when 
reading an executable. I have come with the following code:


[...]


If you know the instructions ahead of time, LDC and GDC will 
both let you put a function in its own section, and you can then 
use some linker magic to get pointers to the beginning and end 
of that section.
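
A sketch of the LDC flavour of this, assuming Linux/ELF where 
GNU ld emits `__start_`/`__stop_` symbols for any section whose 
name is a valid C identifier; the section name is made up and 
none of this is portable:

```d
import ldc.attributes : section;
import std.stdio : writefln;

@section("myfunc")            // place target() in its own section
void target() { }

// linker-generated bounds of the "myfunc" section (GNU ld convention)
extern (C) extern __gshared ubyte __start_myfunc;
extern (C) extern __gshared ubyte __stop_myfunc;

void main()
{
    auto begin = &__start_myfunc;
    auto end   = &__stop_myfunc;
    // includes any padding/alignment the linker adds, hence "roughly"
    writefln("target() occupies roughly %s bytes", end - begin);
}
```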


Re: want to confirm: gc will not free a non-gc-allocated field of a gc-allocated object?

2022-06-07 Thread max haughton via Digitalmars-d-learn

On Tuesday, 7 June 2022 at 09:12:25 UTC, ag0aep6g wrote:

On 07.06.22 11:00, max haughton wrote:

On Tuesday, 7 June 2022 at 08:56:29 UTC, ag0aep6g wrote:

[...]

That wasn't mw's question.


I also answered this in my original one IIRC.


You didn't.


OK, I must have assumed it was obvious that it wouldn't free it.


Re: want to confirm: gc will not free a non-gc-allocated field of a gc-allocated object?

2022-06-07 Thread max haughton via Digitalmars-d-learn

On Tuesday, 7 June 2022 at 08:56:29 UTC, ag0aep6g wrote:

On 07.06.22 03:02, max haughton wrote:

I'm talking about the data in the array.

void[] might contain pointers; float[] does not, so it won't be 
scanned.


That wasn't mw's question.


I also answered this in my original one IIRC. There's nothing to 
free because the GC didn't allocate it.


Every allocator needs either a GC or a fast way to free memory, 
which includes checking whether it owns that memory in the first 
place. Annoyingly, you can't express this in the malloc/free 
pattern very well at all. One of the many gifts C has given the 
world.
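
As an aside, std.experimental.allocator does let you ask the 
ownership question that malloc/free cannot; a rough sketch (the 
allocator choice is arbitrary):

```d
import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;
import std.typecons : Ternary;

void main()
{
    auto r = Region!Mallocator(1024);   // 1 KiB region carved out of malloc

    auto inside  = r.allocate(64);
    auto outside = new ubyte[64];       // GC memory, not owned by the region

    assert(r.owns(inside)  == Ternary.yes);
    assert(r.owns(outside) == Ternary.no);
}
```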


Re: want to confirm: gc will not free a non-gc-allocated field of a gc-allocated object?

2022-06-06 Thread max haughton via Digitalmars-d-learn

On Tuesday, 7 June 2022 at 00:40:56 UTC, ag0aep6g wrote:

On 07.06.22 00:22, max haughton wrote:
float[] doesn't contain pointers, so the GC won't do anything 
to or with it.


wat

float[] is a pointer (plus a length). The GC will deal with it 
like any other pointer.


I'm talking about the data in the array.

void[] might contain pointers; float[] does not, so it won't be 
scanned. Or at least shouldn't be...


This is why you shouldn't use void[] for everything if you can 
help it.


```d
void main()
{
    import std.stdio;
    auto voidArray = typeid(void[]);
    auto floatArray = typeid(float[]);
    // bit 0 of flags() means "the element type may contain pointers into GC memory"
    writeln(voidArray.next.flags() & 1);  // 1: void could hold pointers, so void[] gets scanned
    writeln(floatArray.next.flags() & 1); // 0: float cannot, so float[] is not scanned
}
```


Re: want to confirm: gc will not free a non-gc-allocated field of a gc-allocated object?

2022-06-06 Thread max haughton via Digitalmars-d-learn

On Monday, 6 June 2022 at 22:18:08 UTC, mw wrote:

Hi,

Suppose I have this code:

```
class GCAllocated {
  float[] data;

  this() {
    // non-gc-allocated field
    this.data = cast(float[])(core.stdc.stdlib.malloc(nBytes)[0 .. nBytes]);
  }
}

void foo() {
  auto obj = new GCAllocated();  // gc-allocated owning object
  ...
}

```

So when `obj` is cleaned up by the GC, `obj.data` won't be freed 
by the GC: because `data` is non-GC-allocated (it lives on the 
non-GC heap), the GC scanner will just skip that field during a 
collection scan. Is this understanding correct?


I need this behavior for a special purpose, so I want to 
confirm it.


Thanks.


float[] doesn't contain pointers, so the GC won't do anything to 
or with it.


Re: What are (were) the most difficult parts of D?

2022-05-22 Thread max haughton via Digitalmars-d-learn

On Sunday, 22 May 2022 at 20:05:33 UTC, Chris Piker wrote:

On Sunday, 22 May 2022 at 19:33:21 UTC, rikki cattermole wrote:

I should probably jump back to another thread, but maybe one 
more reply isn't too much off topic discussion...


[...]


https://github.com/gcc-mirror/gcc/tree/master/gcc/testsuite

Look for folders starting with gdc.

There is no analogue of Buildkite per se.


Re: What are (were) the most difficult parts of D?

2022-05-21 Thread max haughton via Digitalmars-d-learn

On Saturday, 21 May 2022 at 19:00:04 UTC, Johan wrote:

On Tuesday, 17 May 2022 at 06:28:10 UTC, cc wrote:

On Monday, 16 May 2022 at 15:08:15 UTC, H. S. Teoh wrote:

[...]


According to the dlang.org wiki entry for LDC:


[...]


So I'm not touching it for now.


Lol, what?
Don't misquote the wiki about LDC. If you can run DMD on your 
architecture, LDC has full support too.
The "lacking druntime/phobos support" is about architectures 
that are _not_ x86, x86_64, ARM, AArch64, PowerPC, MIPS, 
WebAssembly. DMD supports only a subset of what LDC (and GDC) 
support.


-Johan


Forget target support, ldc *runs* on more architectures than dmd 
because it doesn't insist on using x87 all the time.


Re: What are (were) the most difficult parts of D?

2022-05-15 Thread max haughton via Digitalmars-d-learn

On Wednesday, 11 May 2022 at 05:41:35 UTC, Ali Çehreli wrote:
What are you stuck at? What was the most difficult features to 
understand? etc.


To make it more meaningful, what is your experience with other 
languages?


Ali


Learning D is almost a complete blur in my memory, but I 
distinctly remember not really understanding `is` expressions 
even well into fiddling around with them inside the compiler.


Re: What are (were) the most difficult parts of D?

2022-05-13 Thread max haughton via Digitalmars-d-learn

On Friday, 13 May 2022 at 21:07:12 UTC, Christopher Katko wrote:
Is there any way we can get numbered errors like C++ / 
Microsoft have?


 E2040 Declaration terminated incorrectly

Because then we could easily have a wiki for common error cases 
with code snippets of it occurring, and a fix for it. Common 
triggers of this error vs the proper code.


And a neat feature that Allegro.cc does, it automatically scans 
forum posts for usages of Allegro functions (e.g. 
al_draw_bitmap() man page will include links to posts that ask 
about al_draw_bitmap and has a Yes/No question next to it "was 
this a helpful post" so you can remove bad matches). Here it 
would be even easier than matching random questions because 
error codes are specific string literals ("E2040") and not some 
sort of regex attempt to match someone asking a question about 
'R[][][] datatypes' or something like that.


Here is an example:

https://www.allegro.cc/manual/5/al_draw_bitmap

and an C++ error page:

https://docwiki.embarcadero.com/RADStudio/Sydney/en/E2040_Declaration_terminated_incorrectly_(C%2B%2B)


We discussed this a while ago in a foundation meeting. The 
details didn't really get hashed out but I think we can 
definitely try it.


I have a PR which alters the error message you get if you do

```d
void main()
{
  func() == 43;
}
```

This is an example of an error message which could be 
counterintuitive to a new D programmer, e.g. maybe they thought 
it would act like an assert or whatever, so we can link to some 
HTML somewhere on dlang.org.




Re: Best Practice to manage large projects with dub?

2022-05-13 Thread max haughton via Digitalmars-d-learn

On Friday, 13 May 2022 at 17:51:55 UTC, Ozan Süel wrote:

Hi
My situation: I'm working on and with many projects based on 
many libraries. I know how to handle these in Git, GitHub and 
Dub. But what would be the Best Practice in current dub?


Let's say we have many apps (1 app = 1 package), which use 
several libs (every app has its own set of used libs, some 
overlapping with other apps); every lib requires further libs to 
work (some overlapping), and so on.


Let's say we want to have 10 apps and need in total 100 packages 
with every possible combination.


One way would be to pump the dub package manager with 100 new 
packages for building only 10 apps. I would like to avoid 
filling the dub database with small packages. A deprecated way 
is to combine dub with GitHub, which still works fine for me.


What would be a good way to manage many small libs to build a 
few big apps? Maybe the solution is around the corner, but it's 
better to ask the experts.


Thanks in advance

Greetings, Ozan


I would highly recommend keeping these small libs in a common 
repository (a monorepo, if you will), but structured so that 
only what is needed is pulled in.


Lots of projects/repositories/dub.jsons == pain



Re: What's a good way to disassemble Phobos?

2022-04-30 Thread max haughton via Digitalmars-d-learn

On Saturday, 30 April 2022 at 18:18:02 UTC, Dukc wrote:
I have figured out that my development build of Phobos is for 
some reason including instances of `__cmp` and `dstrcmp` 
templates from DRuntime in the Phobos binary. Since `-betterC` 
client code does not link Phobos in, it fails if it tries to 
use those functions.


The problem: how do I track down what includes those functions 
in the Phobos binary? The -vasm switch of DMD gives a neat 
output, but I can only get it to show the disassembly of files 
I'm compiling (not already-combined files), and only before the 
linking phase, so there's no info on where the function calls 
point.


I also tried ndisasm. It can disassemble already-compiled 
binaries, but its output is utter trash. It can't even figure 
out where a function begins or ends, let alone display their 
names. Instead it displays just a sea of `add` instructions for 
the areas between the functions.


I'm looking for something where I could search for the call to 
the DRuntime functions in question, from an already combined .o 
or .a. What do you suggest? I'm on Linux.


If they are templates then try compiling whatever is causing them 
with `-vtemplates=list-instances`. If you can't recompile then 
you may be stuck grepping whatever disassembler output works.


The sea of add instructions is padding; look up what `add BYTE 
PTR [rax], al` assembles to (hint: zero bytes). -vasm isn't a 
good disassembler for anything other than debugging the 
compiler; the jumps and relocations aren't resolved.


Re: stack frame & dangling pointer weirdness

2022-04-21 Thread max haughton via Digitalmars-d-learn

On Thursday, 21 April 2022 at 05:49:12 UTC, Alain De Vos wrote:

Following program:
```
import std.stdio;

void main() @trusted
{
    int* p = null;
    void myfun() {
        int x = 2;
        p = &x;
        writeln(p);
        writeln(x);
    }
    myfun();
    *p = 16;
    writeln(p);
    writeln(*p);
}
```

outputs :
7FFFDFAC
2
7FFFDFAC
32767

I don't understand why. Would it be possible to explain  ?


When you pass a pointer to writeln it is conceptually copied 
(the address, that is), but the memory that address points to is 
in no man's land because it belonged to an old stack frame.

As such, that memory gets "overwritten" (it is invalid at this 
point anyway) when you call writeln, so when you dereference the 
pointer you get something left over from writeln's stack rather 
than 16.



Re: Do I have to pass parameters with ref explicitly?

2022-04-16 Thread max haughton via Digitalmars-d-learn

On Sunday, 17 April 2022 at 03:00:28 UTC, Elfstone wrote:
I'm reading some d-sources, and it looks like they pass big 
structs by value.

Such as:

Matrix4x4f opBinary(string op)(Matrix4x4f rhs) { ... }

I came from a C++ background, and I would have written:

Matrix4x4f opBinary(string op)(const ref Matrix4x4f rhs) { 
... }


I haven't found anything in the docs yet. Will the d-compiler 
optimize them into virtually the same? Can someone give me a 
reference?


It's the same as C++.


Re: Source code for vibe.d listenTCP()

2022-02-26 Thread max haughton via Digitalmars-d-learn

On Saturday, 26 February 2022 at 22:25:46 UTC, Chris Piker wrote:

Hi D

I'm trying out the vibe.d framework for the first time and it 
looks like many of the functions mutate some hidden global 
state.  Take for example `listenTCP`.  To help me build a 
mental picture of the framework I'd like to see what global 
state is mutated, but for the life of me I can't even find the 
source code to listenTCP().  The obvious method of:

```bash
git clone g...@github.com:vibe-d/vibe.d.git
cd vibe.d
grep -R -n listenTCP
```
returns many instances where listenTCP is used, but none that 
look like a definition.  It's quite possible I just overlooked 
it, or maybe it's implemented as a mixin or something weird 
like that.


Anyway if someone can just help me find the source code to 
listenTCP inside vibe.d I'd be grateful.


Thanks for your time,


https://dlang.org/spec/traits.html#getLocation can be a 
brute-force approach to this question when dealing with a new 
codebase.
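
A tiny sketch of that brute-force approach, using a made-up 
local function in place of listenTCP:

```d
import std.stdio : writeln;

int helper(int x) { return x + 1; }   // stand-in for the symbol you are hunting

void main()
{
    // yields (file, line, column) of the declaration
    writeln(__traits(getLocation, helper));
}
```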


Re: How to get instance member value from getSymbolsByUDA

2022-02-26 Thread max haughton via Digitalmars-d-learn
On Saturday, 26 February 2022 at 17:06:06 UTC, Remi Thebault 
wrote:
On Saturday, 26 February 2022 at 12:01:14 UTC, max haughton 
wrote:


Getting the UDAs from inside a symbol must be done via a 
recursive procedure in the same manner one would identify the 
aforementioned symbol i.e. you have to go through the fields 
looking for UDAs *then* use getUDAs.


This is because UDAs cannot convey information without their 
context, so the trait doesn't look recursively.


I don't need to get access to the UDA value, I need the value 
of the field decorated with UDA.
Finally I can get access to the member using 
`__traits(getMember, req, paramSymbols[0].stringof)`.


I see, my apologies. FYI, if possible you should avoid using 
stringof, because it's basically only intended for debugging so 
it isn't always consistent; this is what `__traits(identifier)` 
is for.
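
A quick sketch of the difference, with a made-up struct standing 
in for yours:

```d
import std.stdio : writeln;

struct Req { int userId = 42; }

void main()
{
    Req req;
    alias field = __traits(getMember, Req, "userId");

    writeln(__traits(identifier, field));   // always the declared name: "userId"
    writeln(field.stringof);                // a textual, debugging-oriented form
    writeln(__traits(getMember, req, __traits(identifier, field)));  // 42
}
```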


Re: How to get instance member value from getSymbolsByUDA

2022-02-26 Thread max haughton via Digitalmars-d-learn
On Saturday, 26 February 2022 at 11:38:16 UTC, Remi Thebault 
wrote:
On Saturday, 26 February 2022 at 11:26:54 UTC, max haughton 
wrote:
On Saturday, 26 February 2022 at 10:39:18 UTC, Remi Thebault 
wrote:

Hi all,

I'm trying to establish a REST API by using the type system 
(used in both client and server code).


[...]


https://dlang.org/phobos/std_traits.html#getUDAs


How do I use `getUDAs` in this context?
I have `getUDAs!(req, Param).length == 0`. I think it only 
works on the struct attributes, not on the fields attributes.


Getting the UDAs from inside a symbol must be done via a 
recursive procedure in the same manner one would identify the 
aforementioned symbol i.e. you have to go through the fields 
looking for UDAs *then* use getUDAs.


This is because UDAs cannot convey information without their 
context, so the trait doesn't look recursively.
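
A rough sketch of that field-by-field walk, with `Param` and 
`Req` as stand-ins for your types:

```d
import std.stdio : writeln;
import std.traits : getUDAs, hasUDA;

struct Param { string name; }

struct Req
{
    @Param("id") int userId = 42;
    string other;
}

void main()
{
    Req req;
    static foreach (member; __traits(allMembers, Req))
    {{
        // find fields carrying @Param, *then* read the UDA and the field value
        static if (hasUDA!(__traits(getMember, Req, member), Param))
        {
            enum uda = getUDAs!(__traits(getMember, Req, member), Param)[0];
            writeln(uda.name, " = ", __traits(getMember, req, member));  // id = 42
        }
    }}
}
```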




Re: How to get instance member value from getSymbolsByUDA

2022-02-26 Thread max haughton via Digitalmars-d-learn
On Saturday, 26 February 2022 at 10:39:18 UTC, Remi Thebault 
wrote:

Hi all,

I'm trying to establish a REST API by using the type system 
(used in both client and server code).


[...]


https://dlang.org/phobos/std_traits.html#getUDAs


Re: stripping binaries from LDC2

2022-02-07 Thread max haughton via Digitalmars-d-learn

On Monday, 7 February 2022 at 14:20:31 UTC, Arjan wrote:

On Monday, 7 February 2022 at 13:14:19 UTC, max haughton wrote:

On Monday, 7 February 2022 at 12:16:53 UTC, Arjan wrote:
In C++ our release builds are built with `-O2 -g` and the 
resulting binaries are stripped with GNU strip.

Is this also possible with LDC2 generated binaries for D code?
So build D code with `-O2 -g` and then strip the resulting 
binary?


Why build with debug info if you're going to strip it anyway?


The stripped release binaries go to the client; when a problem 
occurs we get a core dump, and using the core dump plus the 
original binary/symbols gives full debug info off-site.


It is common practice.


Then yes, you can do that.

D works in basically exactly the same way C++ does in this regard.


Re: stripping binaries from LDC2

2022-02-07 Thread max haughton via Digitalmars-d-learn

On Monday, 7 February 2022 at 12:16:53 UTC, Arjan wrote:
In C++ our release builds are built with `-O2 -g` and the 
resulting binaries are stripped with GNU strip.

Is this also possible with LDC2 generated binaries for D code?
So build D code with `-O2 -g` and then strip the resulting 
binary?


Why build with debug info if you're going to strip it anyway?


Re: gdc or ldc for faster programs?

2022-01-29 Thread max haughton via Digitalmars-d-learn

On Saturday, 29 January 2022 at 18:28:06 UTC, Ali Çehreli wrote:

On 1/29/22 10:04, Salih Dincer wrote:

> Could you also try the following code with the same
configurations?

The program you posted with 2 million random values:

ldc 1.9 seconds
gdc 2.3 seconds
dmd 2.8 seconds

I understand such short tests are not definitive but to have a 
rough idea between two programs, the last version of my program 
that used sprintf with 2 million numbers takes less time:


ldc 0.4 seconds
gdc 0.5 seconds
dmd 0.5 seconds

(And now we know gdc can go about 7% faster with additional 
command line switches.)


Ali


You need to be compiling with PGO to test the compilers' 
optimizers to the maximum. Without PGO they have to assume a 
fairly conservative flow through the code, which means things 
like inlining and register allocation are effectively flying 
blind.




Re: Is there a way to make a function parameter accept only values that can be checked at compile time?

2021-12-29 Thread max haughton via Digitalmars-d-learn

On Wednesday, 29 December 2021 at 16:51:47 UTC, rempas wrote:
On Wednesday, 29 December 2021 at 16:27:22 UTC, max haughton 
wrote:
Inlining + constant propagation. Fancier iterations on those 
exist too, but 90% of the speedup will come from those since 
for it to matter they likely would've been used in the first place.


Sounds like black magic? So if I write this:

```
int add(int num1, int num2) { return num1 + num2; }

void main() {
  int number = add(10, 20);
}
```

The parameters are literals so will D translate this to:

```
int add(int num1, int num2) { return num1 + num2; } // Normal one
int add_temp_func() { return 30; } // Created for the function 
call in main. No `add` instruction


void main() {
  int number = add(10, 20); // Will actually create and call 
"add_temp_func"

}
```

Or even better, this:

```
int add(int num1, int num2) { return num1 + num2; }
void main() {
  int number = add(10, 20); // What we will type and it will 
get replaced with the following line
  int number = 30; // So it calculates the result at compile 
times and doesn't even do a function call

}
```

Is this what D can do? This is what I'm talking about when 
saying been able to use values at compile time.


This is handled by the compiler backend. The simplest way it can 
do this kind of optimization is by "inlining" the function.


This is done by transplanting the function body into the place 
it's used. At this point the compiler simply sees "= 10 + 20", 
which it can trivially turn into "= 30" through something called 
constant folding.


The compiler can create new function bodies (like the temp one 
you introduce above) but this is a much more niche optimization. 
They favour inlining much more aggressively.


I'm tempted to do a YouTube video of a D program being compiled 
all the way down to machine code, to show what the compiler does 
for you.
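
A small sketch of the distinction, for what it's worth: the 
optimizer *may* fold the runtime call, while `enum` (CTFE) 
*guarantees* compile-time evaluation:

```d
int add(int num1, int num2) { return num1 + num2; }

void main()
{
    int number = add(10, 20);   // with -O this is inlined and folded to 30
    enum folded = add(10, 20);  // evaluated at compile time by definition (CTFE)
    static assert(folded == 30);
}
```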


Re: Is there a way to make a function parameter accept only values that can be checked at compile time?

2021-12-29 Thread max haughton via Digitalmars-d-learn

On Wednesday, 29 December 2021 at 15:53:38 UTC, rempas wrote:
On Wednesday, 29 December 2021 at 11:09:04 UTC, max haughton 
wrote:
If the value is known at compile time the compiler can pretty 
easily do that for you unless you're really unlucky.


How is this even possible?


Inlining + constant propagation. Fancier iterations on those 
exist too, but 90% of the speedup will come from those since for 
it to matter they likely would've been used in the first place.


Re: Is there a way to make a function parameter accept only values that can be checked at compile time?

2021-12-29 Thread max haughton via Digitalmars-d-learn

On Wednesday, 29 December 2021 at 08:56:47 UTC, rempas wrote:
On Tuesday, 28 December 2021 at 22:26:33 UTC, max haughton 
wrote:
Why do you need this? What's wrong with a normal branch in 
this case.


Runtime performance. I want the value to get checked at compile 
time and use "static if" with it


If the value is known at compile time the compiler can pretty 
easily do that for you unless you're really unlucky.


Re: Is there a way to make a function parameter accept only values that can be checked at compile time?

2021-12-28 Thread max haughton via Digitalmars-d-learn

On Tuesday, 28 December 2021 at 21:19:29 UTC, rempas wrote:
I would like to know if that's possible. Actually I would like 
to do something like the following:


```
extern (C) void main() {
  void print_num(int num, comp_time_type int mul) {
static if (is(mul == ten)) {
  printf("%d\n", num * 10);
} else static if (is(mul == three)) {
  printf("%d\n", num * 3);
} else {
  printf("%d\n", num);
}
  }

  int multi = 211;
  print_num(10, 3); // Ok, accept this
  print_num(10, multi); // Error, do not accept this
}
```

So I want to have "mul" only accept values that can be 
calculate at compile time so I can use it with things like 
"static if". Is this possible?


Why do you need this? What's wrong with a normal branch in this 
case?
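
That said, if it really is needed, the usual way to take a value 
that must be known at compile time is a template value 
parameter; a rough sketch of what the question describes (not 
the OP's exact code):

```d
import core.stdc.stdio : printf;

// mul is a template value parameter, so it must be a compile-time constant
void print_num(int mul)(int num)
{
    static if (mul == 10)
        printf("%d\n", num * 10);
    else static if (mul == 3)
        printf("%d\n", num * 3);
    else
        printf("%d\n", num);
}

extern (C) void main()
{
    print_num!3(10);         // OK: 3 is known at compile time
    int multi = 211;
    // print_num!multi(10);  // error: multi is a runtime value
}
```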


Re: How to print unicode characters (no library)?

2021-12-26 Thread max haughton via Digitalmars-d-learn

On Sunday, 26 December 2021 at 21:22:42 UTC, Adam Ruppe wrote:

On Sunday, 26 December 2021 at 20:50:39 UTC, rempas wrote:

[...]


write just transfers a sequence of bytes. It doesn't know nor 
care what they represent - that's for the receiving end to 
figure out.



[...]


You are mistaken. There are several exceptions: UTF-16 can come 
in pairs, and even UTF-32 has multiple "characters" that 
combine into one thing on screen.


I prefer to think of a string as a little virtual machine that 
can be run to produce output rather than actually being 
"characters". Even with plain ascii, consider the backspace 
"character" - it is more an instruction to go back than it is a 
thing that is displayed on its own.



[...]


This is because the *receiving program* treats them as utf-8 
and runs it accordingly. Not all terminals will necessarily do 
this, and programs you pipe to can do it very differently.



[...]


The [w|d|]string.length function returns the number of elements 
in there, which is bytes for string, 16 bit elements for 
wstring (so bytes / 2), or 32 bit elements for dstring (so 
bytes / 4).


This is not necessarily related to the number of characters 
displayed.



[...]


yes, it just passes bytes through. It doesn't know they are 
supposed to be characters...


I think that mental model is pretty good, actually. Maybe a more 
specific idea exists, but this virtual machine concept does 
teach the new programmer to expect dragons - or at least that 
the days of plain ASCII are long gone (and never really existed, 
e.g. backspace, as you say).


Re: First time using Parallel

2021-12-26 Thread max haughton via Digitalmars-d-learn

On Sunday, 26 December 2021 at 06:10:03 UTC, Era Scarecrow wrote:
 This is curious. I was up for trying to parallelize my code, 
specifically having a block of code calculate some polynomials 
(*Related to Reed Solomon stuff*). So I cracked open 
std.parallel and looked over how I would manage this all.


 To my surprise I found ParallelForEach, which gives the 
example of:


```d
foreach(value; taskPool.parallel(range) ){code}
```

Since my code doesn't require any memory management, shared 
resources or race conditions (*other than stdout*), I plugged 
in an iota and gave it a go. To my amazement no compiling 
issues, and all my cores are in heavy use and it's outputting 
results!


 Now said results are out of order (*and early results are 
garbage from stdout*), but I'd included a bitwidth comment so 
sorting should be easy.

```d
0x3,/*7*/
0x11,   /*9*/
0x9,/*10*/
0x1D,   /*8*/
0x5,/*11*/
0x3,/*15*/
0x53,   /*12*/
0x1B,   /*13*/
0x2B,   /*14*/
```
etc etc.

 Previously years ago I remember having to make a struct and 
then having to pass a function and a bunch of stuff from within 
the struct, often breaking and being hard to get to even work 
so I didn't hardly touch this stuff. This is making outputting 
data MUCH faster and so easily; Well at least on a beefy 
computer and not just some chromebook I'm programming on so it 
can all be on the go.



 So I suppose, is there anything I need to know? About shared 
resources or how to wait until all threads are done?


Parallel programming is one of the deepest rabbit holes you can 
actually get to use in practice. Your question at the moment 
doesn't really have much context to it so it's difficult to 
suggest where you should go directly.


I would start by removing the use of stdout in your loop kernel - 
I'm not familiar with what you are calculating, but if you can 
basically have the (parallel) loop operate from (say) one array 
directly into another then you can get extremely good parallel 
scaling with almost no effort.


Not using stdout in the actual loop should make the code faster 
even without threads, because having a function call in the hot 
code means the compiler's optimizer will give up on certain 
transformations - i.e. do all the work as compactly as possible, 
then output the data in one step at the end.
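
A rough sketch of that shape, with a made-up computation 
standing in for the Reed-Solomon work:

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    enum n = 1_000;
    auto results = new ulong[n];

    // each index is written by exactly one task, so no synchronisation is needed
    foreach (i; taskPool.parallel(iota(n)))
        results[i] = cast(ulong) i * i + 3;   // stand-in for the real polynomial work

    // parallel foreach joins all workers before returning,
    // so results is fully populated (and in order) here
    writeln(results[0 .. 10]);
}
```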


Re: Is DMD still not inlining "inline asm"?

2021-11-12 Thread max haughton via Digitalmars-d-learn

On Friday, 12 November 2021 at 11:32:16 UTC, rempas wrote:
On Thursday, 11 November 2021 at 19:22:33 UTC, max haughton 
wrote:


There's an attribute to tell it the function is safe to inline.


And can't you do that with inline asm?


Not always. The attribute is intended for naked asm since 
inlining could be completely wrong in this case.


Re: Is DMD still not inlining "inline asm"?

2021-11-11 Thread max haughton via Digitalmars-d-learn

On Thursday, 11 November 2021 at 17:29:33 UTC, rempas wrote:

On Thursday, 11 November 2021 at 13:22:15 UTC, Basile B. wrote:


Yes, this is still the case. A particularity of the DMD inliner 
is that it does its job in the front-end, so inlining asm is 
totally impossible. Then, even if inlining was done in the 
backend, inlining of asm would not be guaranteed because the 
byte code is generated at a very late stage, which causes 
problems with the register allocator, the preservation of the 
stack, etc.


For example ldc2 does not inline a trival asm func 
https://godbolt.org/z/1W6r693Tq.


As for now, I know no compiler that can do that.


What? Not even GCC or Clang? Someone said that LDC2 does it 
in two ways in the thread I linked


There's an attribute to tell it the function is safe to inline.


Re: Better debugging?

2021-10-03 Thread max haughton via Digitalmars-d-learn

On Sunday, 3 October 2021 at 22:21:45 UTC, Tim wrote:

Hi all,

I am currently using GDB within VScode with the -gc DMD2 
compiler switch and my debugging is not amazing. Whenever I 
inspect a struct/object it just shows me the pointer rather 
than the object information and strings come up as a gross 
array of characters. Does anybody happen to know whether 
LLDB is better or how I can have a nicer debug environment?


Thanks in advance


Might be something for Iain to weigh in on when it comes to GDC 
specifically, but the non-dmd compilers generate better debug info.


Re: avoid codegen pass

2021-10-02 Thread max haughton via Digitalmars-d-learn

On Saturday, 2 October 2021 at 18:05:06 UTC, Dennis wrote:

On Saturday, 2 October 2021 at 16:57:48 UTC, max haughton wrote:
Do you have optimizations turned on? i.e. are you compiling 
with -O by accident?


Not needed, it's declared:
```D
pragma(inline, true) @property _timezone() @safe const pure nothrow @nogc
```

DMD does inlining in the frontend, and without the `-inline` 
flag it still inlines functions when requested by 
`pragma(inline, true)`. That's why you see it logged even 
without codegen or `-inline`.


That's not what causes the long compile time though, `dmd -v` 
logs passes before doing them, not after, so it's the semantic3 
before the inline pass that's taking all the time.


I was aware of (and am not a fan of) inlining in the frontend, 
but didn't look at the Phobos code.


Honestly, dmd shouldn't have an optimizer IMO; it's not fit for 
purpose anymore. If you want optimizations, use GDC or LDC. 
Inlining doesn't even respect the semantics of the language, IIRC.


Re: avoid codegen pass

2021-10-02 Thread max haughton via Digitalmars-d-learn

On Saturday, 2 October 2021 at 14:44:16 UTC, Padlev wrote:

On Saturday, 2 October 2021 at 13:26:27 UTC, Adam D Ruppe wrote:

On Saturday, 2 October 2021 at 13:24:19 UTC, Padlev wrote:

-o-
how to run only semantic and avoid codegen to have a quicker 
run?


-o- does skip codegen already


so why this? it seems to take a long time from the last import 
xxx semantic3 xxx to reach this


inline scan grasshopper
inlined   std.datetime.systime.SysTime._timezone =>
  std.datetime.systime.SysTime.opAssign!().opAssign
inlined   std.datetime.systime.SysTime._timezone =>
  std.datetime.systime.SysTime.opAssign!().opAssign

Can it not be avoided? I only need it up to semantic; I don't 
care about inline et cetera.


Do you have optimizations turned on? i.e. are you compiling with 
-O by accident?




Re: How use ldc pragmas?

2021-10-01 Thread max haughton via Digitalmars-d-learn

On Friday, 1 October 2021 at 19:23:06 UTC, james.p.leblanc wrote:

D-ers,

After experimenting with ldc's autovectorization of avx code, 
it appears there may
be counter-intuitiveness to the autovectorization (especially 
for complex numbers).

(My comment may be wrong, so any corrections are quite welcome).

[...]


Is it sqrt.32 or sqrt.f32? Try the latter, LLVM docs seem to 
agree.


Re: Development of the foundation of a programming language

2021-09-14 Thread max haughton via Digitalmars-d-learn

On Tuesday, 14 September 2021 at 05:06:01 UTC, Elronnd wrote:
On Tuesday, 14 September 2021 at 03:24:45 UTC, max haughton 
wrote:

On Tuesday, 14 September 2021 at 03:19:46 UTC, Elronnd wrote:
On Monday, 13 September 2021 at 11:40:10 UTC, max haughton 
wrote:

The dragon book barely mentions SSA for example


In fairness, dmd doesn't use SSA either


That's not a good thing.


No, but if the OP's goal is to contribute to dmd, learning SSA 
wouldn't be very helpful beyond a general acclimation to 
compiler arcana.


(Unless they wish to add SSA to dmd--a worthy goal, but perhaps 
not the best thing to start out with.)


The backend is not where our efforts should be going. There is 
way too much work that needs doing above it to motivate working 
on the backend. The backend's design is for the most part 
extremely simple, just buried under 40 years of code.

Besides, there's more to life than dmd; everything else is SSA 
at some point in compilation (e.g. GCC isn't SSA all the way 
down, but GIMPLE is).


Re: object.d: Error: module object is in file 'object.d' which cannot be read

2021-09-13 Thread max haughton via Digitalmars-d-learn
On Tuesday, 14 September 2021 at 03:31:17 UTC, Kenneth Dallmann 
wrote:

On Sunday, 27 March 2011 at 10:28:00 UTC, Ishan Thilina wrote:

[...]





I had the exact same problem, very frustrating. I believe I 
solved the issue; it was a simple error.

When you are installing D on Windows, it asks you to download 
MSVC if it isn't already present. Basically you are missing some 
form of dependency, which could be different based on which 
compiler you are using.

On my Windows machine I fixed the issue by downloading the 
Microsoft Visual Studio IDE and then, from within that 
application, I hit a button to download the C compiler, MSVC.

After I did that I reinstalled D and it works now.

Thank you


Please try to avoid resurrecting very old threads if you can.


Re: Development of the foundation of a programming language

2021-09-13 Thread max haughton via Digitalmars-d-learn

On Tuesday, 14 September 2021 at 03:19:46 UTC, Elronnd wrote:
On Monday, 13 September 2021 at 11:40:10 UTC, max haughton 
wrote:

The dragon book barely mentions SSA for example


In fairness, dmd doesn't use SSA either


That's not a good thing.


Re: Development of the foundation of a programming language

2021-09-13 Thread max haughton via Digitalmars-d-learn
On Monday, 13 September 2021 at 04:08:53 UTC, rikki cattermole 
wrote:


On 13/09/2021 3:21 PM, leikang wrote:
Are there any recommended books or videos to learn about the 
principles of compilation? What else should I learn besides 
the principles of compilation?


The classic book on compilers that Walter recommends is the 
dragon book.


https://smile.amazon.com/Compilers-Principles-Techniques-Tools-2nd-dp-0321486811/dp/0321486811

(D Language Foundation is a charity Amazon Smile recognizes).


The dragon book is really, really showing its age these days, so 
I would highly recommend getting a copy but not reading it 
fully. "Engineering a Compiler" is much better pedagogically. 
The dragon book barely mentions SSA, for example, although the 
sections they did properly bother to update towards the end are 
quite interesting.


"Crafting interpreters" is quite good, I recommend it for 
learning how to actually write a parser without getting bogged 
down in totally useless theory.


Steven Muchnick's "Advanced Compiler Design and Implementation" 
is *the* bible for optimizations, but it uses a very weird 
unimplemented language, so watch out for bugs.


"Optimizing Compilers for Modern Architectures: A 
Dependence-based Approach" is the only book I'm aware of that 
actually covers even the beginnings of modern loop optimizations 
thoroughly. Even this however is still somewhat set back by it 
being written 20 years ago, the principles are the same but the 
instinct is not i.e. memory latency is worse, ILP is much better.


What all of these books have in common, by the way, is that they 
were written at a time when it was assumed that x86 would go the 
way of the dodo. So there is a somewhat significant gap between 
"theory" and practice in some parts, as (say) x86 SIMD is quite 
different from how the authors of the aforementioned books 
expected the world to go.


Re: Development of the foundation of a programming language

2021-09-12 Thread max haughton via Digitalmars-d-learn

On Monday, 13 September 2021 at 00:53:06 UTC, leikang wrote:
I want to contribute to the development of the dlang language, 
but I feel that I am insufficient, so I want to ask the big 
guys, can I participate in the development of the Dlang 
language after learning the principles of compilation?


Yes. If you make a PR it should be and will be judged based on 
the code and only the code, not where or who it came from.


Re: do I incur a penality on compile time if I explicitly declare default behavior ?

2021-06-22 Thread max haughton via Digitalmars-d-learn

On Monday, 21 June 2021 at 04:12:55 UTC, someone wrote:

I mean, coding as following:

```d
int intWhatever = 0; /// default being zero anyway

foreach (classComputer objComputer; objComputers) { ... } /// explicitly declaring the type instead of letting the compiler figure it out

struct Whatever {

   public void doSomething() { ... } /// explicitly declaring scopes matching the default ones

}

string[] strWhatever;
if (strWhatever.length > cast(size_t) 1) { ... } /// explicitly casting to proper types although not required at all

```

... and the likes; or, besides unnecessary typing, are there 
any cons that I should be aware of while using DMD ?


Short answer: No

Longer answer: Still no but it does exercise different code in 
the compiler which you could measure if you were mad.


Re: Can't I allocate at descontructor?

2021-03-05 Thread Max Haughton via Digitalmars-d-learn

On Friday, 5 March 2021 at 20:13:54 UTC, Jack wrote:

On Friday, 5 March 2021 at 20:10:39 UTC, Max Haughton wrote:

On Friday, 5 March 2021 at 20:03:58 UTC, Jack wrote:

On Friday, 5 March 2021 at 09:23:29 UTC, Mike Parker wrote:

On Friday, 5 March 2021 at 05:31:38 UTC, Jack wrote:

[...]


https://dlang.org/blog/2021/03/04/symphony-of-destruction-structs-classes-and-the-gc-part-one/


Thanks for such a good article. So if the object was allocated 
on the heap, there's no guarantee that the object's destructor 
will be called at all? Are destructors of stack-allocated 
objects guaranteed to run?


Destructors of structs on the stack will always run 
deterministically.


But the ones on the heap may never run at all, is that right?


You can't rely on the garbage collector for deterministic 
destruction, no.


Re: Can't I allocate at descontructor?

2021-03-05 Thread Max Haughton via Digitalmars-d-learn

On Friday, 5 March 2021 at 20:03:58 UTC, Jack wrote:

On Friday, 5 March 2021 at 09:23:29 UTC, Mike Parker wrote:

On Friday, 5 March 2021 at 05:31:38 UTC, Jack wrote:
The following code returns a memory error. I noticed it happens 
whenever I do a memory allocation. Is this not possible in the 
destructor? If so, why?


https://dlang.org/blog/2021/03/04/symphony-of-destruction-structs-classes-and-the-gc-part-one/


Thanks for such a good article. So if the object was allocated 
on the heap, there's no guarantee that the object's destructor 
will be called at all? Are destructors of stack-allocated 
objects guaranteed to run?


Destructors of structs on the stack will always run 
deterministically.
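
A tiny illustration of the deterministic case (the heap/class 
case is the one with no such guarantee):

```d
import core.stdc.stdio : printf;

struct S
{
    ~this() { printf("S destroyed\n"); }
}

void main()
{
    {
        S s;
    }                          // destructor runs exactly here, every time
    printf("after the scope\n");
}
```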


Re: dub support for Mac M1?

2021-03-04 Thread Max Haughton via Digitalmars-d-learn

On Thursday, 4 March 2021 at 22:30:17 UTC, tastyminerals wrote:
I got a company MacBook with M1 chip and gradually migrate all 
the stuff from Linux machine. I got precompiled ldc binary 
installed without any problem now is the time for dub since I 
have couple of D projects I use at work and all of them use dub.


[...]


If someone with an M1 wants to get it working, a patch is 
appreciated, but I (for one) don't have one so I can't.


Re: How can I make this work?

2021-02-28 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 28 February 2021 at 09:18:56 UTC, Rumbu wrote:

On Sunday, 28 February 2021 at 09:04:49 UTC, Rumbu wrote:

On Sunday, 28 February 2021 at 07:05:27 UTC, Jack wrote:
I'm using a windows callback function where the user-defined 
value is passed thought a LPARAM argument type. I'd like to 
pass my D array then access it from that callback function. 
How is the casting from LPARAM to my type array done in that 
case?


for example, I need something like this to work:

int[] arr = [1, 2, 3];
long l = cast(long) cast(void*) arr.ptr;
int[] a = cast(int[]) cast(void*) l;


LPARAM is not long on 32 bits, it's int. Use LPARAM instead of 
long.


And you are passing only the address of the first element this 
way, losing the array/slice length. This should work, but keep 
in mind that you have no warranty that the array stays in 
memory and is not garbage collected:


int[] arr = [1, 2, 3];
LPARAM l = cast(LPARAM) cast(void*) &arr;
int[] a = *cast(int[]*)(cast(void*) l);


Do the windows APIs expect the length in memory rather than as a 
parameter?


Also Rumbu can you check your email - I may have emailed you on 
an old email address by accident, but it's about the blog and it 
will be from mh240@...


Re: Problem Computing Dot Product with mir

2021-02-22 Thread Max Haughton via Digitalmars-d-learn

On Monday, 22 February 2021 at 07:14:26 UTC, 9il wrote:
On Sunday, 21 February 2021 at 16:18:05 UTC, Kyle Ingraham 
wrote:
I am trying to convert sRGB pixel values to XYZ with mir using 
the following guide: 
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html


[...]


mir-glas is a deprecated experimental project. It is worth using 
mir-blas or lubeck instead. There is also a naming issue: in 
classic BLAS naming, dot refers to a function that accepts two 
1D vectors.


Deprecated as in formally dead or postponed?


Re: Mixed language projects (D and C++)

2021-02-19 Thread Max Haughton via Digitalmars-d-learn

On Friday, 19 February 2021 at 10:18:28 UTC, Preetpal wrote:

On Friday, 19 February 2021 at 10:01:36 UTC, Max Haughton wrote:

On Friday, 19 February 2021 at 09:44:15 UTC, Preetpal wrote:
I want to reuse existing C++ code in a new project that I am 
writing in D and I want to use D in an existing C++ code base 
(it is not large). I do not anticipate interop being an issue.


[...]


C++ interop is used every day. The LLVM D compiler, ldc, uses 
it to talk to LLVM efficiently.


There are good resources on it on this very website


I am looking for suggestions on what build system to use. I 
took a look at the LDC project, but it looks like they are 
writing their own CMake scripts in the repository itself to add 
D support to CMake. I was hoping there might be something others 
were using with out-of-the-box support for D.


I would keep it simple and use dub's preGenerateCommands step to 
run any old C++ build process, then link as usual.


Re: Mixed language projects (D and C++)

2021-02-19 Thread Max Haughton via Digitalmars-d-learn

On Friday, 19 February 2021 at 09:44:15 UTC, Preetpal wrote:
I want to reuse existing C++ code in a new project that I am 
writing in D and I want to use D in an existing C++ code base 
(it is not large). I do not anticipate interop being an issue.


[...]


C++ interop is used every day. The LLVM D compiler, ldc, uses it 
to talk to LLVM efficiently.


There are good resources on it on this very website


Re: Profiling

2021-02-10 Thread Max Haughton via Digitalmars-d-learn
On Wednesday, 10 February 2021 at 13:31:09 UTC, Guillaume Piolat 
wrote:

On Wednesday, 10 February 2021 at 11:52:51 UTC, JG wrote:

[...]


Here is what I use for sampling profiler:

(On Windows)

Build with LDC, x86_64, with dub -b release-debug in order to 
have debug info.

Run your program into:
- Intel Amplifier (free with System Studio)
- AMD CodeXL (more lightweight, and very good)
- Very Sleepy

(On Mac)

Build with dub -b release-debug
Run your program with Instruments.app which you can find in 
your Xcode.app


(On Linux)
I don't know.


Though most of the time, to validate the optimization, a 
comparison program that runs two similar programs and computes 
the speed difference can be needed.


All Intel tools I'm aware of support (and are free on) Linux. 
Also, it's just called vTune now and it's been put under the 
"oneAPI" banner.


Re: GC.addRange in pure function

2021-02-09 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 9 February 2021 at 19:53:27 UTC, Temtaime wrote:

On Sunday, 7 February 2021 at 14:13:18 UTC, vitamin wrote:
Why using 'new' is allowed in pure functions but calling 
GC.addRange or GC.removeRange isn't allowed?


pure is broken. Just don't [use it]



[Citation needed]


Re: My simple internet client made in Dlang.

2021-02-04 Thread Max Haughton via Digitalmars-d-learn

On Thursday, 4 February 2021 at 20:54:15 UTC, Ali Çehreli wrote:

On 2/3/21 8:44 AM, Marcone wrote:

[...]


I think the following would be improvements:

>[...]

I don't know the protocol but obviously 8192 must be sufficient.

[...]


Fewer calls to std.conv can also mean less surface area for 
exceptions to be thrown from.


Re: Can change vtbl record at runtime ?

2021-02-03 Thread Max Haughton via Digitalmars-d-learn
On Wednesday, 3 February 2021 at 05:30:37 UTC, Виталий Фадеев 
wrote:

Reason:
Reuse component,
bind custom callback without creating new class.

Concept example:
class SaveFilePopup
{
void onSuccess() { /* default operations */ }
}

auto saveFile = new SaveFilePopup();
saveFile.onSuccess = { /* New operations */ }

Delegate:
Maybe... but, for speed reasons, is it possible to set the 
default code at compile time?


class X
{
   void delegate() onSuccess = { /* default code */ };
}

Context:
GUI, components, callbacks

Is it possible to change the vtbl record at runtime?
Is there functionality for updating vtbl records?


Do you mean "Can I set onSuccess" at runtime? The virtual tables 
are relied upon by the compiler so I wouldn't play with them.
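
The delegate route from your own concept example avoids the 
vtable entirely; a minimal sketch, with the default handler 
installed in the constructor and swappable per instance at 
runtime:

```d
import std.stdio : writeln;

class SaveFilePopup
{
    void delegate() onSuccess;

    this()
    {
        onSuccess = delegate() { writeln("default operations"); };
    }
}

void main()
{
    auto saveFile = new SaveFilePopup();
    saveFile.onSuccess();                                          // default code
    saveFile.onSuccess = delegate() { writeln("new operations"); };
    saveFile.onSuccess();                                          // custom code
}
```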


Re: Refactoring tools

2021-02-03 Thread Max Haughton via Digitalmars-d-learn

On Wednesday, 3 February 2021 at 07:20:06 UTC, Imperatorn wrote:

As the title says:

1. Are there any (automated) refactoring tools for D?
2. If not, why? (Is D still too small?)


D is also designed to not need as much refactoring as other 
languages, so even for our size there isn't a huge amount of 
demand.


Previously the actual implementation would have been hard, but 
now that we have the compiler working fairly well as a library, 
it's just a question of profitability (be that money, fun, or 
social kudos).


Re: Minimize GC memory footprint

2021-01-31 Thread Max Haughton via Digitalmars-d-learn

On Saturday, 30 January 2021 at 16:42:35 UTC, frame wrote:
Is there a way to force the GC to re-use memory in already 
existing pools?


I set maxPoolSize:1 to get pools that can be released more 
quickly after they're no longer in use. This already reduces 
memory usage to 1:3. Sadly the application creates multiple 
pools that are not necessary in my POV - just fragmented 
temporary slice data, like from format(). What can I do to 
optimize?


I can't tell you much about the inner workings of the GC, but 
maybe take a look at std.experimental.allocator and see if 
anything there can help you (you understand your program better 
than the GC does, after all).
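
For example, a rough, untested sketch of carving short-lived 
buffers out of one reusable in-place region instead of leaving 
fragmented temporaries to the GC (the allocator choice is 
arbitrary):

```d
import std.experimental.allocator : makeArray;
import std.experimental.allocator.building_blocks.region : InSituRegion;

void main()
{
    InSituRegion!(64 * 1024) scratch;        // 64 KiB of in-place scratch space

    auto buf = scratch.makeArray!char(1024); // temporary buffer, no GC pool involved
    buf[0 .. 5] = "hello";

    // everything allocated from scratch vanishes together when it goes out of scope
}
```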


Re: Why many programmers don't like GC?

2021-01-15 Thread Max Haughton via Digitalmars-d-learn

On Friday, 15 January 2021 at 21:49:07 UTC, H. S. Teoh wrote:
On Fri, Jan 15, 2021 at 09:04:13PM +, welkam via 
Digitalmars-d-learn wrote:

[...]


As the joke goes, "you can write assembly code in any 
language". :-D  If you code in a sloppy way, it doesn't matter 
what language you write in, your program will still suck.  No 
amount of compiler magic will be able to help you.  The 
solution is not to blame this or that, it's to learn how to use 
what the language offers you effectively.




[...]


And with D, it's actually easy to do this, because D gives you 
tools like slices and by-value structs.  Having slices backed 
by the GC is actually a very powerful combination that people 
seem to overlook: it means you can freely refer to data by 
slicing the buffer.  Strings being slices, as opposed to 
null-terminated, is a big part of this.  In C, you cannot 
assume anything about how the memory of a buffer is managed 
(unless you allocated it yourself); as a result, in typical C 
code strcpy's, strdup's are everywhere.  Want a substring?  You 
can't null-terminate the parent string without affecting code 
that still depends on it; solution? strdup.  Want to store a 
string in some persistent data structure?  You can't be sure 
the pointer will still be valid (or that the contents pointed 
to won't change); solution? strdup, or strcpy.  Want to parse a 
string into words?  Either you modify it in-place (e.g. 
strtok), invalidating any other references to it, or you have 
to make new allocations of every segment.  GC or no GC, this 
will not lead to a good place, performance-wise.


I could not have written fastcsv if I had to work under the 
constraints of C's null-terminated strings under manual memory 
management.  Well, I *could*, but it would have taken 10x the 
amount of effort, and the API would be 5x uglier due to the 
memory management paraphernalia required to do this correctly 
in C.  And to support lazy range-based iteration would require 
a whole new set of API's in C just for that purpose.  In D, I 
can simply take slices of the input -- eliminating a whole 
bunch of copying.  And backed by the GC -- so the code doesn't 
have to be cluttered with memory management paraphernalia, but 
can have a simple, easy-to-use API compatible across a large 
range of use cases. Lazy iteration comes "for free", no need to 
introduce an entire new API. It's a win-win.


All that's really needed is for people to be willing to drop 
their C/C++/Java coding habits, and write D the way it's meant 
to be written: with preference for stack-allocated structs and 
by-value semantics, using class objects only for more 
persistent data. Use slices for maximum buffer reuse, avoid 
needless copying. Use compile-time introspection to generate 
code statically where possible instead of needlessly 
recomputing stuff at runtime.  Don't fear the GC; embrace it 
and use it to your advantage.  If it becomes a bottleneck, 
refactor that part of the code.  No need to rewrite the entire 
project the painful way; most of the time GC performance issues 
are localised and have relatively simple fixes.



T


I agree that the GC is useful, but it's a serious hindrance on the 
language that the only alternatives are really bad smart pointers 
(well written, but it's hard to know their overhead) and malloc and 
free. I don't mind using the GC for my own stuff, but it's too 
difficult to avoid at the moment for the times when it gets in the 
way.


I think the way forward is some robust move semantics and analysis 
like Rust's. I suppose ideally we would have some kind of hidden ARC 
behind the scenes, but I don't know how that would play with structs.


One more cynical argument for having a modern alternative is that 
the GC is a huge hindrance on the language's "coolness" with the 
next generation of programmers, and awareness is everything (most 
people won't have heard of D).
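
For anyone following along, a tiny illustration of the copy-free slicing 
described above (the string and the indices are made up):

```
import std.algorithm.iteration : splitter;
import std.stdio : writeln;

void main()
{
    string line = "alpha,beta,gamma";

    foreach (field; line.splitter(','))   // lazy; each field is a slice, not a copy
        writeln(field);

    auto middle = line[6 .. 10];          // "beta": no strdup/strcpy equivalent needed
    assert(middle == "beta");
}
```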


Re: DConf talk : Exceptions will disappear in the future?

2021-01-05 Thread Max Haughton via Digitalmars-d-learn
On Tuesday, 5 January 2021 at 19:42:40 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 5 January 2021 at 18:23:25 UTC, sighoya wrote:
No error handling model was the HIT and will never be, 
therefore I would recommend to leave things as they are and to 
develop alternatives and not to replace existing ones.


Or implement C++ exceptions, so that D can catch C++ exceptions 
transparently (ldc catch clang++ exceptions and gdc catch g++ 
exceptions).


Walter already got quite a lot of the way there on that. There 
are some PRs on dmd about it, but it's not in a state worth 
documenting yet, if it's still there at all (the tests are still 
in place, so I assume it still works).


Re: Get the code of any D-entity as string?

2020-12-26 Thread Max Haughton via Digitalmars-d-learn

On Friday, 25 December 2020 at 21:25:40 UTC, sighoya wrote:

I've read a bit in Dlang traits.

It now has the ability to retrieve all method signatures of an 
overload set.

Big plus from me.

Is generally possible to get the declaration of a 
type/module/value as string in traits?


I didn't have any concrete use case for it, but it would 
essentially allow us to reflect over any code (we may have to 
respect privacy, of course).


On top of that, people could write their own token or ast 
frameworks as library solutions.


Further, most of the trait functionality given in the trait 
library could be simply implemented as a library solution.


I don't know D enough to be sure it isn't yet possible. 
Theoretically, we could read all files in a source folder, but 
I want also to read declarations from types outside my source 
folders, e.g. from those located in dyn libs.


Not possible at the moment, although implementing it as a __trait 
would be about 15 lines, I think.


I think read-only AST access in some form would actually be quite 
nice, if not for dmd's AST being fuckugly (the hierarchy is fine, 
but it's more gcc than clang).
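
For reference, a rough sketch of what the existing traits already give you: you 
can enumerate the overloads and stringify their types, you just can't get the 
original source text back. The struct and method names are illustrative:

```
import std.stdio : writeln;

struct S
{
    void foo(int) {}
    void foo(string) {}
}

void main()
{
    // Unrolls at compile time over the overload set of S.foo.
    foreach (overload; __traits(getOverloads, S, "foo"))
        writeln(typeof(overload).stringof);   // prints each signature as a type
}
```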


Re: If statements and unused template parameters in Phobos documentation

2020-12-20 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 20 December 2020 at 13:51:08 UTC, Rekel wrote:
I found a lot of the Phobos documentation to contain template 
arguments and if statements that made no sense to me, for 
example:


```
 uint readf(alias format, A...) (
 auto ref A args
)
if (isSomeString!(typeof(format)));

uint readf(A...) (
 scope const(char)[] format,
 auto ref A args
);
``` https://dlang.org/library/std/stdio/file.readf.html

From stdio.readf & stdio.File.readf. I'm assuming this is some 
kind of template, but often it seems there are more parameters 
in the first '()' part than are ever given. Am I missing 
something? Additionally, what is that if statement for? It 
precedes nothing.


The if is a template constraint: that overload is only considered 
when the condition evaluates to true.
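
A minimal sketch of a constraint in action (not from Phobos; the function is 
made up for illustration):

```
import std.traits : isSomeString;

size_t letterCount(S)(S s)
if (isSomeString!S)           // the template constraint
{
    size_t n;
    foreach (dchar c; s)      // decode and count non-space characters
        if (c != ' ')
            ++n;
    return n;
}

void main()
{
    assert(letterCount("ab cd") == 4);
    // letterCount(42);       // error: the constraint rules out non-string types
}
```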


Re: low-latency GC

2020-12-06 Thread Max Haughton via Digitalmars-d-learn
On Sunday, 6 December 2020 at 11:35:17 UTC, Ola Fosheim Grostad 
wrote:

On Sunday, 6 December 2020 at 11:27:39 UTC, Max Haughton wrote:

[...]


No, unique doesnt need indirection, neither does ARC, we put 
the ref count at a negative offset.


shared_ptr is a fat pointer with the ref count as a separate 
object to support existing C libraries, and make weak_ptr easy 
to implement. But no need for indirection.



[...]


I think you need a new IR, but it does not have to be used for 
code gen, it can point back to the ast nodes that represent ARC 
pointer assignments.


One could probably translate the one used in Rust, even.


https://gcc.godbolt.org/z/bnbMeY


Re: low-latency GC

2020-12-06 Thread Max Haughton via Digitalmars-d-learn
On Sunday, 6 December 2020 at 11:07:50 UTC, Ola Fosheim Grostad 
wrote:

On Sunday, 6 December 2020 at 10:44:39 UTC, Max Haughton wrote:
On Sunday, 6 December 2020 at 05:29:37 UTC, Ola Fosheim 
Grostad wrote:
It has to be either some kind of heavily customisable small GC 
(i.e. with our resources the GC cannot please everyone), or 
ARC. The GC as it is just hurts the language.


Realistically, we probably need some kind of working group or 
at least serious discussion to really narrow down where to go 
in the future. The GC as it is now must go, we need borrowing 
to work with more than just pointers, etc.


The issue is that it can't just be done incrementally, it 
needs to be specified beforehand.


ARC can be done incrementally, we can do it as a library first 
and use a modified version existing GC for detecting failed 
borrows at runtime during testing.


But all libraries that use owning pointers need ownership to be 
made explicit.


A static borrow checker an ARC optimizer needs a high level IR 
though. A lot of work though.


ARC done as a library will have overhead unless the compiler/ABI is 
changed; e.g. unique_ptr in C++ ends up with an indirection.


The AST effectively is a high-level IR. Not a good one, but good 
enough. The system Walter has built shows the means are there in 
the compiler already.


As things are at the moment, the annotations we have for pointers 
like scope go a long way, but the language doesn't deal with 
things like borrowing structs (and the contents of structs i.e. 
making a safe vector) properly yet. That is what needs thinking 
about.


Re: low-latency GC

2020-12-06 Thread Max Haughton via Digitalmars-d-learn
On Sunday, 6 December 2020 at 05:29:37 UTC, Ola Fosheim Grostad 
wrote:

On Sunday, 6 December 2020 at 05:16:26 UTC, Bruce Carneal wrote:
How difficult would it be to add a, selectable, low-latency GC 
to dlang?


Is it closer to "we cant get there from here" or "no big deal 
if you already have the low-latency GC in hand"?


I've heard Walter mention performance issues (write barriers 
IIRC).  I'm also interested in the GC-flavor performance trade 
offs but here I'm just asking about feasibility.


The only reasonable option for D is single threaded GC or ARC.


It has to be either some kind of heavily customisable small GC 
(i.e. with our resources the GC cannot please everyone), or ARC. 
The GC as it is just hurts the language.


Realistically, we probably need some kind of working group or at 
least serious discussion to really narrow down where to go in the 
future. The GC as it is now must go, we need borrowing to work 
with more than just pointers, etc.


The issue is that it can't just be done incrementally, it needs 
to be specified beforehand.




Re: Doubts about the performance of array concatenation

2020-12-01 Thread Max Haughton via Digitalmars-d-learn

On Wednesday, 2 December 2020 at 00:08:55 UTC, ddcovery wrote:

On Tuesday, 1 December 2020 at 23:43:31 UTC, Max Haughton wrote:

On Tuesday, 1 December 2020 at 22:49:55 UTC, ddcovery wrote:
Yesterday I was really shocked when, comparing an algorithm 
written in JavaScript and the equivalent in D, the JavaScript 
version performed better!!!


[...]


Use ldc, rdmd can invoke it for you. DMD's optimizer is not 
even close to as advanced as a modern JavaScript engine's - 
that's why we use it for fast build times instead of 
performance.


Impressive performance increase:

$ ldc2 -O -release  --run sorted.d
# D
1.0M: 810 ms
1.5M: 1324 ms
3.0M: 2783 ms
6.0M: 5727 ms

Thanks Max


I've written a github issue on your repository with flags to get 
the most out of LLVM.


Re: Dude about ~ array concatenation performance

2020-12-01 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 1 December 2020 at 22:49:55 UTC, ddcovery wrote:
Yesterday I was really shocked when, comparing an algorithm 
written in JavaScript and the equivalent in D, the JavaScript 
version performed better!!!


[...]


Use ldc, rdmd can invoke it for you. DMD's optimizer is not even 
close to as advanced as a modern JavaScript engine's - that's why 
we use it for fast build times instead of performance.
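
Independent of the compiler choice, if the hot spot really is `~=` in a loop, 
std.array's Appender is the usual way to avoid repeated reallocation. A small 
sketch with a made-up function, not taken from the poster's code:

```
import std.array : appender;

int[] buildSquares(size_t n)
{
    auto app = appender!(int[])();
    app.reserve(n);                  // one up-front allocation
    foreach (i; 0 .. n)
        app ~= cast(int)(i * i);
    return app[];                    // the accumulated array
}

void main()
{
    assert(buildSquares(4) == [0, 1, 4, 9]);
}
```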


Re: How to unit-test a phobos module?

2020-11-25 Thread Max Haughton via Digitalmars-d-learn

On Wednesday, 25 November 2020 at 21:36:36 UTC, Q. Schroll wrote:
On Wednesday, 25 November 2020 at 21:16:15 UTC, Steven 
Schveighoffer wrote:

I typically do:

make -f posix.mak std/.test

-Steve


For some reason, [1] says `make.exe` would be installed by the 
DMD installer, but I found none. It explicitly says not to use 
GNU make. (I'm on Windows.)


[1] https://wiki.dlang.org/Building_under_Windows


The Digital Mars make is actually distributed with the C compiler.

Developing on Windows is a complete pain, e.g. setting things up so 
the build can find the linker and cl. For that reason I currently do 
all my dmd/phobos/druntime hacking inside WSL, because that way I can 
just press build (dmd has build.d, which is cross-platform, but 
druntime and Phobos both aren't fun to build on Windows unless you've 
done it recently).


Re: How to construct a tree data structure with differently static nodes types

2020-11-22 Thread Max Haughton via Digitalmars-d-learn
On Monday, 23 November 2020 at 01:20:04 UTC, data pulverizer 
wrote:

Hi all,

I am trying to construct a tree data structure composed of 
differently (statically) typed nodes. The basic case is a 
binary tree. So you have a node like:


```
struct Node(T)
{
  T value;
  Node* next;
  Node* prev;
}

void main()
{
  auto x = Node!(int)(2);
  auto y = Node!(double)(3.2);
  x.next = &y; //gives error
}
```
Error: cannot implicitly convert expression &y of type 
Node!double* to Node!int*


So implicity Node!(T) will produce an object with prev, and 
next type Node!(T)*. But once I give them different types:


```
struct Node(T, P, N)
{
  T value;
  Node!(P...)* prev;
  Node!(N...)* next;
}
```

I can no longer specify the types at all, they become 
circularly referenced. Would appreciate the solution to this.


Many thanks.


If you want to keep things simple, use OOP (classes).

If you need to use structs, the "sumtype" package may be just what 
you need (it's a bit more lightweight than Algebraic from 
std.variant in the standard library). If you want to implement this yourself then 
you need to write something called a tagged union.
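
A minimal sketch of the class-based route (the names are illustrative, not from 
the thread):

```
import std.stdio : writeln;

// A common base class lets nodes holding different value types link to each other.
abstract class Node
{
    Node prev;
    Node next;
}

class ValueNode(T) : Node
{
    T value;
    this(T value) { this.value = value; }
}

void main()
{
    auto x = new ValueNode!int(2);
    auto y = new ValueNode!double(3.2);

    x.next = y;          // fine: both are Nodes
    y.prev = x;

    // Downcast when the concrete value is needed again.
    if (auto n = cast(ValueNode!double) x.next)
        writeln(n.value);   // 3.2
}
```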


Re: Request our suggestio: better way to insert data from Array!string[string] into a database table

2020-11-16 Thread Max Haughton via Digitalmars-d-learn

On Monday, 16 November 2020 at 17:44:08 UTC, Vino wrote:

Hi All,

  Request your suggestion: we have a program which calls an API, 
the output of the API is parsed with a JSON parser and the result 
is stored in an array (Array!string[string] data), then these 
stored results are inserted into a MySQL table. For inserting the 
data into the table we use the code below; it is a small example 
which contains just 2 data items (Type, Hostname), but we have 
similar APIs which contain 15-20 data items. Hence, is there any 
better way than the code below? Using the current logic the 
foreach line will run into multiple lines, e.g.:


[...]


What are you looking to improve? Do you want to make the code 
prettier or faster?


It doesn't look too bad to my eye although my personal style 
would be to unpack t and h inside the foreach loop.


Re: presence of function template prevents diagnostic

2020-11-16 Thread Max Haughton via Digitalmars-d-learn
On Monday, 16 November 2020 at 17:03:32 UTC, Steven Schveighoffer 
wrote:

On 11/14/20 5:44 PM, kdevel wrote:


$ dmd -version=X -i foo
$ ./foo
void A.bar(int s)

Is the latter behavior intended or a bug?


That seems like a bug. It shouldn't be less ambiguous because 
you *added* an overload that can also handle it...


-Steve


It could be technically kosher because of how template overload 
lookup works, but I'm not sure the behaviour is actually specified 
in the spec. It should probably be filed as a bug.


Re: Why was new(size_t s) { } deprecated in favor of an external allocator?

2020-10-15 Thread Max Haughton via Digitalmars-d-learn
On Thursday, 15 October 2020 at 06:39:00 UTC, Patrick Schluter 
wrote:
On Wednesday, 14 October 2020 at 20:32:51 UTC, Max Haughton 
wrote:

On Wednesday, 14 October 2020 at 20:27:10 UTC, Jack wrote:

What was the reasoning behind this decision?


Andrei's std::allocator talk from a few years ago at cppcon 
covers this (amongst other things)


Yes, and what did he say?
You seriously don't expect people to search for a random talk 
from a random event from a random year?


It's the first result when you search for "Andrei std::allocator"


Re: Why was new(size_t s) { } deprecated in favor of an external allocator?

2020-10-14 Thread Max Haughton via Digitalmars-d-learn

On Wednesday, 14 October 2020 at 20:27:10 UTC, Jack wrote:

What was the reasoning behind this decision?


Andrei's std::allocator talk from a few years ago at cppcon 
covers this (amongst other things)


Re: Range format specifiers in other languages?

2020-10-11 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 11 October 2020 at 23:57:31 UTC, Ali Çehreli wrote:

I find D's %( and %) range format specifiers very useful:

import std.stdio;
import std.range;

void main() {
  5.iota.writefln!"%(%s, %)";  // Prints 0, 1, 2, 3, 4
}

Are there similar features in other languages?

Thank you,
Ali


I think Rust can do something similar with struct pretty-printing. 
The syntax has curly braces in it, but I can't recall it right now.


Possibly worth showing off (especially given that some people don't 
even know at first that the templated format string exists).
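
A few variations worth showing off, then (illustrative, building on the example 
in the question):

```
import std.range : iota;
import std.stdio : writefln;

void main()
{
    5.iota.writefln!"%(%s, %)";                 // 0, 1, 2, 3, 4
    5.iota.writefln!"[%(%s; %)]";               // [0; 1; 2; 3; 4]

    // Nested ranges nest the specifiers; with the format string as a template
    // argument, many mistakes in it are caught at compile time.
    [[1, 2], [3, 4]].writefln!"%(%(%s %)\n%)";  // prints "1 2" and "3 4" on separate lines
}
```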


Re: Taking arguments by value or by reference

2020-10-04 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 4 October 2020 at 14:26:43 UTC, Anonymouse wrote:

On Saturday, 3 October 2020 at 23:47:32 UTC, Max Haughton wrote:
The guiding principle for your function parameters should be 
correctness: if I want to take ownership of a big struct I 
probably want to take it by value, but if I want to modify it I 
should take it by reference (or by pointer, but don't 
overcomplicate things; notice in the previous example they lower 
to the same thing). If I just want to look at it, it should be 
taken by const ref if possible (D const isn't the same as C++ 
const, which may catch you out).


Const-correctness is a rule to live by, especially with a big, 
unwieldy struct.


I would avoid the new in for now, but I would go with const 
ref from what you've described so far.


I mostly really only want a read-only view of the struct, and 
whether a copy was done or not is academic. However, profiling 
showed (what I interpret as) a lot of copying being done in 
release builds specifically.


https://i.imgur.com/JJzh4Zc.jpg

Naturally a situation where I need ref I'd use ref, and in the 
rare cases where it actually helps to have a mutable copy 
directly I take it mutable. But if I understand what you're 
saying, and ignoring --preview=in, you'd recommend I use const 
ref where I would otherwise use const?


Is there some criteria I can go by when making this decision, 
or does it always reduce to looking at the disassembly?


This is a skill you only really hone with experience, but it's not 
too bad once you're used to it.


For a big struct, I would just stick to expressing what you want 
it to *do* rather than how you want it to perform. If you want to 
take ownership you basically have to take by value, but if you 
(as you said) want a read only view definitely const ref. If I 
was reading your code, ref immediately tells me not to think 
about ownership and const ref immediately tells me you just want 
to look at the goods.


One thing I haven't mentioned so far is that not all types have 
trivial semantics when it comes to passing them around by value, 
so if you are writing generic code it is often best to avoid 
gratuitous copies.
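
As a rough rule-of-thumb in code form (the struct and functions are made up, 
standing in for the large struct being discussed):

```
struct Settings                    // stand-in for the large struct in question
{
    string[24] fields;
}

// Takes ownership of its own copy: by value.
void store(Settings s) { /* keep s somewhere */ }

// Read-only view, no copy: const ref.
size_t populated(const ref Settings s)
{
    size_t n;
    foreach (f; s.fields)
        if (f.length) ++n;
    return n;
}

// Mutates the caller's instance: ref.
void clear(ref Settings s) { s = Settings.init; }

void main()
{
    Settings s;
    s.fields[0] = "a";
    assert(populated(s) == 1);
    clear(s);
    store(s);
}
```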


Re: Taking arguments by value or by reference

2020-10-03 Thread Max Haughton via Digitalmars-d-learn

On Saturday, 3 October 2020 at 23:00:46 UTC, Anonymouse wrote:
I'm passing structs around (collections of strings) whose 
.sizeof returns 432.


The readme for 2.094.0 includes the following:

This release reworks the meaning of in to properly support all 
those use cases. in parameters will now be passed by reference 
when optimal, [...]


* Otherwise, if the type's size requires it, it will be passed 
by reference.
Currently, types which are over twice the machine word size 
will be passed by
reference, however this is controlled by the backend and can 
be changed based

on the platform's ABI.


However, I asked in #d a while ago and was told to always pass 
by value until it breaks, and only then resort to ref.


[18:32:16]  at what point should I start passing my 
structs by ref rather than by value? some are nested in 
others, so sizeofs range between 120 and 620UL

[18:33:43]  when you start getting stack overflows
[18:39:09]  so if I don't need ref for the references, 
there's no inherent merit to it unless I get in trouble 
without it?

[18:39:20]  pretty much
[18:40:16]  in many cases the copying is merely 
theoretical and doesn't actually happen when optimized


I've so far just been using const parameters. What should I be 
using?


Firstly, the new in semantics are very new and possibly subtly 
broken (take a look at the current thread in general).


Secondly, as to the more specific question of how to pass a big 
struct around, it may be helpful to look at this quick godbolt 
example (https://d.godbolt.org/z/nPvTWz). Pay attention to the 
instructions writing to stack memory (or not). A struct that big 
will be passed around on the stack; whether it gets copied or not 
depends on the semantics of the struct, etc.


The guiding principle for your function parameters should be 
correctness: if I want to take ownership of a big struct I probably 
want to take it by value, but if I want to modify it I should take 
it by reference (or by pointer, but don't overcomplicate things; 
notice in the previous example they lower to the same thing). If I 
just want to look at it, it should be taken by const ref if 
possible (D const isn't the same as C++ const, which may catch you 
out).


Const-correctness is a rule to live by, especially with a big, 
unwieldy struct.


I would avoid the new in for now, but I would go with const ref 
from what you've described so far.




Re: Question about publishing a useful function I have written

2020-07-14 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 14 July 2020 at 21:58:49 UTC, Cecil Ward wrote:
I have written something which may or may not be novel and I’m 
wondering about how to distribute it to as many users as 
possible, hoping others will find it useful. What’s the best 
way to publish a D routine ?


[...]


GitHub is the best place to publish code. Does GDC actually use 
the optimization? I tried something like that before but I 
couldn't seem to get it to work properly.





Re: Passing iterators into functions

2020-06-24 Thread Max Haughton via Digitalmars-d-learn

On Thursday, 25 June 2020 at 03:35:00 UTC, repr-man wrote:

I have the code:

int[5] a = [0, 1, 2, 3, 4];
int[5] b = [5, 6, 7, 8, 9];
auto x = chain(a[], b[]).chunks(5);
writeln(x);

It produces a range of slices as is expected: [[0, 1, 2, 3, 4], 
[5, 6, 7, 8, 9]]


However, when I define a function as follows and pass in the 
result of the chain iterator:


auto func(R)(R r, size_t width)
if(isRandomAccessRange!R)
{
return r.chunks(width);
}

void main()
{
int[5] a = [0, 1, 2, 3, 4];
int[5] b = [5, 6, 7, 8, 9];
auto x = func!(int[])(chain(a[], b[]), 5);
writeln(x);
}

It gives me an error along the lines of:
Error: func!(int[]).func(int[] r, ulong width) is not callable 
using argument types (Result, int)
   cannot pass argument chain(a[], b[]) of type Result to 
parameter int[] r


I was hoping it would return the same result as the first 
program.


This seems to have to do with the fact that all iterators 
return their own unique type.  Could someone help me understand 
the reason behind this design and how to remedy my situation?


chain returns a range, not an int[]. You need to either convert 
the range to an array via .array, or let the compiler infer func's 
type parameter for you (you'll need to import std.range to have 
the range primitives available).


mhh
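
Spelled out, a sketch of both options (same func as in the question, with 
assumed imports):

```
import std.array : array;
import std.range;
import std.stdio : writeln;

auto func(R)(R r, size_t width)
if (isRandomAccessRange!R)
{
    return r.chunks(width);
}

void main()
{
    int[5] a = [0, 1, 2, 3, 4];
    int[5] b = [5, 6, 7, 8, 9];

    // Let the compiler infer R as chain's (voldemort) result type:
    writeln(func(chain(a[], b[]), 5));          // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]

    // Or eagerly copy into an int[] first if you really want func!(int[]):
    writeln(func!(int[])(chain(a[], b[]).array, 5));
}
```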


Re: dmd memory usage

2019-11-18 Thread Max Haughton via Digitalmars-d-learn
On Monday, 18 November 2019 at 00:20:12 UTC, Steven Schveighoffer 
wrote:
I'm fighting some out of memory problems using DMD and some 
super-template heavy code.


I have ideas on how to improve the situation, but it involves 
redesigning a large portion of the design. I want to do it 
incrementally, but I need to see things improving.


Is there a straightforward way to figure out how much memory 
the compiler uses during compilation? I though maybe 
/usr/bin/time, but I feel like I don't trust the output to be 
the true max resident size to be what I'm looking for (or that 
it's 100% accurate). Is there a sure-fire way to have DMD print 
it's footprint?


-Steve


Massif is good for this. ms_print will give you a graphical 
summary, and the data is human and machine readable.


The only setback is that Massif can make the execution slower; I 
can't give exact numbers, but it can be terrible.


Re: Execute certain Tests?

2019-11-02 Thread Max Haughton via Digitalmars-d-learn

On Saturday, 2 November 2019 at 17:27:14 UTC, Martin Brezel wrote:

Is there a trick to execute only the tests defined in one file?
Or the tests of a certain module?

Or in general: how do I avoid executing all the tests when 
running "dub test"?

It doesn't have to be dub, though.


Not by default. If you know the language well you can use UDAs to 
write a fancy test runner in ten minutes; if not, use Atila's 
unit-threaded (or similar).



Max
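
For the UDA route, a very rough sketch of a selective runner (the UDA name and 
layout are made up; build with -unittest so getUnitTests sees the tests):

```
import std.stdio : writeln;
import std.traits : hasUDA;

enum tagged;                                    // marker UDA

@tagged unittest { assert(2 + 2 == 4); }
unittest { /* some slow test we don't want to run here */ }

void main()
{
    // Walk this module's unittests and only call the tagged ones.
    foreach (test; __traits(getUnitTests, __traits(parent, main)))
    {
        static if (hasUDA!(test, tagged))
        {
            writeln("running ", __traits(identifier, test));
            test();
        }
    }
}
```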


Re: On D's garbage collection

2019-10-08 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 8 October 2019 at 16:28:51 UTC, Marcel wrote:
I've been thinking about using D in conjunction with C11 to 
develop a set of applications with hard real-time requirements. 
While initially the goal was to use C++ instead, it has become 
clear that D's introspection facilities will offer significant 
advantages. Unfortunately, the project will heavily rely on 
custom memory allocators written in C, so the presence of 
garbage collection in the language is a problem. While I'm 
aware that the nogc attribute exists, I haven't actually seen a 
way to apply it to a whole project. Is this possible?


Do you want to write D code that just doesn't use the GC, or do you 
want to avoid the whole runtime?


If the former, use @nogc at the entry point of your D code (this 
means that, say, main cannot call anything non-@nogc and therefore 
guarantees the program is @nogc); if the latter, use -betterC.


IMO, if the interface to your memory allocators is stable, then 
just link with them and write the whole thing in D (interfacing 
with C is a solved problem, but C is just awful compared to the 
features you get for free in D).
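
A minimal sketch of the first option, just to show the transitivity of @nogc:

```
@nogc void main()
{
    process();
    // auto a = new int[](16);   // would not compile: allocates with the GC
}

@nogc void process()
{
    // anything called from here must be @nogc as well
}
```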


Re: how to definition a non-const pointer that point a const var.

2019-08-24 Thread Max Haughton via Digitalmars-d-learn
On Saturday, 24 August 2019 at 05:03:43 UTC, Jonathan M Davis 
wrote:
On Friday, August 23, 2019 10:14:56 PM MDT lili via 
Digitalmars-d-learn wrote:

Hi:
   In C we can write const int *ncp_to_cv;
or int * const cp_to_ncv;
   How do we do this in D?


D uses parens to restrict how much of the type is const.

const int* - const pointer to const int
const(int*) - const pointer to const int
const(int)* - mutable pointer to const int

Similarly,

const(int*)* - mutable pointer to const pointer to const int
const(int)** - mutable pointer to mutable pointer to const int

D's const is transitive, so it's not possible to have a const 
pointer to a mutable type.


- Jonathan M Davis


As for const pointers to mutable types, it can be done in a 
library (Final in std.typecons). I don't know what the overhead 
is, but I imagine it wraps the pointer in a struct.
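
A small example of the forms listed above (variable names are made up):

```
void main()
{
    int x = 42;

    const(int)* p = &x;   // mutable pointer to const int
    p = null;             // ok: the pointer itself can be reassigned
    // *p = 1;            // error: cannot modify const data

    const(int*) q = &x;   // const pointer to const int (same as `const int* q`)
    // q = null;          // error: q itself is const
}
```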




Re: How should I sort a doubly linked list the D way?

2019-08-13 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 13 August 2019 at 18:54:58 UTC, H. S. Teoh wrote:
On Tue, Aug 13, 2019 at 11:28:35AM -0700, Ali Çehreli via 
Digitalmars-d-learn wrote: [...]
Summary: Ditch the linked list and put the elements into an 
array. :)

[...]

+1.  The linked list may have been faster 20 years ago, before 
the advent of modern CPUs with caching hierarchies and memory 
access predictors.


These days, with CPU multi-level caches and memory access 
predictors, in-place arrays are often the best option for 
performance, up to a certain size.  In general, avoid 
indirection where possible, in order to avoid defeating the 
memory access predictor and reduce the number of cache misses.



T


I saw a Bjarne Stroustrup talk where he benchmarked that, for 
n > 1, std::vector was a lot faster than a linked list for all 
supported operations. I don't know how clever the caching 
strategies on a modern processor are about pointer chasing, but to 
my knowledge the only way of getting a cache-efficient linked list 
would be to use a very contiguous allocator (which obviously 
defeats the purpose of using a list in the first place).


Found it: https://www.youtube.com/watch?v=YQs6IC-vgmo
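
In practice, the "ditch the list" advice looks something like this (illustrative 
values, not from the thread):

```
import std.algorithm.sorting : sort;
import std.array : array;
import std.container.dlist : DList;
import std.stdio : writeln;

void main()
{
    auto list = DList!int(3, 1, 2);

    // Pull the elements into a contiguous array and sort that; only go back
    // to a list if one is genuinely still needed afterwards.
    auto elems = list[].array;
    elems.sort();
    writeln(elems);              // [1, 2, 3]

    list.clear();
    list.insertBack(elems);      // rebuild the list from the sorted array
}
```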


Re: Desktop app with vibe.d

2019-08-12 Thread Max Haughton via Digitalmars-d-learn

On Monday, 12 August 2019 at 10:41:57 UTC, GreatSam4sure wrote:
Please, I want to know if it is possible to build a desktop app 
with vibe.d, just like with Node.js. I am not satisfied with the 
GUI libraries for D, such as dlangui and gtkd. I don't think they 
have good styling capabilities like HTML and CSS.


I will be happy if I can build an app in D with a fanciful UI. I 
will also be happy if you know any other way to build a fanciful 
UI in D, like Adobe Flex, JavaFX, etc.


vibe.d is a backend framework so unless you ran it and had the 
user connect via a web browser I don't think so.


Honestly, avoid web technology on the desktop like the plague. 
It's slow and memory hogging.


Re: Question about ubyte x overflow, any safe way?

2019-08-04 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 4 August 2019 at 18:22:30 UTC, matheus wrote:

On Sunday, 4 August 2019 at 18:15:30 UTC, Max Haughton wrote:
What do you want to do? If you just want to count to 255 then 
use a foreach


This was just an example; what I'd like in this code is to get an 
error (exception) on overflow, or even a warning (only if "some" 
flag was active).


If you want to prevent overflow you must either use BigInt or 
wrap ubyte in a struct that doesn't allow overflow


Could you please elaborate about this struct wrapping? Do you 
mean manually check on change?


Matheus.


std.experimental.checkedint may be exactly what you are looking for.
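
A rough sketch of what that looks like; the hook choice here (Saturate) is just 
one option, and the module was later promoted to std.checkedint:

```
import std.experimental.checkedint : checked, Saturate;

void main()
{
    auto u = checked!Saturate(cast(ubyte) 250);
    foreach (_; 0 .. 10)
        ++u;                  // sticks at 255 instead of wrapping around
    assert(u == 255);

    // checked!Abort (the default) would instead abort the program on overflow,
    // and there is also a Throw hook that raises an exception.
}
```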




Re: Question about ubyte x overflow, any safe way?

2019-08-04 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 4 August 2019 at 18:12:48 UTC, matheus wrote:

Hi,

The snippet below will produce an "infinite loop" because 
obviously "ubyte u" will overflow after 255:


import std.stdio;
void main(){
ubyte u = 250;
for(;u<256;++u){
writeln(u);
}
}

Question: Is there a way (Flag) to prevent this?

Matheus.


What do you want to do? If you just want to count to 255 then use 
a foreach



If you want to prevent overflow you must either use BigInt or 
wrap ubyte in a struct that doesn't allow overflow
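
For the counting case, a tiny sketch of the foreach route: keep the loop index 
in a wider type so the bound 256 is representable, and narrow only where the 
ubyte value is needed.

```
import std.stdio : writeln;

void main()
{
    foreach (i; 250 .. 256)          // i is int, so the loop actually terminates
        writeln(cast(ubyte) i);      // 250 .. 255
}
```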


Re: Calling / running / executing .d script from another .d script

2019-07-28 Thread Max Haughton via Digitalmars-d-learn

On Sunday, 28 July 2019 at 12:56:12 UTC, BoQsc wrote:
Right now, I'm wondering what the correct way is to run another .d 
script from a .d script. Do you have any suggestions?


You'd need to bring a compiler with you and then build the script 
into a shared library (then dlopen it).


To do this you'd need a clear API defined in the file to be 
compiled, so you can call into it. If you want to use druntime in 
said file, you need to use Runtime.loadLibrary (in core.runtime) 
instead of a raw dlopen.
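
A rough POSIX-flavoured sketch of that flow; the library name and entryPoint 
symbol are made up, and the script would have been built beforehand with 
something like `dmd -shared -fPIC script.d -of=libscript.so`:

```
import core.runtime : Runtime;
import core.sys.posix.dlfcn : dlsym;
import std.exception : enforce;

void main()
{
    // loadLibrary also initialises druntime inside the loaded library.
    void* lib = Runtime.loadLibrary("./libscript.so");
    enforce(lib !is null, "could not load libscript.so");
    scope (exit) Runtime.unloadLibrary(lib);

    alias Entry = extern (C) int function();
    auto entry = cast(Entry) dlsym(lib, "entryPoint");  // the agreed-upon API
    enforce(entry !is null, "entryPoint not found");
    entry();
}
```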


Re: Is betterC affect to compile time?

2019-07-26 Thread Max Haughton via Digitalmars-d-learn

On Thursday, 25 July 2019 at 14:20:03 UTC, Ali Çehreli wrote:

On 07/25/2019 05:46 AM, Oleg B wrote:
On Thursday, 25 July 2019 at 12:34:15 UTC, rikki cattermole 
wrote:

Those restrictions don't stop at runtime.


It's very sad.

What is the reason for such restrictions? Is it a fundamental idea 
or a temporary implementation detail?


It looks like a bug to me.

Ali


If the spec is to be believed then it is.

I filed a Bugzilla issue: https://issues.dlang.org/show_bug.cgi?id=20086


Re: Is there a way to bypass the file and line into D assert function ?

2019-07-19 Thread Max Haughton via Digitalmars-d-learn

On Friday, 19 July 2019 at 15:30:25 UTC, Newbie2019 wrote:

for example:

void ASSERT(string fmt, string file = __FILE_FULL_PATH__, 
size_t line = __LINE__, T...) (bool c, scope T a)  @nogc {

   assert(c, string, file, line);
}

but I get this error:

error.d(39): Error: found file when expecting )
error.d(39): Error: found ) when expecting ; following statement
error.d(39): Deprecation: use { } for an empty statement, not ;

I want D to print the error message with some format information, 
and show the right file and line for the original location.


Is it doable ?


Isn't assert a template (taking file and line) rather than a plain 
function call?


If worst comes to worst, you can provide your own _d_assert(?) and 
override object.d, then just call the C assert.
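
For the snippet in the question, the usual pattern is to keep file/line as 
defaulted template parameters (as it already does), so they are evaluated at 
the call site, and then raise the AssertError yourself. A sketch that ignores 
the @nogc requirement, since std.format allocates:

```
import core.exception : AssertError;
import std.format : format;

void ASSERT(string fmt, string file = __FILE__, size_t line = __LINE__, Args...)
           (bool cond, Args args)
{
    if (!cond)
        throw new AssertError(format!fmt(args), file, line);
}

void main()
{
    int x = 4;
    ASSERT!"x should be even, got %s"(x % 2 == 0, x);
    // With an odd x this would throw an AssertError pointing at this file/line.
}
```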


Re: How to use std.windows.registry, there are no documentations.

2019-07-11 Thread Max Haughton via Digitalmars-d-learn

On Thursday, 11 July 2019 at 08:53:35 UTC, BoQsc wrote:

https://dlang.org/phobos/std_windows_registry.html
https://github.com/dlang/phobos/blob/master/std/windows/registry.d

Can someone provide some examples on how to:
set, change, receive something from the Windows registry using 
Phobos std.windows.registry library?


I didn't know that existed. The source code is documented, but not 
in a style that the doc generator recognizes, as far as I can tell.


Re: Memory allocation failed in CT

2019-07-09 Thread Max Haughton via Digitalmars-d-learn
Is this a 64-bit or 32-bit compiler? Also, could you post the 
source code if possible?


You could try the "--DRT-gcopt=profile:1" druntime flag to see if 
the compiler is running out of memory for real.


Re: Memory allocation failed in CT

2019-07-09 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 9 July 2019 at 17:48:52 UTC, Andrey wrote:

Hello,
I have got a problem with compile-time calulations.
I have some code generator that should create some long string 
of code during CT and after generation I mixin it. If I run it 
normally - in run time - then there is no error and I get 
expected output - string with size ~ 3.5 MB.

If I run it in CT then I receive an error:

[...]


I don't understand why...
The only operation in my generator is string concatenation: 
_result ~= "some code...".


Are you using the -lowmem flag? It enables the GC inside the 
compiler; without it you might simply be running out of memory 
(CTFE is not efficient with memory during evaluation).


Re: Why are immutable array literals heap allocated?

2019-07-05 Thread Max Haughton via Digitalmars-d-learn

On Friday, 5 July 2019 at 16:25:10 UTC, Nick Treleaven wrote:

On Thursday, 4 July 2019 at 11:06:36 UTC, Eugene Wissner wrote:

static immutable arr = [1, 2];

You have to spell it out that the data is static.


Yes, I was wondering why the compiler doesn't statically 
allocate it automatically as an optimization.


LDC might be able to optimize it away, but by default it's 
heap-allocated; I imagine that's for thread safety.


Re: Transform a function's body into a string for mixing in

2019-06-20 Thread Max Haughton via Digitalmars-d-learn

On Thursday, 20 June 2019 at 19:09:11 UTC, Emmanuelle wrote:

Hello!

Is there any trait or Phobos function for transforming a 
function/delegate/lambda/whatever's body into a string suitable 
for `mixin(...)`? For example:


---
__traits(getBody, (int a, int b) => a + b); // returns "(int a, 
int b) => a + b"
// or maybe just "a 
+ b"

---

If not, is there any way to do this _without_ using strings? 
They are very inconvenient and could hide errors.


Thanks!


We don't have anything AST-macro-ish, or a trait as described.

Such a trait isn't impossible to implement, but I could imagine it 
being a nightmare for compile times.


Re: DIP 1016 and const ref parameters

2019-06-19 Thread Max Haughton via Digitalmars-d-learn
On Wednesday, 19 June 2019 at 19:25:59 UTC, Jonathan M Davis 
wrote:
On Wednesday, June 19, 2019 12:28:12 PM MDT XavierAP via 
Digitalmars-d-learn wrote:

[...]


The DIPs are here: https://github.com/dlang/DIPs

[...]


DIP1014 has not been implemented in DMD or druntime yet, AFAIK


Re: How to compile my DMD fork?

2019-06-14 Thread Max Haughton via Digitalmars-d-learn

On Friday, 14 June 2019 at 18:07:11 UTC, Q. Schroll wrote:
Basically the headline. I want to try to implement my DIP. I've 
already forked DMD from GitHub. Now, what would I have to do in 
order to get a D compiler with my changes?


I have Windows on x86-64 and Visual Studio on my machine.


It might just be a quirk of my install (but it happened recently 
enough that it's worth mentioning): if you get a really cryptic 
error message about some msbuild DLL being missing, you need to 
find the corresponding configuration option and manually point it 
at the DLL that is actually present.


Re: Does slicing have an effect?

2019-05-25 Thread Max Haughton via Digitalmars-d-learn

On Tuesday, 21 May 2019 at 20:31:58 UTC, Dennis wrote:
I was replacing a memcpy with a slice assignment and 
accidentally used == instead of =.

Usually the compiler protects me from mistakes like that:

```
int[4] a;
a == a;
```
Error: a == a has no effect

However, because I was using slices it didn't:
```
int[4] a;
a[] == a[];
```
No errors

Does slicing have an effect I'm not aware of, or is this a bug?


It calls into druntime (https://d.godbolt.org/z/3gS3-E), so 
technically it does have an effect, even if that effect is 
completely unused and therefore optimized away. Whether this 
should be considered an effect or not is up to you.

