Re: Dynamically binding to D code using extern(D)

2021-09-30 Thread Mike Parker via Digitalmars-d-learn

On Thursday, 30 September 2021 at 22:30:30 UTC, jfondren wrote:

3. dynamic linking (option 2), performed arbitrarily at 
runtime, by your program. If linking fails, you can do whatever 
you want about that.


That's actually "dynamic loading".

https://en.wikipedia.org/wiki/Dynamic_loading


Re: Dynamically binding to D code using extern(D)

2021-09-30 Thread Hipreme via Digitalmars-d-learn
Okay, I do agree with you that I may have exaggerated with 
"absolute intuitiveness", but I was talking about the 
intuitiveness of loading a symbol from a shared library.



You're limited to using C's types
- I don't think I understood what you meant by that; if the 
data type is known beforehand, it is possible to just declare it 
on the other side



On Thursday, 30 September 2021 at 22:30:30 UTC, jfondren wrote:
- you can't use overloading, lazy parameters, default values; 
you can't rely on scope parameters, etc., etc.


- That seems to be much more of a problem for dynamically 
loading a function, although default values can be mirrored in 
the D API.



- you can't casually hand over GC-allocated data and expect the 
other side to handle it right, or structs with lifetime 
functions that you expect to be called


- That is another problem that doesn't seem related to external 
linkage either; handling GC-allocated data with extern(D) 
doesn't stop it from being garbage collected. I'm fixing that 
kind of error again right now.


separate applications and some form of interprocess 
communication (pipes, unix sockets, TCP sockets) instead of 
function calls.


- I'm pretty interested in how to make that work, but I think 
it would change a lot about how I'm designing my code; done that 
way, it would probably become completely data oriented, right?


Re: Dynamically binding to D code using extern(D)

2021-09-30 Thread jfondren via Digitalmars-d-learn

On Thursday, 30 September 2021 at 18:09:46 UTC, Hipreme wrote:
I write this post as both a learning tool, a question and an 
inquiry.


There are just a lot of drawbacks in trying to do function 
exporting while using D.


The terms that people use are a bit sloppy. There are three kinds 
of 'linking' here:


1. static linking, performed during compilation, once. If linking 
fails, the compilation fails.
2. dynamic linking (option 1), performed when an executable 
starts up, before your program gains control, by the system 
linker. If linking fails, your program never gets control.
3. dynamic linking (option 2), performed arbitrarily at runtime, 
by your program. If linking fails, you can do whatever you want 
about that.


All of the loadSymbol and 'userdata module' hassle that you're 
frustrated by is from option 2. Option 1 is really the normal way 
to link large shared libraries and there's nothing to it. What 
your code looks like that loads a shared library is just `import 
biglib;`, and the rest of the work is in dub, pkg-config, 
`LD_LIBRARY_PATH`, etc. Phobos is commonly linked in this way.


Pretty much anything that isn't a plugin in a plugin directory 
can use option 1 instead of option 2.



extern(C) advantages:

- Code callable from any language as it is absolutely intuitive
- Well documented



You can call scalding water 'hot' even when you're fresh from 
observing a lava flow. People still find the C ABI frustrating in 
a lot of ways, and especially when they encounter it for the 
first time.


But the C ABI rules the world right now, yes. The real advantages 
are


- it 'never' changes
- 'everyone' already makes it easy to use


extern(C) disadvantages:

- You will need to declare your function pointer as extern(C) 
or it will swap the argument order.


- you're limited to using C's types
- you can't use overloading, lazy parameters, default values; you 
can't rely on scope parameters, etc., etc.
- you can't casually hand over GC-allocated data and expect the 
other side to handle it right, or structs with lifetime functions 
that you expect to be called
- very little of importance is statically checked: to use a C ABI 
right, you need to very carefully read documentation (which has 
to exist in the first place) just to know who is expected to 
clean up a pointer, how, and how large buffers should be. (I 
wasn't feeling a lot of the C ABI's "absolute intuitiveness" 
when I was passing libpcre an ovector sized to the number of 
pairs I wanted back rather than the correct `pairs*3/2`.)


Option 2 dynamic linking of D libraries sounds pretty 
frustrating. Even with a plugin architecture, maybe I'd prefer 
just recompiling the application each time the plugins change to 
retain option 1 dynamic linking. Using a C ABI instead is a good 
idea if just to play nice with other languages.


And if you were wanting something like untrusted plugins, a way 
to respond to a segfault in a plugin, like I think you mentioned 
in Discord, then I'd still suggest not linking at all but having 
separate applications and some form of interprocess communication 
(pipes, unix sockets, TCP sockets) instead of function calls. 
This is something that you could design, or with D's reflection, 
generate code for against the function calls you already have. 
But this is even more work that you'll have to do. If we add "a 
separate process telling you what to do with some kind of 
protocol" as a fourth kind of linking, then the respective effort 
is


1. free! it compiles, it's probably good!
2. free! if the program starts, it's probably good!
3. wow, why don't you just write your own loadSymbol DSL?
4. wow, why don't you just reimplement Erlang/OTP and call it 
std.distributed? maybe protobufs will be enough.
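To make option 3 concrete, here is a hedged, self-contained sketch (POSIX-only; the `libm.so.6` soname and the choice of a C-ABI symbol are assumptions made purely so the example can run without a custom library):

```d
// Hedged sketch of option 3 on a Linux system: load the C math library
// at runtime with dlopen and resolve "cos" by name through the C ABI.
// "libm.so.6" is Linux-specific; adjust the soname for other platforms.
import core.sys.posix.dlfcn : dlopen, dlsym, RTLD_LAZY;
import std.stdio : writeln;

void main()
{
    void* lib = dlopen("libm.so.6", RTLD_LAZY);
    if (lib is null)
    {
        // Linking failed at runtime; we decide what to do about it.
        writeln("could not load libm");
        return;
    }

    alias CosFn = extern(C) double function(double);
    auto cosPtr = cast(CosFn) dlsym(lib, "cos");
    assert(cosPtr !is null);
    assert(cosPtr(0.0) == 1.0); // cos(0) == 1, so the symbol really works
    writeln("ok");
}
```

If the library is absent, the program simply reports it and carries on, which is exactly the "do whatever you want about that" freedom of option 3.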


Dynamically binding to D code using extern(D)

2021-09-30 Thread Hipreme via Digitalmars-d-learn
I write this post as both a learning tool, a question and an 
inquiry.


There are just a lot of drawbacks in trying to do function 
exporting while using D.


That interface is absurdly confusing, and that is probably why 
I've never seen a project here that made use of extern(D) while 
using a DLL.


While I'm generating my DLLs, there are a lot of pitfalls 
that I can fall into.


**Simple Function**

```d
module something;

extern(D) export int sum(int a, int b) { return a + b; }
```

The correct way to bind to that function would be:

```d
module app;
import core.demangle;
import std.string : toStringz;

int function(int a, int b) sum;

void main()
{
    sum = cast(typeof(sum)) GetProcAddress(someDll,
        mangleFunc!(typeof(sum))("something.sum").toStringz);
}
```

And that should be it for loading a simple function.
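As a self-contained check (no DLL involved), the mangled name that the loader side asks for can be inspected directly; the module and function names below are just the ones from the example above:

```d
// Sketch: compute the extern(D) mangled name for something.sum without
// loading anything, just to see what GetProcAddress would be asked for.
import core.demangle : mangleFunc, demangle;
import std.stdio : writeln;

void main()
{
    alias SumPtr = int function(int a, int b);
    auto mangled = mangleFunc!SumPtr("something.sum");
    writeln(mangled);            // e.g. _D9something3sumFiiZi
    writeln(demangle(mangled));  // round-trips back to a readable signature
}
```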

Now, let's make our case a bit more complicated:

**Overloaded function**


```d
module something;

extern(D) export int add(int a, int b)
{
    return a + b;
}

extern(D) export float add(float a, float b)
{
    return a + b;
}
```


For loading those functions, the correct way would be

```d
module app;
import core.demangle;
import std.string : toStringz;

int function(int a, int b) sumInt;
float function(float a, float b) sumFloat;

int sum(int a, int b) { return sumInt(a, b); }
float sum(float a, float b) { return sumFloat(a, b); }

void main()
{
    sumInt = cast(typeof(sumInt)) GetProcAddress(dll,
        mangleFunc!(typeof(sumInt))("something.add").toStringz);
    sumFloat = cast(typeof(sumFloat)) GetProcAddress(dll,
        mangleFunc!(typeof(sumFloat))("something.add").toStringz);
}
```

Notice how the overall complexity starts to increase: there 
seems to be no way to enumerate the overloads, and there 
doesn't seem to be any advantage in using extern(D).
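For what it's worth, D can enumerate an overload set with `__traits(getOverloads)`, but only on the side that owns (or can import) the declarations, which is exactly what the loader side of a DLL boundary lacks. A minimal sketch (the `Math` struct is a stand-in for the example above):

```d
// Sketch: enumerating overload signatures with __traits(getOverloads).
// This helps code that can see the declarations; it does not help the
// loading side of a DLL boundary, which is the complaint above.
import std.stdio : writeln;

struct Math
{
    static int add(int a, int b) { return a + b; }
    static float add(float a, float b) { return a + b; }
}

void main()
{
    // Prints one function-pointer type per overload of "add".
    static foreach (overload; __traits(getOverloads, Math, "add"))
        writeln(typeof(&overload).stringof);
}
```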



**Static Methods**

The only difference from plain functions is that we need to 
append the class name to the module name.



**Static Methods returning user data**

This is the main reason I'm writing this post. It made me 
really wonder whether I should use extern(D) at all.



This section uses 3 files because, after all, there really 
is a (consistency?) problem.



```d
module supertest;
import ultratest;

class SuperTest
{
    extern(D) export static SuperTest getter() { return new SuperTest(); }
    extern(D) export static UltraTest ultraGetter() { return new UltraTest(); }

    import core.demangle;

    pragma(msg, mangleFunc!(typeof(&getter))("supertest.SuperTest.getter"));
    //Prints _D9supertest9SuperTest6getterFZCQBeQx

    pragma(msg, mangleFunc!(typeof(&ultraGetter))("supertest.SuperTest.ultraGetter"));
    //Prints _D9supertest9SuperTest11ultraGetterFZC9ultratest9UltraTest
}
```

```
module ultratest;
class UltraTest{}
```

```
module app;
import core.demangle;

void main()
{
   //???

}
```

As you can see in module supertest, the pattern seems to break 
when returning user data from another module. I don't know how 
I could load this function, especially because you would need to 
know both the module you're importing the function from and the 
module where the user data is defined.



It seems pretty insane to work with that.


extern(D) advantages:

-

extern(D) disadvantages:

- Code only callable from D (probably no other language has a 
demangler for it)
- I don't remember seeing any code like this before this post, 
so: no documentation at all
- You will need to call the demangler to bind to a symbol, 
which in my project can make each call for a unique type cost 
15KB of demangler code
- You will need to know the module from which you imported your 
function
- If your function returns user data from another module, there 
doesn't seem to be any workaround
- No support for binding overloads, even though the language 
itself supports overloading



extern(C) advantages:

- Code callable from any language as it is absolutely intuitive
- Well documented

extern(C) disadvantages:

- You will need to declare your function pointer as extern(C) or 
it will swap the argument order.




I have not even gotten into the case where I tried overloading 
static methods, which I think would require declaring aliases 
to the static methods' types in order to actually generate a 
mangled name.


I want to know if extern(D) is actually meant not to be touched. 
adr said that his use for it was actually when doing

```d
extern(C):
//Funcs defined here

extern(D): //Resets the linkage to the default one
```


So, there are just too many disadvantages to using extern(D) for 
binding to any code. I would like to know where we can get 
more documentation than what I posted here (really, I've never 
seen any code binding to extern(D) code). And I do believe that 
is the main reason people usually don't use dynamic libs in D: 
it is just unviable, as you would need to regenerate the whole 
API yourself.


Re: Rather Bizarre slow downs using Complex!float with avx (ldc).

2021-09-30 Thread Johan via Digitalmars-d-learn
On Thursday, 30 September 2021 at 16:40:03 UTC, james.p.leblanc 
wrote:

D-Ers,

I have been getting counterintuitive results on avx/no-avx 
timing

experiments.


This could be a template instantiation culling problem. If the 
compiler is able to determine that `Complex!float` is already 
instantiated (codegen) inside Phobos, then it may decide not to 
codegen it again when you are compiling your code with 
AVX+fastmath enabled. This could explain why you don't see 
improvement for `Complex!float`, but do see improvement with 
`Complex!double`. This does not explain the worse performance 
with AVX+fastmath vs without it.


Generally, for performance issues like this you need to study 
assembly output (`--output-s`) or LLVM IR (`--output-ll`).

First thing I would look out for is function inlining yes/no.

cheers,
  Johan



Rather Bizarre slow downs using Complex!float with avx (ldc).

2021-09-30 Thread james.p.leblanc via Digitalmars-d-learn

D-Ers,

I have been getting counterintuitive results on avx/no-avx timing
experiments.  Storyline to date (notes at end):

**Experiment #1)** Real float data type (i.e. non-complex 
numbers), speed comparison.
  a)  moving from non-avx --> avx shows a non-realistic speed up 
of 15-25X.
  b)  this is weird, but the story continues ...

**Experiment #2)** Real double data type (non-complex numbers):
  a)  moving from non-avx --> avx again shows amazing gains, but 
the gains are about half of those seen in Experiment #1, so 
maybe this looks plausible?

**Experiment #3)** Complex!float data types:
  a)  now **going from non-avx to avx shows a serious performance 
LOSS** of 40%, to breaking even at best. What is happening here?

**Experiment #4)** Complex!double:
  a)  non-avx --> avx shows performance gains again of about 2X 
(so the gains appear to be reasonable).


The main question I have is:

**"What is going on with the Complex!float performance?"** One 
might expect floats to have better performance than doubles, as 
we saw with the real-valued data (because of vector packing, 
memory bandwidth, etc).

But, **Complex!float shows MUCH WORSE avx performance than 
Complex!double (by a factor of almost 4).**

```d
//Table of Computation Times
//
//            self math                 std math
//      explicit   no-explicit    explicit   no-explicit
//       align        align        align        align
//       0.12         0.21         0.15         0.21   ;  # Float with AVX
//       3.23         3.24         3.30         3.22   ;  # Float without AVX
//       0.31         0.42         0.31         0.42   ;  # Double with AVX
//       3.25         3.24         3.24         3.27   ;  # Double without AVX
//       6.42         6.62         6.61         6.59   ;  # Complex!float with AVX
//       4.04         4.17         6.68         5.82   ;  # Complex!float without AVX
//       1.67         1.69         1.73         1.71   ;  # Complex!double with AVX
//       3.34         3.42         3.28         3.31      # Complex!double without AVX
```

Notes:

1) Based on forum hints from ldc experts, I got good guidance
   on enabling avx (i.e. compiling modules on the command line,
   using --fast-math and -mcpu=haswell).

2) From Mir-glas experts I received hints to try implementing my
   own version of the complex math (this is what the "self-math"
   column refers to).


I understand that details of the computations are not included 
here (I can do that if there is interest, and if I figure out an 
effective way to present it in a forum).

But I thought I might begin with a simple question: **"Is there 
some well-known issue that I am missing here?" Have others been 
down this road as well?**


Thanks for any and all input.
Best Regards,
James

PS  Sorry for the inelegant table ... I do not believe there is 
a way to include beautiful bar charts on this forum. Please 
correct me if there is a way ...



Code coverage exit code 1 on failure?

2021-09-30 Thread wjoe via Digitalmars-d-learn

What's the reasoning behind picking exit code 1?
It makes it pretty much impossible to distinguish between a 
lack-of-coverage exit code 1 and a process exit code 1.


Is there a handler where it can be overridden?


Re: Why sometimes stacktraces are printed and sometimes not?

2021-09-30 Thread wjoe via Digitalmars-d-learn
On Wednesday, 29 September 2021 at 12:15:30 UTC, Steven 
Schveighoffer wrote:

On 9/29/21 6:57 AM, JN wrote:
What makes the difference on whether a crash stacktrace gets 
printed or not?


Sometimes I get a nice clean stacktrace with line numbers, 
sometimes all I get is "segmentation fault error -1265436346" 
(pseudo example) and I need to run under debugger to get the 
crash location.


segmentation faults are memory access errors. It means you are 
accessing a memory address that is not valid for your 
application. If you are accessing the wrong memory, it means 
something is terribly wrong in your program.


[...]


So on Linux (I don't know the behavior on other OSes), the 
kernel sends SIGSEGV to your process which, if unhandled, simply 
terminates your program.
It's an abnormal termination, and thus the D runtime, or whatever 
library normally takes care of printing the traces, never gets a 
chance to do so.


You can also change the signal in your handler to get a core 
dump; look here: 
http://www.alexonlinux.com/how-to-handle-sigsegv-but-also-generate-core-dump




Re: Why sometimes stacktraces are printed and sometimes not?

2021-09-30 Thread bauss via Digitalmars-d-learn
On Wednesday, 29 September 2021 at 12:15:30 UTC, Steven 
Schveighoffer wrote:

On 9/29/21 6:57 AM, JN wrote:
What makes the difference on whether a crash stacktrace gets 
printed or not?


Sometimes I get a nice clean stacktrace with line numbers, 
sometimes all I get is "segmentation fault error -1265436346" 
(pseudo example) and I need to run under debugger to get the 
crash location.


segmentation faults are memory access errors. It means you are 
accessing a memory address that is not valid for your 
application. If you are accessing the wrong memory, it means 
something is terribly wrong in your program.


Note that on Windows in 32-bit mode, I believe you get a stack 
trace. On Linux, there is the undocumented 
`etc.linux.memoryerror` module, which allows you to register an 
error-throwing signal handler.


Signals are not really easy to deal with in terms of properly 
throwing an exception. This only works on Linux, so I don't 
know if it's possible to port to other OSes. I've also found 
sometimes that it doesn't work right, so I only enable it when 
I am debugging.


-Steve


You might also mention that even if you had a stacktrace of 
where the error happened, that's usually not where the error was 
caused; it's most likely a completely different place in the code.


The only time you can be somewhat sure where it happened is 
when you try to access e.g. a reference type that hasn't been 
instantiated.


That's why in languages like C# you don't get a segfault/access 
violation, but you get a NullReferenceException.


It's not a concept D has, so it defaults to a segfault/access 
violation.


Which means you're in really deep water when you encounter one 
because you have no idea what caused it and where it was caused.