Re: Introspection of exceptions that a function can throw

2021-02-25 Thread Arafel via Digitalmars-d-learn
You'd hit a very big wall with separate compilation unless you can 
inspect all the code, and know where to find it.


But you'd have a problem, for instance, if you are writing a plugin (.so 
/ DLL) for a product for which you only have .di files.


Or even worse the other way round: if you want to allow people to write 
plugins for your product, you can't know what they'll throw, even if 
they have your code, unless you enforce a `nothrow` interface.


But I guess that if you're not doing any of this, it should be 
possible... although I'd still do it as a separate pre-compilation step, 
so it could be cached.


On 26/2/21 3:21, James Blachly wrote:

On 2/24/21 2:38 PM, Mark wrote:
Is there a way to obtain a list, at compile-time, of all the exception 
types that a function might throw (directly or through a call to 
another function)?


Thanks.


Crazy idea:

Could a program import its own source file as a string (`string source = 
import("thisfile.d")`) together with the `-J` switch, then use a lexer/parser to 
generate an AST of the source code and extract the exceptions potentially 
thrown by given functions -- all at compile time?




Re: Optimizing for SIMD: best practices?(i.e. what features are allowed?)

2021-02-25 Thread Bruce Carneal via Digitalmars-d-learn

On Thursday, 25 February 2021 at 11:28:14 UTC, z wrote:
How does one optimize code to make full use of the CPU's SIMD 
capabilities?
Is there any way to guarantee that "packed" versions of SIMD 
instructions will be used? (e.g. vmulps, vsqrtps, etc.)
To give some context, this is a sample of one of the functions 
that could benefit from better SIMD usage:

float euclideanDistanceFixedSizeArray(float[3] a, float[3] b) {
  float distance;
  a[] -= b[];
  a[] *= a[];
  static foreach(size_t i; 0 .. 3/+typeof(a).length+/){
  distance += a[i].abs;//abs required by the caller
  }
  return sqrt(distance);
}
vmovsd xmm0,qword ptr ds:[rdx]
vmovss xmm1,dword ptr ds:[rdx+8]
vmovsd xmm2,qword ptr ds:[rcx+4]
vsubps xmm0,xmm0,xmm2
vsubss xmm1,xmm1,dword ptr ds:[rcx+C]
vmulps xmm0,xmm0,xmm0
vmulss xmm1,xmm1,xmm1
vbroadcastss xmm2,dword ptr ds:[<__real@7fff>]
vandps xmm0,xmm0,xmm2
vpermilps xmm3,xmm0,F5
vaddss xmm0,xmm0,xmm3
vandps xmm1,xmm1,xmm2
vaddss xmm0,xmm0,xmm1
vsqrtss xmm0,xmm0,xmm0
vmovaps xmm6,xmmword ptr ss:[rsp+20]
add rsp,38
ret


I've tried experimenting with dynamic arrays of float[3], but 
the output assembly seemed to be worse [1] (in short, it calls 
internal D functions that use "vxxxss" instructions while 
performing many moves).


Big thanks
[1] https://run.dlang.io/is/F3Xye3


If you are developing for deployment to a platform that has a 
GPU, you might consider going SIMT (dcompute) rather than SIMD.  
SIMT is a lot easier on the eyes.  More importantly, if you're 
targeting an SoC or console, or have relatively chunky 
computations that allow you to work around the PCIe transit 
costs, the path is open to very large performance improvements.  
I've only been using dcompute for a week or so, but so far it's 
been great.


If your algorithms are very branchy, or you decide to stick with 
multi-core/SIMD for any of a number of other good reasons, here 
are a few things I learned before decamping to dcompute land that 
may help:


  1)  LDC is pretty good at auto vectorization as you have 
probably observed.  Definitely worth a few iterations to try and 
get the vectorizer engaged.


  2)  LDC auto vectorization was good but explicit __vector 
programming is more predictable and was, at least for my tasks, 
much faster. I couldn't persuade the auto vectorizer to "do the 
right thing" throughout the hot path but perhaps you'll have 
better luck.


  3)  LDC does a good job of going between T[N] <==> 
__vector(T[N]), so using the static array types as your 
input/output types and the __vector types as your compute types 
works out well whenever you have to interface with an unaligned 
world. LDC issues unaligned vector loads/stores for casts or full 
array assigns: v = cast(VT)sa[];  or v[] = sa[];  These are quite 
good on modern CPUs.  To calibrate, note that Ethan recently 
talked about a 10% gain he experienced using full alignment, 
IIRC, so there's that.


  4) LDC also does a good job of discovering SIMD equivalents 
given static foreach unrolled loops with explicit compile-time 
indexing of vector element operands.  You can use those along 
with pragma(inline, true) to develop your own "intrinsics" that 
supplement other libs.


  5) If you adopt the __vector approach you'll have to handle the 
partials manually (array length % vector length != 0 indicates a 
partial or tail fragment).  If the classic (copying/padding) 
approaches to such fragmentation don't work for you, I'd suggest 
using nested static functions that take ref T[N] inputs and 
outputs.  The main loops become very simple and the tail handling 
reduces to loading stack-allocated T[N] variables explicitly, 
calling the static function, and unloading -- see the sketch 
below.
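
A compact sketch pulling points 3-5 together (assuming LDC and core.simd; 
hsum, kernel and scaleAll are names made up for this example, and the 
exact cast spelling may vary slightly by compiler version):

import core.simd : float4;

// Point 4: a tiny home-made "intrinsic" -- a static foreach over
// compile-time element indices, forced inline.
pragma(inline, true)
float hsum(float4 v)
{
    float sum = v[0];
    static foreach (size_t i; 1 .. 4)
        sum += v[i];
    return sum;
}

// Points 3 and 5: static arrays at the interface, __vector inside,
// tails handled through a nested static function taking ref T[N].
void scaleAll(float[] data, float factor)
{
    static void kernel(ref float[4] chunk, float factor)
    {
        float4 v = cast(float4) chunk;   // point 3: unaligned load via cast
        float4 f = factor;               // scalar broadcast to all lanes
        v *= f;
        chunk = cast(float[4]) v;        // unaligned store via cast
    }

    size_t i = 0;
    for (; i + 4 <= data.length; i += 4)                 // full chunks
        kernel(*cast(float[4]*) (data.ptr + i), factor);

    if (i < data.length)                                 // point 5: tail fragment
    {
        float[4] tmp = 0.0f;                             // stack-allocated T[N]
        tmp[0 .. data.length - i] = data[i .. $];        // load
        kernel(tmp, factor);
        data[i .. $] = tmp[0 .. data.length - i];        // unload
    }
}

The hot loop stays trivial, and the tail never touches memory outside 
the slice.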


Good luck.




Re: How do I check if a type is assignable to null at compile time?

2021-02-25 Thread Nathan S. via Digitalmars-d-learn

On Friday, 26 February 2021 at 05:34:26 UTC, Paul Backus wrote:

On Friday, 26 February 2021 at 05:25:14 UTC, Jack wrote:

I started with:

enum isAssignableNull(T) = is(T : Object) || isPointer!T;

but how do I cover all cases?


Something like this should work:

enum isAssignableNull(T) = __traits(compiles, (T t) => t = 
null);


`isAssignableNull!(immutable void*)` is true with his definition 
but false with yours. Of course you are correct that you cannot 
assign to an immutable pointer.
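
To make the distinction concrete (canConvertNull and canAssignNull are 
names made up for this illustration):

enum canConvertNull(T) = is(typeof(null) : T);                    // "null converts to T"
enum canAssignNull(T)  = __traits(compiles, (T t) => t = null);   // "a T lvalue accepts null"

static assert( canConvertNull!(immutable(void*)));  // null implicitly converts to it
static assert(!canAssignNull!(immutable(void*)));   // but an immutable pointer can't be reassigned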


Re: How do I check if a type is assignable to null at compile time?

2021-02-25 Thread Nathan S. via Digitalmars-d-learn

On Friday, 26 February 2021 at 05:25:14 UTC, Jack wrote:

I started with:

enum isAssignableNull(T) = is(T : Object) || isPointer!T;

but how do I cover all cases?


If I understand what you mean by "is assignable to null", this 
should do it:


---
enum bool isAssignableNull(T) = is(typeof(null) : T);

// Tests:

immutable string arrayslice = null;
static assert(isAssignableNull!(immutable string));

void function(int) funptr = null;
static assert(isAssignableNull!(typeof(funptr)));

int[int] aa = null;
static assert(isAssignableNull!(int[int]));
---



Re: How do I check if a type is assignable to null at compile time?

2021-02-25 Thread Paul Backus via Digitalmars-d-learn

On Friday, 26 February 2021 at 05:25:14 UTC, Jack wrote:

I started with:

enum isAssignableNull(T) = is(T : Object) || isPointer!T;

but how do I cover all cases?


Something like this should work:

enum isAssignableNull(T) = __traits(compiles, (T t) => t = null);


How do I check if a type is assignable to null at compile time?

2021-02-25 Thread Jack via Digitalmars-d-learn

I started with:

enum isAssignableNull(T) = is(T : Object) || isPointer!T;

but how do I cover all cases?


Re: Optimizing for SIMD: best practices?(i.e. what features are allowed?)

2021-02-25 Thread tsbockman via Digitalmars-d-learn

On Thursday, 25 February 2021 at 11:28:14 UTC, z wrote:

float euclideanDistanceFixedSizeArray(float[3] a, float[3] b) {


Use __vector(float[4]), not float[3].


  float distance;


The default value for float is float.nan. You need to explicitly 
initialize it to 0.0f or something if you want this function to 
actually do anything useful.



  a[] -= b[];
  a[] *= a[];


With __vector types, this can be simplified (not optimized) to 
just:

a -= b;
a *= a;


  static foreach(size_t i; 0 .. 3/+typeof(a).length+/){
  distance += a[i].abs;//abs required by the caller


(a * a) above is always non-negative for real numbers. You don't 
need the call to abs unless you're trying to guarantee that even 
nan values will have a clear sign bit.


Also, there is no point in adding the first component to zero, 
and copying element [0] from a SIMD register into a scalar is 
free, so this can become:


float distance = a[0];
static foreach(size_t i; 1 .. 3)
distance += a[i];


  }
  return sqrt(distance);
}
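
Putting those changes together gives something like the following sketch 
(the signature that produced the assembly below isn't shown in the post, 
so __vector parameters are assumed here):

import core.simd : float4;
import std.math : sqrt;

float euclideanDistanceFixedSizeArray(float4 a, float4 b)
{
    a -= b;
    a *= a;                      // squares are already non-negative, no abs needed

    float distance = a[0];       // reading element [0] of a SIMD register is free
    static foreach (size_t i; 1 .. 3)
        distance += a[i];        // only the first three lanes carry data
    return sqrt(distance);
}

With the nan initialization bug gone and abs dropped, the packed 
subtract/multiply survive into the assembly below.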


Final assembly output (ldc 1.24.0 with -release -O3 
-preview=intpromote -preview=dip1000 -m64 -mcpu=haswell 
-fp-contract=fast -enable-cross-module-inlining):


vsubps  xmm0, xmm1, xmm0
vmulps  xmm0, xmm0, xmm0
vmovshdup   xmm1, xmm0
vaddss  xmm1, xmm0, xmm1
vpermilpd   xmm0, xmm0, 1
vaddss  xmm0, xmm0, xmm1
vsqrtss xmm0, xmm0, xmm0
ret


Re: Optimizing for SIMD: best practices?(i.e. what features are allowed?)

2021-02-25 Thread tsbockman via Digitalmars-d-learn

On Thursday, 25 February 2021 at 11:28:14 UTC, z wrote:
Is there any way to guarantee that "packed" versions of SIMD 
instructions will be used? (e.g. vmulps, vsqrtps, etc.)
To give some context, this is a sample of one of the functions 
that could benefit from better SIMD usage:

float euclideanDistanceFixedSizeArray(float[3] a, float[3] b) {


You need to use __vector(float[4]) instead of float[3] to tell 
the compiler to pack multiple elements per SIMD register. Right 
now your data lacks proper alignment for SIMD loads/stores.


Beyond that, SIMD code is rather difficult to optimize. Code 
written in ignorance or in a rush is unlikely to be meaningfully 
faster than ordinary scalar code, unless the data flow is very 
simple. You will probably get a bigger speedup for less effort 
and pain by first minimizing heap allocations, maximizing 
locality of reference, minimizing indirections, and minimizing 
memory use. (And, of course, it should go without saying that 
choosing an asymptotically efficient high-level algorithm is more 
important than any micro-optimization for large data sets.) 
Nevertheless, if you are up to the challenge, SIMD can sometimes 
provide a final 2-3x speed boost.


Your algorithms will need to be designed to minimize mixing of 
data between SIMD channels, as this forces the generation of lots 
of extra instructions to swizzle the data, or worse to unpack and 
repack it. Something like a Cartesian dot product or cross 
product will benefit much less from SIMD than vector addition, 
for example. Sometimes the amount of swizzling can be greatly 
reduced with a little algebra, other times you might need to 
refactor an array of structures into a structure of arrays.
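
For example, the refactoring mentioned at the end of that paragraph looks 
roughly like this (illustrative types):

// Array of structures: x, y and z of one point sit next to each other,
// so SIMD code must constantly swizzle components in and out of lanes.
struct PointAoS { float x, y, z; }
PointAoS[] pointsAoS;

// Structure of arrays: each component is contiguous, so a whole register
// of x values (then y, then z) can be loaded with a single packed load.
struct PointsSoA
{
    float[] x;
    float[] y;
    float[] z;
}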


Per-element conditional branches are very bad, and often 
completely defeat the benefits of SIMD. For very short segments 
of code (like conditional assignment), replace them with a SIMD 
conditional move (vcmp and vblend). Bit-twiddling is your friend.
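
As a small illustration of the "conditional assignment" case (scalar 
source shown; whether it becomes vcmpps/vblendvps or vmaxps is up to the 
optimizer, so treat this as a sketch):

// Branch-free clamp-to-zero: written as a select instead of a
// per-element `if`, the loop is an easy target for the auto-vectorizer.
void clampNegativesToZero(float[] data)
{
    foreach (ref x; data)
        x = (x > 0.0f) ? x : 0.0f;   // typically lowered to a max/blend, not a branch
}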


Finally, do not trust the compiler or the optimizer. People love 
to make the claim that "The Compiler" is always better than 
humans at micro-optimizations, but this is not at all the case 
for SIMD code with current systems. I have found even LLVM to 
produce quite bad SIMD code for complex algorithms, unless I 
carefully structure my code to make it as easy as possible for 
the optimizer to get to the final assembly I want. A sprinkling 
of manual assembly code (directly, or via a library) is also 
necessary to fill in certain instructions that the compiler 
doesn't know when to use at all.


Resources I have found very helpful:

Matt Godbolt's Compiler Explorer online visual disassembler 
(supports D):

https://godbolt.org/

Felix Cloutier's x86 and amd64 instruction reference:
https://www.felixcloutier.com/x86/

Agner Fog's optimization guide (especially the instruction 
tables):

https://agner.org/optimize/


Re: Introspection of exceptions that a function can throw

2021-02-25 Thread James Blachly via Digitalmars-d-learn

On 2/24/21 2:38 PM, Mark wrote:
Is there a way to obtain a list, at compile-time, of all the exception 
types that a function might throw (directly or through a call to another 
function)?


Thanks.


Crazy idea:

Could a program import its own source file as a string (`string source = 
import("thisfile.d")`) together with the `-J` switch, then use a lexer/parser to 
generate an AST of the source code and extract the exceptions potentially 
thrown by given functions -- all at compile time?
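
The string-import half of the idea is straightforward; a minimal sketch 
(assumes the file itself is named thisfile.d and is compiled with `-J.` 
so its directory is on the string-import path; the parsing step is left out):

// Pulls this file's own text in as a compile-time string constant.
enum string source = import("thisfile.d");

// A real implementation would now hand `source` to a CTFE-capable
// lexer/parser and walk the resulting AST for `throw` statements
// and the callees' throw sites.
static assert(source.length > 0);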


Re: Compile-Time Function Parameters That Aren't Types?

2021-02-25 Thread Kyle Ingraham via Digitalmars-d-learn

On Wednesday, 24 February 2021 at 20:15:08 UTC, Ali Çehreli wrote:

On 2/23/21 7:52 PM, Kyle Ingraham wrote:


Where would one find information on this


There are Point and Polygon struct templates on the following 
page, where one can pick the dimension (e.g. three-dimensional 
space) via a size_t template parameter.



http://ddili.org/ders/d.en/templates_more.html#ix_templates_more.value%20template%20parameter

Ali


Thank you very much Ali. This is another great example. Your book 
has been most helpful!
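
For readers skimming the digest, the pattern on that page boils down to a 
value (non-type) template parameter, roughly like this (illustrative 
definition, not the one from the book):

// The dimension is an ordinary size_t value fixed at compile time.
struct Point(size_t dim)
{
    double[dim] coords;
}

alias Point3D = Point!3;   // three-dimensional space picked via the value parameter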


tiny alternative to std library

2021-02-25 Thread Anthony via Digitalmars-d-learn

Hello,

I noticed that importing some std libraries causes the build time 
to jump to around 1 - 3 secs.


I started creating my own helper functions to avoid importing std 
for scripting and prototyping in order to keep the compile time 
at around 0.5 secs.


I was wondering if anyone knows of any libraries that are geared 
towards something like this?


Thanks


Re: DUB is not working correctly

2021-02-25 Thread user1234 via Digitalmars-d-learn

On Thursday, 25 February 2021 at 17:34:29 UTC, Maxim wrote:
In my opinion, that happens because DUB doesn't pick up my 
changes to the file. But how do I make the package manager accept 
them? I apologize to everyone who understood me wrong.


If you use an IDE to edit the DUB recipe, maybe you forgot to 
save the project before retriggering a rebuild. If so, the file 
used by DUB is still in its previous state.





Re: DUB is not working correctly

2021-02-25 Thread Maxim via Digitalmars-d-learn

On Thursday, 25 February 2021 at 17:47:57 UTC, user1234 wrote:

On Thursday, 25 February 2021 at 17:34:29 UTC, Maxim wrote:
In my opinion, that happens because DUB doesn't pick up my 
changes to the file. But how do I make the package manager accept 
them? I apologize to everyone who understood me wrong.


If you use an IDE to edit the DUB recipe, maybe you forgot to 
save the project before retriggering a rebuild. If so, the file 
used by DUB is still in its previous state.


I'm using a text editor.


Re: DUB is not working correctly

2021-02-25 Thread Maxim via Digitalmars-d-learn

On Thursday, 25 February 2021 at 17:34:29 UTC, Maxim wrote:

On Wednesday, 24 February 2021 at 16:13:48 UTC, Maxim wrote:

[...]


I think I need to rephrase the question for the present 
situation: how can I force DUB to change targetName? It 
doesn't read my changes in dub.json, and I don't know why. In 
my opinion, that happens because DUB doesn't pick up my changes 
to the file. But how do I make the package manager accept them? 
I apologize to everyone who understood me wrong.


And my problem has been replaced by another! Now DUB tells me to 
set targetName, giving this error:

'No target name set.'


Re: DUB is not working correctly

2021-02-25 Thread Maxim via Digitalmars-d-learn

On Wednesday, 24 February 2021 at 16:13:48 UTC, Maxim wrote:
Hello, I have problems working in the dub environment. If I 
init my project with 'dub init', all the needed files are 
created successfully. However, when I run 'dub run', the 
manager gives me an error:


'Configuration 'application' of package application contains no 
source files. Please add {"targetType": "none"} to its package 
description to avoid building it.
Package with target type "none" must have dependencies to 
build.'


If I set targetType in dub.json to "none", only this message remains:
 'Package with target type "none" must have dependencies to 
build.'


When I set targetName to "app" or any other name, the problem 
appears again with the same message as above. I have flicked 
through many forums and videos on how to correctly install 
dub, and nothing has helped.


I am using dub 1.23.0, built on Sep 25 2020, on Windows 10 x64. 
I will be very thankful for your help!


I think I need to rephrase the question for the present 
situation: how can I force DUB to change targetName? It doesn't 
read my changes in dub.json, and I don't know why. In my opinion, 
that happens because DUB doesn't pick up my changes to the file. 
But how do I make the package manager accept them? I apologize to 
everyone who understood me wrong.
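
For comparison, a minimal dub.json that builds an executable (assuming the 
usual layout with the sources under source/, e.g. source/app.d; the 
package name is made up):

{
    "name": "myapp",
    "targetType": "executable",
    "targetName": "app"
}

The "contains no source files" error usually points at the layout (no 
source/ or src/ directory with a .d file next to the recipe) rather than 
at targetName, and an unsaved editor buffer produces exactly the 
"my changes are ignored" symptom described above.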


Re: byte array to string

2021-02-25 Thread Ali Çehreli via Digitalmars-d-learn

On 2/24/21 10:58 PM, FeepingCreature wrote:

On Thursday, 25 February 2021 at 06:57:57 UTC, FeepingCreature wrote:

On Thursday, 25 February 2021 at 06:47:11 UTC, Mike wrote:

hi all,

If I have an array:
byte[3] arr = [1, 2, 3];

How do I get the string "123" from it?

Thanks in advance.


string str = format!"%(%s)"(array);


Er, sorry, typo: that should be "%(%s%)" -- "print the elements of the 
array, separated by nothing." Compare "%(%s, %)" for a comma-separated 
list.


I have an explanation of that syntax here:

  https://youtu.be/dRORNQIB2wA?t=981

Ali
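
For completeness, the corrected call in runnable form (a minimal sketch):

import std.format : format;

void main()
{
    byte[3] arr = [1, 2, 3];
    string str = format!"%(%s%)"(arr[]);   // elements, joined by nothing
    assert(str == "123");
}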


Re: How to get output of piped process?

2021-02-25 Thread kdevel via Digitalmars-d-learn

On Monday, 22 February 2021 at 13:23:40 UTC, Danny Arends wrote:

On Friday, 19 February 2021 at 15:39:25 UTC, kdevel wrote:


[...]

Fortunately the D runtime /does/ take care and it throws---if the 
signal is ignored beforehand. I filed issue 21649.


[...]


Perhaps a bit late,


It's never too late.™ :-)


but this is how I deal with pipes and spawnShell.
Read one byte at a time from stdout and stderr:

https://github.com/DannyArends/DaNode/blob/master/danode/process.d


Is this immune to SIGPIPE, and is this design able to serve 
infinite streams? BTW: why does run use spawnShell and not 
spawnProcess (which would save one File object)?

If the design is not intended to serve infinite streams, I would 
suggest opening two temporary files, "out" and "err", deleting 
them, and letting the child process write its stdout/stderr into 
those files. AFAICS this avoids threads, sleep, pipes, and 
reading with fgetc.
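
A sketch of that temp-file approach (POSIX-flavoured: the names are 
removed while the descriptors stay open; error handling is omitted, and 
the "ls" command is just a stand-in):

import std.file : remove;
import std.process : Config, spawnProcess, wait;
import std.stdio : File, stdin, writeln;

void main()
{
    auto outFile = File("out.tmp", "w+b");
    auto errFile = File("err.tmp", "w+b");
    remove("out.tmp");   // POSIX: the open descriptors keep the data reachable
    remove("err.tmp");

    // retainStdout/retainStderr keep our File objects open in the parent;
    // by default spawnProcess would close them after handing them to the child.
    auto pid = spawnProcess(["ls", "-l"], stdin, outFile, errFile, null,
                            Config.retainStdout | Config.retainStderr);
    wait(pid);

    outFile.rewind();                     // read back what the child wrote
    foreach (line; outFile.byLine)
        writeln("child stdout: ", line);
}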


Re: Optimizing for SIMD: best practices?(i.e. what features are allowed?)

2021-02-25 Thread Guillaume Piolat via Digitalmars-d-learn

On Thursday, 25 February 2021 at 11:28:14 UTC, z wrote:
How does one optimize code to make full use of the CPU's SIMD 
capabilities?
Is there any way to guarantee that "packed" versions of SIMD 
instructions will be used? (e.g. vmulps, vsqrtps, etc.)


https://code.dlang.org/packages/intel-intrinsics
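
A quick taste of what that looks like (a sketch assuming the package's 
Intel-style naming; here, the squared distance from the question's example):

import inteli.xmmintrin;   // SSE intrinsics from the intel-intrinsics package

float squaredDistance(const float[4] a, const float[4] b)
{
    __m128 va = _mm_loadu_ps(a.ptr);   // unaligned packed loads
    __m128 vb = _mm_loadu_ps(b.ptr);
    __m128 d  = _mm_sub_ps(va, vb);    // packed subtract
    d = _mm_mul_ps(d, d);              // packed multiply

    float[4] tmp;
    _mm_storeu_ps(tmp.ptr, d);
    return tmp[0] + tmp[1] + tmp[2];   // only three lanes carry data
}

The appeal is that the _mm_* spelling makes the intended packed 
instructions explicit rather than hoping the auto-vectorizer finds them.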




Re: DUB is not working correctly

2021-02-25 Thread Maxim via Digitalmars-d-learn

On Thursday, 25 February 2021 at 13:07:08 UTC, Siemargl wrote:

On Thursday, 25 February 2021 at 12:49:05 UTC, Maxim wrote:

Wait, I have just noticed that you were talking about the dub 
that comes with dmd. So, was dub on your computer right after the 
compiler installation, or did you install it manually?


DMD has come together with dub for many years.

Look in the dmd2\windows\bin\ folder.


I thought that if I ran the native dub (without installing it 
from GitHub), everything would be fine, but it isn't.


Re: DUB is not working correctly

2021-02-25 Thread Siemargl via Digitalmars-d-learn

On Thursday, 25 February 2021 at 12:49:05 UTC, Maxim wrote:

Wait, I have just noticed that you were talking about the dub 
that comes with dmd. So, was dub on your computer right after the 
compiler installation, or did you install it manually?


DMD has come together with dub for many years.

Look in the dmd2\windows\bin\ folder.




Re: DUB is not working correctly

2021-02-25 Thread Maxim via Digitalmars-d-learn

On Wednesday, 24 February 2021 at 19:48:10 UTC, Siemargl wrote:

On Wednesday, 24 February 2021 at 19:42:05 UTC, Maxim wrote:

On Wednesday, 24 February 2021 at 19:35:46 UTC, Siemargl wrote:
On Wednesday, 24 February 2021 at 19:18:32 UTC, Siemargl 
wrote:

[...]


Yes, I see the error.
If in your dub.json you change > "targetType": "executable",

[...]



I tried it, and everything seems to be okay with underscores, but 
the error message about the targetType appears even when I start 
a new pure project without any dependencies.


I just downloaded the latest dmd 2.095.1; it comes with dub 1.24.1.

It works fine on an empty 'dub init' project, but the underscore 
problem with dsfml is still here.


Wait, I have just noticed that you were talking about the dub 
that comes with dmd. So, was dub on your computer right after the 
compiler installation, or did you install it manually?


Optimizing for SIMD: best practices?(i.e. what features are allowed?)

2021-02-25 Thread z via Digitalmars-d-learn
How does one optimize code to make full use of the CPU's SIMD 
capabilities?
Is there any way to guarantee that "packed" versions of SIMD 
instructions will be used? (e.g. vmulps, vsqrtps, etc.)
To give some context, this is a sample of one of the functions 
that could benefit from better SIMD usage:

float euclideanDistanceFixedSizeArray(float[3] a, float[3] b) {
  float distance;
  a[] -= b[];
  a[] *= a[];
  static foreach(size_t i; 0 .. 3/+typeof(a).length+/){
  distance += a[i].abs;//abs required by the caller
  }
  return sqrt(distance);
}
vmovsd xmm0,qword ptr ds:[rdx]
vmovss xmm1,dword ptr ds:[rdx+8]
vmovsd xmm2,qword ptr ds:[rcx+4]
vsubps xmm0,xmm0,xmm2
vsubss xmm1,xmm1,dword ptr ds:[rcx+C]
vmulps xmm0,xmm0,xmm0
vmulss xmm1,xmm1,xmm1
vbroadcastss xmm2,dword ptr ds:[<__real@7fff>]
vandps xmm0,xmm0,xmm2
vpermilps xmm3,xmm0,F5
vaddss xmm0,xmm0,xmm3
vandps xmm1,xmm1,xmm2
vaddss xmm0,xmm0,xmm1
vsqrtss xmm0,xmm0,xmm0
vmovaps xmm6,xmmword ptr ss:[rsp+20]
add rsp,38
ret


I've tried experimenting with dynamic arrays of float[3], but the 
output assembly seemed to be worse [1] (in short, it calls 
internal D functions that use "vxxxss" instructions while 
performing many moves).


Big thanks
[1] https://run.dlang.io/is/F3Xye3