Re: Which language features make D overcomplicated?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d

On Friday, 9 February 2018 at 21:05:10 UTC, H. S. Teoh wrote:
On Fri, Feb 09, 2018 at 08:49:24PM +, Meta via 
Digitalmars-d wrote: [...]
I think the perception of D being complicated is more from 
programmers coming from Python/Ruby/JS (and to a lesser 
extent, Haskell/Scheme/Java). D is quite different if you're 
coming from a "VM" or "scripting" language because it exposes 
you to a lot of new concepts such as static typing, value 
types, templates, monomorphization, immutability, memory 
layout, linking and compilation, compile-time vs. runtime, 
etc. It's not that these programmers are less skilled or less 
knowledgeable; it's that if they've never used a language that 
has forced them to consider these concepts, then it looks to 
them like D is a massive step up in complexity compared to the 
language that they're used to.


I think if you asked 100 C++ programmers whether they thought 
D was a complicated language, 99 of them would say no. If you 
asked 100 Python programmers, 99 would probably say yes.


Thanks for this very insightful post.

Before reading this, I couldn't understand why people thought D 
was complex... I come from a strong C/C++ background, so to me 
D is like a breath of fresh air in terms of understandability, 
flexibility, and verbosity level. "Complex" certainly isn't 
what I'd think of when I think about D.  But I suppose if 
someone is coming primarily from a Python background, D could 
certainly be considered quite a step up in perceived complexity!


I've done lots of C++ (though more in the earlier years), and I 
have to disagree.  I'd agree C++11 is more complicated than D, 
but D is still complicated.  I think I've programmed in enough 
languages (asm, Perl, Java, ...) and on large enough projects 
to have a good idea of what languages can be like.


I'll probably continue to stick it out and play with D for 
personal projects because of the things I like and find 
interesting, but professionally it's a no-go (pardon the pun).


Frankly, I think it is doomed to be a niche-use language.  While 
many more things were done right compared to C++, too many things 
were done wrong and there doesn't seem to be interest in breaking 
backward compatibility to excise them from D.




Re: Which language features make D overcomplicated?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d

On Friday, 9 February 2018 at 16:33:21 UTC, bachmeier wrote:
On Friday, 9 February 2018 at 16:05:52 UTC, Ralph Doncaster 
wrote:


It might be clear and simple to you, but it's not to me.  And 
I'm a rather advanced developer.
While there are lots of things I like about D compared to C++, 
such as getting rid of #include hell, there are too many "messy" 
things, and the learning curve is too steep for me to consider 
suggesting it for any consulting projects.  I think it could've 
been better if there had been more focus on keeping the 
language (and standard library) clean and simple instead of 
making it more like a swiss army knife.


When I read things like that page, I think "Haskell's not that 
bad".


So far a strategy that has worked for me is to ignore most of 
that stuff. Must be my C background.


What I enjoy most is assembler programming in RISC-like 
instruction sets.  Due to the cost of silicon, it's much less 
common for them to have multiple different instructions that do 
exactly the same thing.


Re: uint[3] not equivalent to void[12]?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d-learn

On Friday, 9 February 2018 at 16:28:50 UTC, Nicholas Wilson wrote:
On Friday, 9 February 2018 at 15:50:24 UTC, Ralph Doncaster 
wrote:
Is there a way I can make a function that takes an array of any 
type but only of a specific size in bytes?


With a template that constrains the types size:

void foo(T)(T t) if (T.sizeof == 12)
{
    //...
}

Alternatively, reject incorrect sizes at runtime:

void foo(ubyte[] t)
in
{
    assert(t.length == 12);
}
do
{
    //...
}
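
For reference, a usage sketch for the template version above 
(not from the thread): the constraint accepts any 12-byte 
argument and rejects other sizes at compile time.

void foo(T)(T t) if (T.sizeof == 12)
{
    // T.sizeof is checked at compile time, so any 12-byte
    // static array instantiates; other sizes fail to compile.
}

void main()
{
    uint[3]   a;
    ubyte[12] b;
    ushort[6] c;
    foo(a); // ok: 12 bytes
    foo(b); // ok: 12 bytes
    foo(c); // ok: 12 bytes
    // uint[4] d; foo(d); // error: 16 bytes, no match for foo
}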


Thanks.  That's what I thought.  If I stick with using D, I'll 
probably go with the runtime check.




Re: Which language features make D overcomplicated?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d

On Friday, 9 February 2018 at 15:46:56 UTC, Mike Parker wrote:
On Friday, 9 February 2018 at 15:37:12 UTC, Ralph Doncaster 
wrote:



I think you are proving my point.  You say there is no 
difference between:

const MAX_IN = 20;
vs
immutable MAX_IN = 20;

So now I have to try both, and look at the generated code to 
be sure.


Or read the docs:

https://dlang.org/spec/const3.html

p.s. I prefer const since it is easier for C/C++ coders to 
understand.  Using immutable invites the coder to go down the 
whole rat hole of trying to understand how it differs from 
const.


It's not a rathole. The document page above explains the 
differences rather well. They only happen to be identical when 
initialized with compile-time constants.
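
For illustration, the difference shows up once references are 
involved: const data may still be mutated through another 
reference, while immutable data can never change. A minimal 
sketch (not from the thread):

void main()
{
    int x = 1;
    const(int)* cp = &x;        // ok: a read-only view of mutable data
    // immutable(int)* ip = &x; // error: x can still be mutated
    x = 2;                      // legal; *cp now reads 2
    immutable int y = 3;        // nothing may ever modify y
}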


Well this part of the docs is a rathole to me:
https://dlang.org/spec/const3.html#implicit_qualifier_conversions

It might be clear and simple to you, but it's not to me.  And I'm 
a rather advanced developer.
While there are lots of things I like about D compared to C++, 
such as getting rid of #include hell, there are too many "messy" 
things, and the learning curve is too steep for me to consider 
suggesting it for any consulting projects.  I think it could've 
been better if there had been more focus on keeping the 
language (and standard library) clean and simple instead of 
making it more like a swiss army knife.




Re: uint[3] not equivalent to void[12]?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d-learn

On Friday, 9 February 2018 at 15:24:27 UTC, Mike Parker wrote:
On Friday, 9 February 2018 at 15:05:33 UTC, Ralph Doncaster 
wrote:
This seems odd to me.  Is there a way I can make a function 
that takes an array of any type but only of a specific size in 
bytes?


void.d(8): Error: function void.foo (void[12] arr) is not 
callable using argument types (uint[3])

Failed: ["/usr/bin/dmd", "-v", "-o-", "void.d", "-I."]
void foo(void[12] arr)
{
}

void main()
{
    uint[3] arr;
    foo(arr);
}


void has no size, so what does it mean to have 12 of them?


According to the docs and my testing, the size of a void array 
element is 1, so the following code prints 12:

import std.stdio;

void foo(void[] arr)
{
    writeln("length: ", arr.length);
}

void main()
{
    uint[3] arr;
    foo(arr);
}

I thought about using templates, but I was looking for a simple 
way of making a function that takes an array of 12 bytes, whether 
it is uint[3], ubyte[12], or ushort[6].
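
A void[] parameter with a runtime length check covers all three 
cases in one non-template function, since a void[] length counts 
bytes. A minimal sketch (the function name is hypothetical):

import std.stdio;

void takes12Bytes(void[] arr)
{
    assert(arr.length == 12); // void[] length is measured in bytes
    auto bytes = cast(ubyte[]) arr; // reinterpret as needed inside
    writeln(bytes.length, " bytes");
}

void main()
{
    uint[3]   a;
    ubyte[12] b;
    ushort[6] c;
    takes12Bytes(a);
    takes12Bytes(b);
    takes12Bytes(c);
}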




Re: Which language features make D overcomplicated?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d

On Friday, 9 February 2018 at 15:09:10 UTC, Mike Parker wrote:
On Friday, 9 February 2018 at 14:59:38 UTC, Ralph Doncaster 
wrote:



const auto MAX_IN = 20;


const MAX_IN = 20;

The auto is superfluous and is only needed when there's no 
storage class or type.




Others say to use enums.  It turns out enums seem to be the 
best, as they don't create a symbol in the object file, but 
const auto does.


If you need to take the address of a constant, use immutable or 
const (doesn't really matter, but I prefer immutable). If you 
don't need the address, use enum.


I think you are proving my point.  You say there is no difference 
between:

const MAX_IN = 20;
vs
immutable MAX_IN = 20;

So now I have to try both, and look at the generated code to be 
sure.
p.s. I prefer const since it is easier for C/C++ coders to 
understand.  Using immutable invites the coder to go down the 
whole rat hole of trying to understand how it differs from 
const.
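
For reference, a sketch of the enum-vs-immutable advice quoted 
above (not from the thread): the observable difference is 
whether the constant has an address.

enum MAX_ENUM = 20;     // manifest constant: pasted at each use site
immutable MAX_IMM = 20; // has storage, so it appears in the object file

void main()
{
    auto p = &MAX_IMM;     // ok: an immutable has an address
    // auto q = &MAX_ENUM; // error: a manifest constant has no address
}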




Re: Which language features make D overcomplicated?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d

On Friday, 9 February 2018 at 15:03:02 UTC, Adam D. Ruppe wrote:
On Friday, 9 February 2018 at 14:59:38 UTC, Ralph Doncaster 
wrote:
The docs say using "private" will keep the symbol out of the 
file, but that is not true.

https://dlang.org/spec/attribute.html#static


That sentence isn't well written, since D's private and C's 
static are different... what D's private does is make the symbol 
invisible to other D files; when you import the module, it 
doesn't import the private names, so they act as if they don't 
exist.


But they do still exist in the object file. I'm pretty sure C's 
statics do too, actually, just without an exported name...


Yes, C's "const static" will still create a symbol in the object 
file.  I just checked with gcc5, and it creates a local readonly 
symbol "r".


My point is that in D, the docs say "private" will keep the 
definition out of the object file.  I think you'll have to agree 
that it's at least confusing.
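
For reference, a sketch of what private does guarantee, which is 
visibility rather than object-file contents (module names 
hypothetical):

// lib.d
module lib;
private immutable secret = 42; // hidden from importers, but may still
                               // be emitted into lib.o

// app.d
import lib;
// auto x = secret; // error: lib.secret is not visible from here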




uint[3] not equivalent to void[12]?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d-learn
This seems odd to me.  Is there a way I can make a function that 
takes an array of any type but only of a specific size in bytes?


void.d(8): Error: function void.foo (void[12] arr) is not 
callable using argument types (uint[3])

Failed: ["/usr/bin/dmd", "-v", "-o-", "void.d", "-I."]
void foo(void[12] arr)
{
}

void main()
{
    uint[3] arr;
    foo(arr);
}



Re: Which language features make D overcomplicated?

2018-02-09 Thread Ralph Doncaster via Digitalmars-d

On Friday, 9 February 2018 at 07:54:49 UTC, Suliman wrote:

Which language features, in your opinion, make D harder?


Too many choices.  I tend to obsess over the best way to do 
something, and when the language gives me several options I want 
to try them all.  One example is constants (that would typically 
be #define in C).  From reading forum posts, I've seen many 
suggest:

const auto MAX_IN = 20;

Others say to use enums.  It turns out enums seem to be the 
best, as they don't create a symbol in the object file, but 
const auto does.  The docs say using "private" will keep the 
symbol out of the file, but that is not true.

https://dlang.org/spec/attribute.html#static




Re: missing HexString documentation

2018-02-08 Thread Ralph Doncaster via Digitalmars-d
On Thursday, 8 February 2018 at 18:49:51 UTC, Steven 
Schveighoffer wrote:
I wonder if it's an issue with how obj2asm prints it out? 
Surely, that data array must be contiguous, and they must be 
bytes. Otherwise the resulting code would be wrong.


OK.  I didn't even know about obj2asm until you mentioned it.  
objdump seems to work perfectly fine on the .o's that dmd 
generates, and I can tell that x"deadbeef" generates 4 contiguous 
bytes (objdump -D):


Disassembly of section .rodata.str1.1:

 <_TMP0>:
   0:   de  .byte 0xde
   1:   ad  lods   %ds:(%rsi),%eax
   2:   be  .byte 0xbe
   3:   ef  out%eax,(%dx)
...



Re: missing HexString documentation

2018-02-08 Thread Ralph Doncaster via Digitalmars-d

On Thursday, 8 February 2018 at 18:31:06 UTC, Walter Bright wrote:

On 2/8/2018 5:26 AM, Steven Schveighoffer wrote:
The extra data in the object file comes from the inclusion of 
the hexStringImpl function, and from the template parameter 
(the symbol 
_D3std4conv__T9hexStringVAyaa8_6465616462656566ZQBiyAa is in 
there as well, which will always be larger than the actual 
string passed to hexString).


I also see the data in there twice for some reason.


This is no longer the case with the PR.

  import std.conv;

  void test() {
__gshared immutable char[4] s = hexString!"deadbeef";
  }

produces the following, with no sign of the template and the 
data is there only once:


_DATA   segment
_D5test24testFZ1syG4a:
db  0ffdeh,0ffadh,0ffbeh,0ffefh ;
_DATA   ends


But it looks like they are all dchar, so 4x the space vs 
x"deadbeef"?


Re: missing HexString documentation

2018-02-08 Thread Ralph Doncaster via Digitalmars-d

On Thursday, 8 February 2018 at 17:06:55 UTC, H. S. Teoh wrote:
On Thu, Feb 08, 2018 at 08:26:03AM -0500, Steven Schveighoffer 
via Digitalmars-d wrote: [...]
The extra data in the object file comes from the inclusion of 
the hexStringImpl function, and from the template parameter 
(the symbol 
_D3std4conv__T9hexStringVAyaa8_6465616462656566ZQBiyAa is in 
there as well, which will always be larger than the actual 
string passed to hexString).

[...]

This is one area that really should be improved.  Is there some 
easy way in the compiler to mark a template function as "only 
used in CTFE", and not emit it into the object file if there 
are no other runtime references to it?  I'm thinking of some 
kind of boolean attribute that defaults to false, and gets set 
if the function is referenced by runtime code.  During codegen, 
any function that doesn't have this attribute set will be 
skipped over.


My speculation is that this would lead to a good amount of 
reduction in template bloat, given how pervasively CTFE is used 
in Phobos (and idiomatic D in general).


Or maybe you can get away with just using a good compiler/linker 
that supports LTO.  It's quite mature in GCC now, so it's 
probably worth trying with GDC.

http://hubicka.blogspot.ca/2014/04/linktime-optimization-in-gcc-1-brief.html
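
A sketch of the suggested invocation (these are standard GCC LTO 
flags, which gdc inherits; not verified against this particular 
code):

gdc -O2 -flto app.d -o app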



Re: My choice to pick Go over D (and Rust), mostly non-technical

2018-02-08 Thread Ralph Doncaster via Digitalmars-d
On Thursday, 8 February 2018 at 15:59:28 UTC, Nicholas Wilson 
wrote:
On Wednesday, 7 February 2018 at 15:16:46 UTC, Ralph Doncaster 
wrote:
On Wednesday, 7 February 2018 at 15:10:36 UTC, Ralph Doncaster 
wrote:
On Wednesday, 7 February 2018 at 08:05:46 UTC, Nicholas 
Wilson wrote:

For OpenCL I develop and maintain DCompute:
http://code.dlang.org/packages/dcompute
https://github.com/libmir/dcompute

It has a much beautified interface to OpenCL (and is mostly 
consistent with its CUDA interface). You can also write 
kernels directly in D, however this requires that LDC is 
built against my fork of LLVM: 
https://github.com/thewilsonator/llvm


It's still in dev but should be usable. Please let me know 
if you have issues using it.


I saw your library before, but it looked like it is ONLY for 
native D on GPUs.  I looked at it again, and don't see any 
documentation or example showing that it works with standard 
OpenCL kernels written in C.


Yeah, it's a wrapper for OpenCL, so as long as the names and 
signatures of the symbols match it should work.


OK, maybe I'll take a closer look.

p.s. since you seem to be a green team guy, you might not know 
that llvm optimization sucks on AMD.  I use -legacy when 
building my kernels to get the good old compiler.


"green team guy"?

Is that with the OpenCL C compiler?


nVidia's logo is green, while AMD's logo is often red.
On Linux with AMDGPU-Pro 17 and up, the driver uses llvm/amdgpu.  
The driver still includes the old (gcc-based?) compiler, which 
can be selected in clBuildProgram with the option "-legacy".





Re: Quora: Why hasn't D started to replace C++?

2018-02-08 Thread Ralph Doncaster via Digitalmars-d
On Wednesday, 7 February 2018 at 22:31:58 UTC, John Gabriele 
wrote:
I'm not sure how long dub has been around, but having an easy 
to use CPAN-alike (online module repo) is HUGE. Dub is great 
for sales. The better dub and the repo gets, the more 
attractive D gets.


I completely agree that the availability of libraries is a huge 
factor.  I almost gave up on D because of the limited amount of 
3rd party libs.

I think just improving the search function would help.
http://code.dlang.org/search?q=keccak
comes up with nothing, so I started porting a sha3/keccak lib 
from C to D.  Then someone pointed out botan has sha3 support, 
which can be found if you search for "crypto":

http://code.dlang.org/search?q=crypto



Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d

On Thursday, 8 February 2018 at 01:53:43 UTC, Walter Bright wrote:

On 2/7/2018 11:29 AM, Ralph Doncaster wrote:
I just did a quick check, and with DMD v2.078.1, the hexString 
template increases code size by ~300 bytes vs the hex literal. 
So yet one more reason to prefer the hex literals.


Indeed it does, and that is the result of a poor implementation 
of hexString. I've figured out how to fix that, and hope to 
make a PR for it shortly.


  https://issues.dlang.org/show_bug.cgi?id=18397


While the fix is a huge improvement, it doesn't match the code 
generated by the hex literals.  hexString!"deadbeef" stores the 
null-terminated string in the data section of the object file, 
while x"deadbeef" only stores 4 bytes in the data section.


Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d

On Thursday, 8 February 2018 at 01:27:46 UTC, Seb wrote:

On Thursday, 8 February 2018 at 00:55:28 UTC, Seb wrote:

On Wednesday, 7 February 2018 at 15:41:37 UTC, Seb wrote:
On Wednesday, 7 February 2018 at 15:25:05 UTC, Steven 
Schveighoffer wrote:

[...]


They are deprecated:

https://dlang.org/changelog/pending.html#hexstrings
https://dlang.org/deprecate.html#Hexstring%20literals

Hence, the grammar has been incompletely updated. Since it's 
not yet an error to use them, the spec should state that they 
are deprecated.


Anyhow, you can always go back in time:

https://docarchives.dlang.io/v2.078.0/spec/lex.html#HexString


PR: https://github.com/dlang/dlang.org/pull/2190


... and back online: http://dlang.org/spec/lex.html#hex_strings


I'm impressed.  I think I'll keep using D for at least a little 
while longer.  While it has its warts, I'm attracted to a 
language that has an intelligent group of people working to 
cauterize those warts.


Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d

On Thursday, 8 February 2018 at 00:24:22 UTC, Walter Bright wrote:

On 2/7/2018 8:03 AM, Ralph Doncaster wrote:

As expected,
auto data = cast(ubyte[]) x"deadbeef";
works with -betterC, but
auto data = cast(ubyte[]) hexString!"deadbeef";
does not.



When I tried it:

  import std.conv;
  void test() {
auto data = cast(ubyte[]) hexString!"deadbeef";
  }

with:

  dmd -c -betterC test2.d

it compiled without complaint. Are you doing something 
different? (This is why posting complete examples, not 
snippets, is better. That way I don't have to fill in the 
blanks with guesswork.)


I didn't think it would be that hard to guess I'm trying to make 
an executable.


ralphdoncaster@gl1u:~/code/d$ dmd -betterC hex.d
hex.o: In function 
`_D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa':

hex.d:(.text._D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa[_D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa]+0x2e):
 undefined reference to `_D11TypeInfo_Aa6__initZ'
hex.d:(.text._D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa[_D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa]+0x33):
 undefined reference to `_d_arraysetlengthiT'
hex.d:(.text._D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa[_D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa]+0x7c):
 undefined reference to `_D3std5ascii10isHexDigitFNaNbNiNfwZb'
hex.d:(.text._D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa[_D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa]+0x160):
 undefined reference to `_D11TypeInfo_Aa6__initZ'
hex.d:(.text._D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa[_D3std4conv__T10hexStrImplTAyaZQrFNaNbNfMQoZAa]+0x165):
 undefined reference to `_d_arraysetlengthiT'
collect2: error: ld returned 1 exit status
Error: linker exited with status 1
ralphdoncaster@gl1u:~/code/d$ cat hex.d
import std.conv;

extern (C) int main() {
//auto data = cast(ubyte[]) x"deadbeef";
auto data = cast(ubyte[]) hexString!"deadbeef";
return cast(int) data[0];
}

While the string hex literal version works fine:
ralphdoncaster@gl1u:~/code/d$ dmd -betterC hex.d
ralphdoncaster@gl1u:~/code/d$ ./hex
ralphdoncaster@gl1u:~/code/d$ echo $?
222
ralphdoncaster@gl1u:~/code/d$ cat hex.d
//import std.conv;

extern (C) int main() {
auto data = cast(ubyte[]) x"deadbeef";
//auto data = cast(ubyte[]) hexString!"deadbeef";
return cast(int) data[0];
}



Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d

On Wednesday, 7 February 2018 at 16:51:02 UTC, Seb wrote:
On Wednesday, 7 February 2018 at 16:03:36 UTC, Steven 
Schveighoffer wrote:
Seems like the same code you would need to parse the first is 
reusable for the second, no? I don't see why this deprecation 
was necessary, and now we have more library/template baggage.


-Steve


For the same reason octal literals were deprecated years ago:


https://dlang.org/deprecate.html#Octal%20literals

The library solution works just as well, and hex strings are 
one of those rarely used features that add to the steep 
learning curve.


I, like Steve, disagree.
Coming from C/C++ (and some Java), this was really simple to 
understand:

x"deadbeef"
While this took a lot more time to understand:
hexString!"deadbeef"

For hexString, I had to understand that ! is for function 
template instantiation, and I also had to find out what library 
to import.
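
For readers newer to D, the ! simply separates compile-time 
arguments from run-time ones; two small examples (using 
std.conv, per the thread):

import std.conv : hexString, to;

void main()
{
    auto s = hexString!"deadbeef"; // "deadbeef" is a compile-time argument
    auto n = to!int("42");         // int is a compile-time argument
}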




Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d
On Wednesday, 7 February 2018 at 19:25:37 UTC, Ralph Doncaster 
wrote:

On Wednesday, 7 February 2018 at 16:51:02 UTC, Seb wrote:
On Wednesday, 7 February 2018 at 16:03:36 UTC, Steven 
Schveighoffer wrote:
Seems like the same code you would need to parse the first is 
reusable for the second, no? I don't see why this deprecation 
was necessary, and now we have more library/template baggage.


-Steve


For the same reason octal literals were deprecated years ago:


https://dlang.org/deprecate.html#Octal%20literals

The library solution works just as well, and hex strings are 
one of those rarely used features that add to the steep 
learning curve.


I, like Steve, disagree.
Coming from C/C++ (and some Java), this was really simple to 
understand:

x"deadbeef"
While this took a lot more time to understand:
hexString!"deadbeef"

For hexString, I had to understand that ! is for function 
template instantiation, and I also had to find out what library 
to import.


I just did a quick check, and with DMD v2.078.1, the hexString 
template increases code size by ~300 bytes vs the hex literal.  
So yet one more reason to prefer the hex literals.




Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d
On Wednesday, 7 February 2018 at 15:54:05 UTC, Ralph Doncaster 
wrote:
Doesn't that go against the idea of -betterC, or will std.conv 
work with -betterC?


p.s. contrary to what the deprecation notice says, hex strings 
are very often used in crypto/hashing test cases.  Most hash 
specs have example hash strings to verify implementation code.


As expected,
auto data = cast(ubyte[]) x"deadbeef";
works with -betterC, but
auto data = cast(ubyte[]) hexString!"deadbeef";
does not.



Re: missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d

On Wednesday, 7 February 2018 at 15:41:37 UTC, Seb wrote:
On Wednesday, 7 February 2018 at 15:25:05 UTC, Steven 
Schveighoffer wrote:

On 2/7/18 9:59 AM, Ralph Doncaster wrote:

It is mentioned in the literals section, but not documented:
https://dlang.org/spec/lex.html#string_literals

 From reading forum posts I managed to figure out that 
HexStrings are prefixed with an x.  i.e. x"deadbeef"




Good catch! Even the grammar says nothing about what it is, 
except it has HexString as a possible literal.


Can you file an issue? https://issues.dlang.org

-Steve


They are deprecated:

https://dlang.org/changelog/pending.html#hexstrings
https://dlang.org/deprecate.html#Hexstring%20literals

Hence, the grammar has been incompletely updated. Since it's 
not yet an error to use them, the spec should state that they 
are deprecated.


Anyhow, you can always go back in time:

https://docarchives.dlang.io/v2.078.0/spec/lex.html#HexString


Doesn't that go against the idea of -betterC, or will std.conv 
work with -betterC?


p.s. contrary to what the deprecation notice says, hex strings 
are very often used in crypto/hashing test cases.  Most hash 
specs have example hash strings to verify implementation code.





Re: My choice to pick Go over D (and Rust), mostly non-technical

2018-02-07 Thread Ralph Doncaster via Digitalmars-d
On Wednesday, 7 February 2018 at 15:10:36 UTC, Ralph Doncaster 
wrote:
On Wednesday, 7 February 2018 at 08:05:46 UTC, Nicholas Wilson 
wrote:

For OpenCL I develop and maintain DCompute:
http://code.dlang.org/packages/dcompute
https://github.com/libmir/dcompute

It has a much beautified interface to OpenCL (and is mostly 
consistent with its CUDA interface). You can also write 
kernels directly in D, however this requires that LDC is built 
against my fork of LLVM: https://github.com/thewilsonator/llvm


It's still in dev but should be usable. Please let me know if 
you have issues using it.


I saw your library before, but it looked like it is ONLY for 
native D on GPUs.  I looked at it again, and don't see any 
documentation or example showing that it works with standard 
OpenCL kernels written in C.


p.s. since you seem to be a green team guy, you might not know 
that llvm optimization sucks on AMD.  I use -legacy when building 
my kernels to get the good old compiler.


Re: My choice to pick Go over D (and Rust), mostly non-technical

2018-02-07 Thread Ralph Doncaster via Digitalmars-d
On Wednesday, 7 February 2018 at 08:05:46 UTC, Nicholas Wilson 
wrote:
On Tuesday, 6 February 2018 at 20:25:22 UTC, Ralph Doncaster 
wrote:
I, like you, may end up jumping off the ship though.  I've 
done a bit of work with golang before, so maybe I'll take 
another look at it.  The opencl bindings aren't much better, 
but there are ready-made sha3 libs I can use instead of 
porting from C.


For crypto there is also Botan: 
http://code.dlang.org/packages/botan

https://github.com/etcimon/botan


That looks more promising.  Strange that it doesn't show up when 
searching for sha or sha3.

https://code.dlang.org/search?q=sha3


For OpenCL I develop and maintain DCompute:
http://code.dlang.org/packages/dcompute
https://github.com/libmir/dcompute

It has a much beautified interface to OpenCL (and is mostly 
consistent with its CUDA interface). You can also write kernels 
directly in D, however this requires that LDC is built against 
my fork of LLVM: https://github.com/thewilsonator/llvm


It's still in dev but should be usable. Please let me know if 
you have issues using it.


I saw your library before, but it looked like it is ONLY for 
native D on GPUs.  I looked at it again, and don't see any 
documentation or example showing that it works with standard 
OpenCL kernels written in C.




missing HexString documentation

2018-02-07 Thread Ralph Doncaster via Digitalmars-d

It is mentioned in the literals section, but not documented:
https://dlang.org/spec/lex.html#string_literals

From reading forum posts I managed to figure out that HexStrings 
are prefixed with an x.  i.e. x"deadbeef"




Re: more OO way to do hex string to bytes conversion

2018-02-07 Thread Ralph Doncaster via Digitalmars-d-learn
On Tuesday, 6 February 2018 at 18:33:02 UTC, Ralph Doncaster 
wrote:
I've been reading std.conv and std.range, trying to figure out 
a high-level way of converting a hex string to bytes.  The only 
way I've been able to do it is through pointer access:


import std.stdio;
import std.string;
import std.conv;

void main()
{
    immutable char* hex = "deadbeef".toStringz;
    for (auto i = 0; hex[i]; i += 2)
        writeln(to!byte(hex[i]));
}


While it works, I'm wondering if there is a more 
object-oriented way of doing it in D.


After a bunch of searching, I came across hex string literals.  
They are mentioned but not documented as a literal.

https://dlang.org/spec/lex.html#string_literals

Combined with the toHexString function in std.digest, it is easy 
to convert between hex strings and byte arrays.


import std.stdio;
import std.digest;

void main() {
    auto data = cast(ubyte[]) x"deadbeef";
    writeln("data: 0x", toHexString(data));
}

p.s. the cast should probably be to immutable(ubyte)[].  I'm 
guessing that without it, an automatic copy of the data is 
made.
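
A sketch of the variant the p.s. suggests, keeping the data 
immutable (whether this actually avoids a copy is the post's own 
guess, not verified here):

import std.digest;

void main()
{
    immutable(ubyte)[] data = cast(immutable(ubyte)[]) x"deadbeef";
    assert(toHexString(data) == "DEADBEEF"); // uppercase by default
}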


website articles 404

2018-02-07 Thread Ralph Doncaster via Digitalmars-d
None of the forum group descriptions seem to apply to discussions 
about the dlang.org web site, so I'm posting this in general.
Selecting "Articles" from the "Documentation" tab gives a 404 
error.

https://dlang.org/articles.html



Re: My choice to pick Go over D (and Rust), mostly non-technical

2018-02-06 Thread Ralph Doncaster via Digitalmars-d

On Tuesday, 6 February 2018 at 20:55:31 UTC, Adam D. Ruppe wrote:
On Tuesday, 6 February 2018 at 20:25:22 UTC, Ralph Doncaster 
wrote:
The opencl package in dub is a crude wrapper around the 
original C API.  I couldn't find any sha lib, so I've started 
porting a reference sha3 implementation from C.


Don't port libraries like that, just call them directly. 
Porting crypto stuff is just asking for bugs and there's fairly 
little benefit over just calling them.


Is there an automatic way to make D wrappers for all the C 
function calls?


There's also the problem that the test code for the C/C++ 
libraries would have to be wrapped up into the library or ported 
to D.


Although I'm new to D, I do know crypto quite well, and 
especially sha3/keccak.  One reason I considered porting was to 
see if dmd outputs better code than gcc.  On x86_64, the xmm 
registers have enough room for the entire 1600-bit keccak state 
(16 registers × 128 bits = 2048 bits), but gcc 5.4 will still 
use RAM for the state.  It stays in L1, but keeping all the 
state in registers should be much faster, since it would avoid 
a lot of mov instructions loading parts of the state into 
registers.


Re: My choice to pick Go over D (and Rust), mostly non-technical

2018-02-06 Thread Ralph Doncaster via Digitalmars-d

On Friday, 2 February 2018 at 15:06:35 UTC, Benny wrote:
I am sure there will be lots of opinions regarding this post, 
but it suffices to say that my decision to go with Go (no pun 
intended) is final. I hope this final post gives some indication 
of the issues that have plagued my decision process.


Thanks for the detailed post.  I'm an old C/C++ guy (got started 
with C++ back in the cfront days), and I have been kicking the 
tires on D recently.  The poor state of libraries is something 
that may push me away from D as well.  In my case, I need opencl 
and crypto libs.  The opencl package in dub is a crude wrapper 
around the original C API.  I couldn't find any sha lib, so I've 
started porting a reference sha3 implementation from C.


I, like you, may end up jumping off the ship though.  I've done a 
bit of work with golang before, so maybe I'll take another look 
at it.  The opencl bindings aren't much better, but there are 
ready-made sha3 libs I can use instead of porting from C.





Re: more OO way to do hex string to bytes conversion

2018-02-06 Thread Ralph Doncaster via Digitalmars-d-learn
On Tuesday, 6 February 2018 at 18:33:02 UTC, Ralph Doncaster 
wrote:
I've been reading std.conv and std.range, trying to figure out 
a high-level way of converting a hex string to bytes.  The only 
way I've been able to do it is through pointer access:


import std.stdio;
import std.string;
import std.conv;

void main()
{
    immutable char* hex = "deadbeef".toStringz;
    for (auto i = 0; hex[i]; i += 2)
        writeln(to!byte(hex[i]));
}


Thanks for all the feedback.  I'll have to do some more reading 
about maps.  My initial thought is they don't seem as readable 
as loops.

chunks() is useful, so for now what I'm going with is:

ubyte[] arr;
foreach (b; "deadbeef".chunks(2))
{
    arr ~= b.to!ubyte(16);
}
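
For comparison, here is the map-based equivalent that was 
suggested; it builds the same ubyte[] in one expression (a 
sketch relying on the same to!ubyte(16) conversion used above):

import std.algorithm : map;
import std.array : array;
import std.conv : to;
import std.range : chunks;

void main()
{
    auto arr = "deadbeef".chunks(2)
                         .map!(c => c.to!ubyte(16))
                         .array;
    assert(arr == cast(ubyte[]) [0xde, 0xad, 0xbe, 0xef]);
}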



more OO way to do hex string to bytes conversion

2018-02-06 Thread Ralph Doncaster via Digitalmars-d-learn
I've been reading std.conv and std.range, trying to figure out a 
high-level way of converting a hex string to bytes.  The only way 
I've been able to do it is through pointer access:


import std.stdio;
import std.string;
import std.conv;

void main()
{
    immutable char* hex = "deadbeef".toStringz;
    for (auto i = 0; hex[i]; i += 2)
        writeln(to!byte(hex[i]));
}


While it works, I'm wondering if there is a more object-oriented 
way of doing it in D.


Re: Error: template std.conv.parse cannot deduce function from argument types

2018-02-06 Thread Ralph Doncaster via Digitalmars-d-learn

On Tuesday, 6 February 2018 at 17:47:30 UTC, Adam D. Ruppe wrote:
On Tuesday, 6 February 2018 at 17:33:43 UTC, Ralph Doncaster 
wrote:

I get this error when I try the following code:


parse specifically works with a reference input it can advance. 
From the docs:


"It takes the input by reference. (This means that rvalues - 
such as string literals - are not accepted: use to instead.)

It advances the input to the position following the conversion."

I think the template overloading for parse is getting confused 
because it sees the lvalue r.l as being of type union rather 
than ulong.


The left-hand side of an assignment never plays a role in 
overloading, so you can disregard that. It is just that a 
string literal cannot be advanced, and thus does not work with 
parse.


Thanks!  Got it.


Error: template std.conv.parse cannot deduce function from argument types

2018-02-06 Thread Ralph Doncaster via Digitalmars-d-learn

I get this error when I try the following code:

struct Record {
    union { ubyte[8] bytes; ulong l; }
    uint key;
}

Record r;
r.l = parse!ulong("deadbeef", 16);

However the following works:
string s = "deadbeef";
r.l = parse!ulong(s, 16);

And another way that works:
r.l = "deadbeef".to!ulong(16);

I've been doing C/C++ for over 20 years and recently started 
playing with D.  I think the template overloading for parse is 
getting confused because it sees the lvalue r.l as being of type 
union rather than ulong.  Is this a bug or the way things are 
supposed to work?
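
To summarize the resolution from the replies above, a small 
sketch (using std.conv, per the thread):

import std.conv : parse, to;

void main()
{
    // parse needs an lvalue it can advance past the digits:
    string s = "deadbeef";
    ulong v = parse!ulong(s, 16);      // ok: s is advanced (now empty)
    assert(s.length == 0);

    // to accepts rvalues such as literals:
    ulong w = "deadbeef".to!ulong(16);

    // parse!ulong("deadbeef", 16);    // error: a literal can't be advanced
}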