Re: The Computer Language Benchmarks Game

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/5/2016 7:02 AM, qznc wrote:

Ultimately, my opinion is that the benchmark is outdated and not useful today. I
ignore it whenever anybody cites the benchmarks game for performance measurements.


Yeah, I wouldn't bother with it, either.


Re: Library for serialization of data (with cycles) to JSON and binary

2016-08-06 Thread Neurone via Digitalmars-d-learn

On Saturday, 6 August 2016 at 16:25:48 UTC, Ilya Yaroshenko wrote:

On Saturday, 6 August 2016 at 16:11:03 UTC, Neurone wrote:
Is there a library that can serialize data (which may contain 
cycles) into JSON and a binary format that is portable across 
operating systems?


JSON:   http://code.dlang.org/packages/asdf
Binary: http://code.dlang.org/packages/cerealed


"ASDF is currently only very loosely validating jsons and with 
certain functions even silently and on purpose ignoring failing 
Objects"


Is there a way of turning this off?


Re: Dreams come true: Compiling and running linux apps on windows :)

2016-08-06 Thread Martin Nowak via Digitalmars-d-announce

On Saturday, 6 August 2016 at 17:34:14 UTC, Andre Pany wrote:

The build script is working fine:
curl -fsS https://dlang.org/install.sh | bash -s dmd


Good news; I'm really not that keen on writing a PowerShell script.
What OS does it detect and download?


[Issue 16358] New: the most basic program leaks 88 bytes

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16358

  Issue ID: 16358
   Summary: the most basic program leaks 88 bytes
   Product: D
   Version: D2
  Hardware: x86_64
OS: Linux
Status: NEW
  Severity: normal
  Priority: P1
 Component: druntime
  Assignee: nob...@puremagic.com
  Reporter: b2.t...@gmx.com

a.d:


void main()
{
}

compile & run (dmd v2.071.2-b1):

dmd a.d
valgrind --leak-check=full -v ./a

output:

==10527== HEAP SUMMARY:
==10527== in use at exit: 88 bytes in 2 blocks
==10527==   total heap usage: 27 allocs, 25 frees, 43,904 bytes allocated
==10527== 
==10527== Searching for pointers to 2 not-freed blocks
==10527== Checked 148,376 bytes
==10527== 
==10527== 16 bytes in 1 blocks are definitely lost in loss record 1 of 2
==10527==at 0x4C291A0: malloc (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==10527==by 0x446CCB: _D2rt5tlsgc4initFZPv (in /tmp/a)
==10527==by 0x441CC8: thread_attachThis (in /tmp/a)
==10527==by 0x441B8D: thread_init (in /tmp/a)
==10527==by 0x436D76: gc_init (in /tmp/a)
==10527==by 0x4296DE: rt_init (in /tmp/a)
==10527==by 0x426369: _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv (in
/tmp/a)
==10527==by 0x426314:
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv (in /tmp/a)
==10527==by 0x426285: _d_run_main (in /tmp/a)
==10527==by 0x425EBD: main (in /tmp/a)
==10527== 
==10527== 72 bytes in 1 blocks are definitely lost in loss record 2 of 2
==10527==at 0x4C2B290: calloc (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==10527==by 0x446AAC:
_D2rt8monitor_13ensureMonitorFNbC6ObjectZPOS2rt8monitor_7Monitor (in /tmp/a)
==10527==by 0x446A02: _d_monitorenter (in /tmp/a)
==10527==by 0x4418C0: _D4core6thread6Thread8isDaemonMFNdZb (in /tmp/a)
==10527==by 0x42E8C7: thread_joinAll (in /tmp/a)
==10527==by 0x429779: rt_term (in /tmp/a)
==10527==by 0x426394: _D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZv (in
/tmp/a)
==10527==by 0x426314:
_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ7tryExecMFMDFZvZv (in /tmp/a)
==10527==by 0x426285: _d_run_main (in /tmp/a)
==10527==by 0x425EBD: main (in /tmp/a)
==10527== 
==10527== LEAK SUMMARY:
==10527==definitely lost: 88 bytes in 2 blocks
==10527==indirectly lost: 0 bytes in 0 blocks
==10527==  possibly lost: 0 bytes in 0 blocks
==10527==still reachable: 0 bytes in 0 blocks
==10527== suppressed: 0 bytes in 0 blocks
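
If these turn out to be known one-time runtime allocations that are simply never
freed before exit, a valgrind suppressions file can hide them from CI runs. A
rough sketch built from the two traces above (the suppression names are made up):

---
{
   druntime-tlsgc-init
   Memcheck:Leak
   fun:malloc
   fun:_D2rt5tlsgc4initFZPv
}
{
   druntime-monitor-at-exit
   Memcheck:Leak
   fun:calloc
   fun:_D2rt8monitor_13ensureMonitorFNbC6ObjectZPOS2rt8monitor_7Monitor
}
---

Usage: valgrind --leak-check=full --suppressions=druntime.supp ./a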

--


Re: Tracking memory usage

2016-08-06 Thread Basile B. via Digitalmars-d-learn

On Sunday, 7 August 2016 at 00:28:40 UTC, Alfred Pincher wrote:
Hi, I have written some code that tracks all memory allocations 
and deallocations when using my own memory interface. It is not 
GC-based.


It reports the results of each allocation when the memory 
balance for the pointer allocated is non-zero. It gives the 
stack trace of the allocations and deallocations, if they exist, 
along with statistics collected from the allocators and 
deallocators.


It is very handy, but I rely on hacks to accomplish the task. 
Does D have any solution for inspecting non-GC and/or GC memory? 
With my code, if I miss a free, it is reported, which is a very 
nice feature. I hope D has something similar?


http://www.cprogramming.com/debugging/valgrind.html


Tracking memory usage

2016-08-06 Thread Alfred Pincher via Digitalmars-d-learn
Hi, I have written some code that tracks all memory allocations 
and deallocations when using my own memory interface. It is not 
GC-based.


It reports the results of each allocation when the memory balance 
for the pointer allocated is non-zero. It gives the stack trace 
of the allocations and deallocations, if they exist, along with 
statistics collected from the allocators and deallocators.


It is very handy, but I rely on hacks to accomplish the task. Does 
D have any solution for inspecting non-GC and/or GC memory? With 
my code, if I miss a free, it is reported, which is a very nice 
feature. I hope D has something similar?







[Issue 16357] New: cast(T[])[x] casts x to T instead of [x] to T[]

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16357

  Issue ID: 16357
   Summary: cast(T[])[x] casts x to T instead of [x] to T[]
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Keywords: wrong-code
  Severity: normal
  Priority: P3
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: thecybersha...@gmail.com

// test.d //
void main()
{
uint x;
ubyte[] data = cast(ubyte[])[x];
assert(data.length == 4);
}


This is surprising, because if the expression is split up into:

auto arr = [x];
ubyte[] data = cast(ubyte[])arr;

then the result is as expected (an array with 4 elements).

--


Re: Battle-plan for CTFE

2016-08-06 Thread Stefan Koch via Digitalmars-d-announce

On Saturday, 6 August 2016 at 19:07:10 UTC, Rory McGuire wrote:


Hi Stefan,

Are you saying we can play around with ascii string 
slicing/appending already?


No, not now, but very soon. I want to have _basic_ utf8 support 
before I am comfortable with enabling string operations.


The gist with the current state of CTFE support seems to have 
last been changed 21 days ago? Am I reading that right? If so, 
where could I find your current tests, if I wanted to run them 
on my machine?


That could be right.
Many features are half-implemented at the moment and will fail in 
random and exciting ways.

Therefore it is too soon to publish code testing those.

If you want particular features, the best way is to contact me.



SortedRange.lowerBound from FrontTransversal

2016-08-06 Thread Alex via Digitalmars-d-learn

Hi all... a technical question from my side...
why does the last line of the following give the error below?

import std.stdio;
import std.range;
import std.algorithm;

void main()
{
    size_t[][] darr;
    darr.length = 2;
    darr[0] = [0, 1, 2, 3];
    darr[1] = [4, 5, 6];
    auto fT = frontTransversal(darr);
    assert(equal(fT, [ 0, 4 ][]));

    auto heads = assumeSorted!"a <= b"(fT);
    writeln(heads.lowerBound(3)); //!(SearchPolicy.gallop)
}

The error is:
Error: template 
std.range.SortedRange!(FrontTransversal!(ulong[][], 
cast(TransverseOptions)0), "a <= b").SortedRange.lowerBound 
cannot deduce function from argument types !()(int), candidates 
are:
package.d(7807,10): 
std.range.SortedRange!(FrontTransversal!(ulong[][], 
cast(TransverseOptions)0), "a <= 
b").SortedRange.lowerBound(SearchPolicy sp = 
SearchPolicy.binarySearch, V)(V value) if 
(isTwoWayCompatible!(predFun, ElementType!Range, V) && 
hasSlicing!Range)


I also tried "assumeNotJagged" for the FrontTransversal; it 
didn't work either, beyond the fact that assumeNotJagged is not 
of interest to me...
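
(The constraint shown in the error message requires hasSlicing!Range, so 
presumably lowerBound needs a sliceable random-access range, which the 
jagged frontTransversal doesn't provide. A sketch of a copy-based 
workaround that seems to compile, if materialising the heads into an 
array is acceptable:)

---
import std.algorithm : equal;
import std.array : array;
import std.range : assumeSorted, frontTransversal;
import std.stdio : writeln;

void main()
{
    size_t[][] darr = [[0, 1, 2, 3], [4, 5, 6]];
    auto fT = frontTransversal(darr);
    assert(equal(fT, [0, 4]));

    // copying gives lowerBound the random access and slicing
    // its constraint asks for
    auto heads = assumeSorted!"a <= b"(fT.array);
    writeln(heads.lowerBound(3)); // [0]
}
---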




Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 3:14 PM, David Nadlinger wrote:

On Saturday, 6 August 2016 at 21:56:06 UTC, Walter Bright wrote:

Let me rephrase the question - how does fusing them alter the result?


There is just one rounding operation instead of two.


Makes sense.



Of course, if floating point values are strictly defined as having only a
minimum precision, then folding away the rounding after the multiplication is
always legal.


Yup.

So it does make sense that allowing fused operations would be equivalent to 
having no maximum precision.




Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread David Nadlinger via Digitalmars-d

On Saturday, 6 August 2016 at 21:56:06 UTC, Walter Bright wrote:
Let me rephrase the question - how does fusing them alter the 
result?


There is just one rounding operation instead of two.

Of course, if floating point values are strictly defined as 
having only a minimum precision, then folding away the rounding 
after the multiplication is always legal.


 — David


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 2:12 PM, David Nadlinger wrote:

This is true – and precisely the reason why it is actually defined
(ldc.attributes) as

---
alias fastmath = AliasSeq!(llvmAttr("unsafe-fp-math", "true"),
llvmFastMathFlag("fast"));
---

This way, users can actually combine different optimisations in a more tasteful
manner as appropriate for their particular application.

Experience has shown that people – even those intimately familiar with FP
semantics – expect a catch-all kitchen-sink switch for all natural optimisations
(natural when equating FP values with real numbers). This is why the shorthand
exists.


I didn't know that, thanks for the explanation. But the same can be done for 
pragmas, as the second argument isn't just true|false, it's an expression.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 1:06 PM, Ilya Yaroshenko wrote:

Some applications require exactly the same results across different architectures
(probably because of business requirements). So this optimization is turned off by
default in LDC, for example.


Let me rephrase the question - how does fusing them alter the result?


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread David Nadlinger via Digitalmars-d

On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright wrote:
The LDC fastmath bothers me a lot. It throws away proper NaN 
and infinity handling, and throws away precision by allowing 
reciprocal and algebraic transformations.


This is true – and precisely the reason why it is actually 
defined (ldc.attributes) as


---
alias fastmath = AliasSeq!(llvmAttr("unsafe-fp-math", "true"),
                           llvmFastMathFlag("fast"));
---

This way, users can actually combine different optimisations in a 
more tasteful manner as appropriate for their particular 
application.


Experience has shown that people – even those intimately familiar 
with FP semantics – expect a catch-all kitchen-sink switch for 
all natural optimisations (natural when equating FP values with 
real numbers). This is why the shorthand exists.


 — David


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread David Nadlinger via Digitalmars-d

On Saturday, 6 August 2016 at 12:48:26 UTC, Iain Buclaw wrote:
There are compiler switches for that.  Maybe there should be 
one pragma to tweak these compiler switches on a per-function 
basis, rather than separately named pragmas.


This might be a solution for inherently compiler-specific 
settings (although for LDC we would probably go for "type-safe" 
UDAs/pragmas instead of parsing faux command-line strings).


Floating point transformation semantics aren't compiler-specific, 
though. The corresponding options are used commonly enough in 
certain kinds of code that it doesn't seem prudent to require 
users to resort to compiler-specific ways of expressing them.


 — David


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread David Nadlinger via Digitalmars-d

On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:
No pragmas tied to a specific architecture should be allowed in 
the language spec, please.


I wholeheartedly agree. However, it's not like FP optimisation 
pragmas would be specific to any particular architecture. They 
just describe classes of transformations that are allowed on top 
of the standard semantics.


For example, whether transforming `a + (b * c)` into a single 
operation is allowed is not a question of the target architecture 
at all, but rather whether the implicit rounding after evaluating 
(b * c) can be skipped or not. While this in turn of course 
enables the compiler to use FMA instructions on x86/AVX, 
ARM/NEON, PPC, …, it is not architecture-specific at all on a 
conceptual level.
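
A rough sketch of the effect in code, where a cast through real stands in 
for the fused operation (so it only shows a difference on targets where 
real is wider than double):

---
import std.stdio;

void main()
{
    // a * b == 1 - 2^-54, which is not representable in a double:
    // rounding after the multiply snaps it to exactly 1.0.
    double a = 1.0 + 0x1p-27;
    double b = 1.0 - 0x1p-27;
    double c = -1.0;

    double roundedTwice = a * b + c;                          // 0
    double roundedOnce  = cast(double)(cast(real) a * b + c); // -2^-54

    writefln("rounded after the multiply: %a", roundedTwice);
    writefln("single rounding at the end: %a", roundedOnce);
}
---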


 — David


Re: Autotesting dub packages with dmd nightly

2016-08-06 Thread Seb via Digitalmars-d-announce
On Saturday, 6 August 2016 at 19:06:34 UTC, Sebastiaan Koppe 
wrote:
- code.dlang.org has an API but doesn't provide an endpoint to 
retrieve all packages/versions. Now I just scrape the site 
instead (thanks Adam for your dom implementation).


Why don't you make a PR to the dub registry 
(https://github.com/dlang/dub-registry) to get such an endpoint? 
Or at least open an issue ;-)


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Ilya Yaroshenko via Digitalmars-d

On Saturday, 6 August 2016 at 19:51:11 UTC, Walter Bright wrote:

On 8/6/2016 2:48 AM, Ilya Yaroshenko wrote:

I don't know what the point of fusedMath is.
It allows a compiler to replace two arithmetic operations with 
a single composed one; see the AVX2 (FMA3 for Intel and FMA4 
for AMD) instruction set.


I understand that, I just don't understand why that wouldn't be 
done anyway.


Some applications require exactly the same results across 
different architectures (probably because of business 
requirements). So this optimization is turned off by default in 
LDC, for example.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 2:48 AM, Ilya Yaroshenko wrote:

I don't know what the point of fusedMath is.

It allows a compiler to replace two arithmetic operations with a single composed
one; see the AVX2 (FMA3 for Intel and FMA4 for AMD) instruction set.


I understand that, I just don't understand why that wouldn't be done anyway.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 3:02 AM, Iain Buclaw via Digitalmars-d wrote:

No pragmas tied to a specific architecture should be allowed in the
language spec, please.



A good point. On the other hand, a list of them would be nice so implementations 
don't step on each other.




Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 5:09 AM, Johannes Pfau wrote:

I think this restriction is also quite arbitrary.


You're right that there are gray areas, but the distinction is not arbitrary.

For example, mangling does not affect the interface. It affects the name.

Using an attribute has more downsides, as it affects the whole function rather 
than just part of it, like a pragma would.




Re: Autotesting dub packages with dmd nightly

2016-08-06 Thread Seb via Digitalmars-d-announce
On Saturday, 6 August 2016 at 19:06:34 UTC, Sebastiaan Koppe 
wrote:
I have just finished a first iteration of dubster, a test 
runner that runs `dub test` on each package for each dmd 
release.


see https://github.com/skoppe/dubster

Please provide feedback as it will determine the direction/life 
of this tester.


I am planning on adding a web ui/api next to look around in the 
data.


That is excellent news!
Some random ideas:

1) Send the packages a notification about the build error (e.g. a 
GitHub comment) - this should probably be tweaked a bit so that it 
doesn't spam too often for still-broken packages
2) Allow easy, manual builds of special branches for the core 
team. For example, let's say Walter develops the new scoped 
pointers feature (https://github.com/dlang/DIPs/pull/24); then it 
would be great to know how many packages would break by pulling 
in the branch (in comparison to the last release or current 
nightly). A similar "breakage by shipping" test might be very 
interesting for critical changes to druntime or phobos too.

3) Once you have the API
a) (try to) get a shield badge (-> http://shields.io/)
b) Make the data available to the dub-registry (-> 
https://github.com/dlang/dub-registry)
4) Assess the quality of the unittests. Probably the easiest is 
`dub test -b unittest-cov` and then summing up the total coverage 
of all generated .lst files (see the sketch after this list). 
Running with coverage might increase your build times, though I 
would argue that it's worth it ;-)
5) Log your daily "broken" statistics - could be a good indicator 
of whether your hard work gets acknowledged.
6) Regarding linker errors - I can only redirect you to the open 
DUB issue (https://github.com/dlang/dub/issues/852) and DEP 5 
(https://github.com/dlang/dub/wiki/DEP5).
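
Regarding 4), a rough sketch of summing up the .lst files (it just averages 
the "<file> is NN% covered" line that dmd -cov writes at the end of each 
.lst file, and assumes the .lst files end up in the working directory):

---
import std.algorithm : filter, map, sum;
import std.array : array;
import std.conv : to;
import std.file : dirEntries, readText, SpanMode;
import std.regex : matchFirst, regex;
import std.stdio : writefln;

void main()
{
    auto re = regex(`is ([0-9]+)% covered`);

    auto covered = dirEntries(".", "*.lst", SpanMode.shallow)
        .map!(e => matchFirst(readText(e.name), re))
        .filter!(m => !m.empty)       // skip modules reported as "has no code"
        .map!(m => m[1].to!double)
        .array;

    if (covered.length)
        writefln("average coverage over %s files: %.1f%%",
                 covered.length, covered.sum / covered.length);
    else
        writefln("no coverage summaries found");
}
---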


Re: Battle-plan for CTFE

2016-08-06 Thread Rory McGuire via Digitalmars-d-announce
On 06 Aug 2016 16:30, "Stefan Koch via Digitalmars-d-announce" <
digitalmars-d-announce@puremagic.com> wrote:
>
> Time for an update.
> (ASCII)-Strings work reasonably well.
>
> I am now working on supporting general slicing and appending.
> The effort on function calls is also still ongoing.
>
> I added a switch to my version of dmd which allows toggling the ctfe
> engine.
> So now I can compare apples to apples when posting perf data.
>
> A nice weekend to all of you.

Hi Stefan,

Are you saying we can play around with ascii string slicing/appending
already?

The gist with the current state of CTFE support seems to have last been
changed 21 days ago? Am I reading that right? If so, where could I find your
current tests, if I wanted to run them on my machine?

Cheers,
Rory


Re: Library for serialization of data (with cycles) to JSON and binary

2016-08-06 Thread Jacob Carlborg via Digitalmars-d-learn

On 06/08/16 18:11, Neurone wrote:

Is there a library that can serialize data (which may contain cycles)
into JSON and a binary format that is portable across operating systems?


XML: http://code.dlang.org/packages/orange

--
/Jacob Carlborg


Re: Dreams come true: Compiling and running linux apps on windows :)

2016-08-06 Thread Andre Pany via Digitalmars-d-announce
On Saturday, 6 August 2016 at 17:48:43 UTC, Rattle Weird Hole 
wrote:

what are concrete applications?


For me it is the possibility to develop applications for the 
Amazon Web Services cloud without leaving my Windows system. I am 
used to Windows, but now I also have the possibility to develop 
the needed Linux artifacts directly, without a VM or secondary OS 
on my machine.


Kind regards
André


[OT] Curl pipe

2016-08-06 Thread cym13 via Digitalmars-d-learn

On Saturday, 6 August 2016 at 17:18:51 UTC, Andre Pany wrote:

Hi,

I play around with the new windows 10 feature to run a linux
sub system on windows.

-> Installing dmd is working fine with the command
curl -fsS https://dlang.org/install.sh | bash -s dmd


Just a side note, but it's a really bad idea to pipe curl into 
bash, as it opens the door to running unknown remote code. Even 
checking what curl returns beforehand isn't enough: a piped curl 
is distinguishable server-side, so a different script could be 
served dynamically.


(Ok, with dlang.org there shouldn't be much problem but still)


[Issue 16349] better curl retry for install.sh script

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16349

greensunn...@gmail.com changed:

   What|Removed |Added

 CC||greensunn...@gmail.com

--- Comment #5 from greensunn...@gmail.com ---
> Our binaries are already hosted on S3 (that's where Github Releases come from 
> as well).

Oh, but for Github Releases traffic is _free_, whereas if I am not mistaken you
have to pay quite a lot to AWS. A simple estimate goes like this:

25 (size of release in MB, depends of course) * 10_000 (number of monthly
downloads, I hope it's higher, see below)  / 1_000 (convert to GB) * 0.09
(transfer costs to internet, per GB)

and thus yields:

10K monthly downloads:  25 * 10_000 / 1_000 * 0.09 = 22.5$
30K monthly downloads:  67.5$
50K monthly downloads:  112.5$
100K monthly downloads: 225$

According to http://erdani.com/d/downloads.daily.png, there are currently
about 1,400 downloads per day (= 42K/month), which means it's about 1.1K$ a
year.
I estimate the bill to be 30-40% less, because the Linux archives are better
compressed and it's mostly those that are constantly downloaded for CI. It's
still quite a lot of money (if I am not mistaken with my estimate).
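
The estimate above as a tiny script, in case anyone wants to plug in other
numbers (the 25 MB size and 0.09 $/GB figure are the assumptions from this
comment, not measured values):

---
import std.stdio;

void main()
{
    enum releaseMB = 25.0;  // assumed size of one release download, in MB
    enum usdPerGB  = 0.09;  // assumed S3 transfer-out cost per GB

    foreach (monthly; [10_000, 30_000, 50_000, 100_000])
        writefln("%7s downloads/month -> %5.1f $/month",
                 monthly, monthly * releaseMB / 1_000 * usdPerGB);
}
---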

--


Re: Dreams come true: Compiling and running linux apps on windows :)

2016-08-06 Thread Rattle Weird Hole via Digitalmars-d-announce

On Saturday, 6 August 2016 at 17:34:14 UTC, Andre Pany wrote:

Hi,

there is a new feature in the recent Windows 10 update.
You can now compile and run your Linux apps (console only) on 
Windows.


The build script is working fine:
curl -fsS https://dlang.org/install.sh | bash -s dmd

The only thing you need is to install the build-essential 
package

sudo apt-get install build-essential


A simple hello world application is running fine.
Network functionality not tested so far.

Kind regards
André


what are concrete applications?



Re: Windows 10 Linux Bash Shell: Compiling linux app on windows

2016-08-06 Thread ZombineDev via Digitalmars-d-learn

On Saturday, 6 August 2016 at 17:18:51 UTC, Andre Pany wrote:

Hi,

I play around with the new windows 10 feature to run a linux
sub system on windows.

-> Installing dmd is working fine with the command
curl -fsS https://dlang.org/install.sh | bash -s dmd

-> Activating dmd is also working
source ~/dlang/dmd-2.071.1/activate

-> dmd can be started and shows correct version
dmd --version.

-> Compiling a hello world works, linking fails
"dmd test -c" creates a test.o file

but "dmd test" fails with error:
cc: no such file or directory
--- errorlevel 255

Do you have an idea how to fix the linker issue?

Kind regards
André


Just like on a regular Linux distro, you need to have standard 
development tools installed, such as a C compiler toolchain. Since 
the install script on dlang's homepage is not a .deb package, it 
doesn't verify that those dependencies are fulfilled.


I think running "sudo apt-get install build-essential" in 
bash.exe should do it (it installs a C compiler and a "cc" alias 
to it).


The other option is to download the Ubuntu/Debian x86_64 .deb 
package and run "sudo dpkg -i" on it: 
http://downloads.dlang.org/releases/2.x/2.071.1/dmd_2.071.1-0_amd64.deb
It will report the missing dependencies, so you can install them afterwards, as described here: http://superuser.com/questions/196864/how-to-install-local-deb-packages-with-apt-get




Dreams come true: Compiling and running linux apps on windows :)

2016-08-06 Thread Andre Pany via Digitalmars-d-announce

Hi,

there is a new feature in the recent Windows 10 update.
You can now compile and run your Linux apps (console only) on 
Windows.


The build script is working fine:
curl -fsS https://dlang.org/install.sh | bash -s dmd

The only thing you need is to install the build-essential package

sudo apt-get install build-essential


A simple hello world application is running fine.
Network functionality not tested so far.

Kind regards
André





Re: Windows 10 Linux Bash Shell: Compiling linux app on windows

2016-08-06 Thread Rattle Weird Hole via Digitalmars-d-learn

On Saturday, 6 August 2016 at 17:18:51 UTC, Andre Pany wrote:

Hi,

I play around with the new windows 10 feature to run a linux
sub system on windows.

-> Installing dmd is working fine with the command
curl -fsS https://dlang.org/install.sh | bash -s dmd

-> Activating dmd is also working
source ~/dlang/dmd-2.071.1/activate

-> dmd can be started and shows correct version
dmd --version.

-> Compiling a hello world works, linking fails
"dmd test -c" creates a test.o file

but "dmd test" fails with error:
cc: no such file or directory
--- errorlevel 255

Do you have an idea how to fix the linker issue?

Kind regards
André


ld must be found in the environment ?


Re: Windows 10 Linux Bash Shell: Compiling linux app on windows

2016-08-06 Thread Andre Pany via Digitalmars-d-learn
On Saturday, 6 August 2016 at 17:26:05 UTC, Rattle Weird Hole 
wrote:

ld must be found in the environment ?


Yes, ld was missing; after installing build-essential, dmd runs 
fine:

sudo apt-get install build-essential

Kind regards
André


Windows 10 Linux Bash Shell: Compiling linux app on windows

2016-08-06 Thread Andre Pany via Digitalmars-d-learn

Hi,

I play around with the new windows 10 feature to run a linux
sub system on windows.

-> Installing dmd is working fine with the command
curl -fsS https://dlang.org/install.sh | bash -s dmd

-> Activating dmd is also working
source ~/dlang/dmd-2.071.1/activate

-> dmd can be started and shows correct version
dmd --version.

-> Compiling a hello world works, linking fails
"dmd test -c" creates a test.o file

but "dmd test" fails with error:
cc: no such file or directory
--- errorlevel 255

Do you have an idea how to fix the linker issue?

Kind regards
André



[Issue 16356] cdouble is broken

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16356

ag0ae...@gmail.com changed:

   What|Removed |Added

   Keywords||wrong-code
 CC||ag0ae...@gmail.com

--


[Issue 16356] cdouble is broken

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16356

--- Comment #2 from Kazuki Komatsu  ---
(In reply to greensunny12 from comment #1)
> I doubt that this is going to be fixed, as builtin complex types are
> rarely used and have been scheduled for deprecation (see
> https://dlang.org/deprecate.html). Btw, @others: why is there no scheduled
> date for complex types?
>
> Do you experience a similar behavior with std.complex?
> (https://dlang.org/phobos/std_complex.html)

I tested std.complex.Complex with the same code, and it shows no strange behavior.
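
For reference, a minimal sketch of what "the same code" with std.complex
presumably looks like (my reconstruction, not the exact test that was run):

---
import std.complex;
import std.stdio;

void main()
{
    auto c = complex(1.0, 1.0);   // Complex!double
    foreach (x; 0 .. 4)
        writeln(c * x);           // prints 0+0i, 1+1i, 2+2i, 3+3i as expected
}
---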

--


[Issue 16356] cdouble is broken

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16356

greensunn...@gmail.com changed:

   What|Removed |Added

 CC||greensunn...@gmail.com

--- Comment #1 from greensunn...@gmail.com ---
I doubt that this is going to be fixed, as builtin complex types are
rarely used and have been scheduled for deprecation (see
https://dlang.org/deprecate.html). Btw, @others: why is there no scheduled date
for complex types?

Do you experience a similar behavior with std.complex?
(https://dlang.org/phobos/std_complex.html)

--


Re: Library for serialization of data (with cycles) to JSON and binary

2016-08-06 Thread Ilya Yaroshenko via Digitalmars-d-learn

On Saturday, 6 August 2016 at 16:11:03 UTC, Neurone wrote:
Is there a library that can serialize data (which may contain 
cycles) into JSON and a binary format that is portable across 
operating systems?


JSON:   http://code.dlang.org/packages/asdf
Binary: http://code.dlang.org/packages/cerealed


Library for serialization of data (with cycles) to JSON and binary

2016-08-06 Thread Neurone via Digitalmars-d-learn
Is there a library that can serialize data (which may contain 
cycles) into JSON and a binary format that is portable across 
operating systems?


[Issue 16356] New: cdouble is broken

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16356

  Issue ID: 16356
   Summary: cdouble is broken
   Product: D
   Version: D2
  Hardware: x86_64
OS: All
Status: NEW
  Severity: critical
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: enjouzensyou.bo...@gmail.com

OK:

import std.stdio;

/*
0+0i
1+1i
2+2i
3+3i
*/
void main()
{
cfloat c = 1f + 1fi;
foreach(x; 0 .. 4)
writeln(c * x);
}


NG:

import std.stdio;

/*
0+1i
1+1i
2+1i
3+1i
*/
void main()
{
cdouble c = 1 + 1i;
foreach(x; 0 .. 4)
writeln(c * x);
}


dmd: 2.071.1

--


[Issue 16349] better curl retry for install.sh script

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16349

--- Comment #4 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/dlang/installer

https://github.com/dlang/installer/commit/2f1ec46aaae444f3b0a10dde18517d7e98145ee7
fix Issue 16349 - better curl retry for install.sh script

- curl's --retry option isn't really helpful b/c no timeouts are
  activated by default
- use a retry loop with increasing sleep times instead
- enable connection (5s) and download (<1KB/s for 30s) timeouts
  (using --max-time is too tricky b/c of the unknown download sizes)

https://github.com/dlang/installer/commit/c777a91b353cc3e2daed2460a43e16420005bbd2
Merge pull request #187 from MartinNowak/fix16349

fix Issue 16349 - better curl retry for install.sh script

--


Re: D safety! New Feature?

2016-08-06 Thread Chris Wright via Digitalmars-d
On Sat, 06 Aug 2016 07:56:29 +0200, ag0aep6g wrote:

> On 08/06/2016 03:38 AM, Chris Wright wrote:
>> Some reflection stuff is a bit inconvenient:
>>
>> class A {
>>   int foo() { return 1; }
>> }
>>
>> void main() {
>>   auto a = new immutable(A);
>>   // This passes:
>>   static assert(is(typeof(a.foo)));
>>   // This doesn't:
>>   static assert(__traits(compiles, () { a.foo; }));
>> }
>>
>> __traits(compiles) is mostly an evil hack, but things like this require
>> its use.
> 
> The two are not equivalent, though. The first one checks the type of the
> method.

Which is a bit awkward because in no other context is it possible to 
mention a raw method. You invoke it or you get a delegate to it.

And if you try to get a delegate explicitly to avoid this, you run into 
https://issues.dlang.org/show_bug.cgi?id=1983 .

It's also a bit awkward for a method to be reported to exist with no way 
to call it. While you can do the same with private methods, you get a 
warning:

a.d(7): Deprecation: b.B.priv is not visible from module a

Which implies that this will become illegal at some point and fail to 
compile.


Re: Battle-plan for CTFE

2016-08-06 Thread Stefan Koch via Digitalmars-d-announce

Time for an update.
(ASCII)-Strings work reasonably well.

I am now working on supporting general slicing and appending.
The effort on function calls is also still ongoing.

I added a switch to my version of dmd which allows toggling the 
CTFE engine.

So now I can compare apples to apples when posting perf data.

A nice weekend to all of you.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Iain Buclaw via Digitalmars-d
On 6 August 2016 at 16:11, Patrick Schluter via Digitalmars-d
 wrote:
> On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:
>>
>> On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d
>>  wrote:
>>>
>>> On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright wrote:


>>
>> No pragmas tied to a specific architecture should be allowed in the
>> language spec, please.
>
>
> Hmmm, that's the whole point of pragmas (at least in C) to specify
> implementation specific stuff outside of the language specs. If it's in the
> language specs it should be done with language specific mechanisms.

https://dlang.org/spec/pragma.html#predefined-pragmas

"""
All implementations must support these, even if by just ignoring them.
...
Vendor specific pragma Identifiers can be defined if they are prefixed
by the vendor's trademarked name, in a similar manner to version
identifiers.
"""

So all added pragmas that have no vendor prefix must be treated as
part of the language in order to conform with the specs.


Re: Recommended procedure to upgrade DMD installation

2016-08-06 Thread A D dev via Digitalmars-d

On Saturday, 6 August 2016 at 01:22:51 UTC, Mike Parker wrote:


DMD ships with the OPTLINK linker and uses it by default.



You generally don't need to worry about calling them directly


Got it, thanks.



Re: The Origins of the D Cookbook

2016-08-06 Thread A D dev via Digitalmars-d-announce

On Saturday, 6 August 2016 at 01:01:59 UTC, Mike Parker wrote:
There are two goals behind the blog: to market D to the world 
at large and to let users know what's going on in the

...
posts in the coming weeks. So the acceptance rate so far is 
100% :)


Thanks, H Loom and Mike. Fair enough.





Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Patrick Schluter via Digitalmars-d

On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:
On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d 
 wrote:
On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright 
wrote:




No pragmas tied to a specific architecture should be allowed in 
the language spec, please.


Hmmm, that's the whole point of pragmas (at least in C) to 
specify implementation specific stuff outside of the language 
specs. If it's in the language specs it should be done with 
language specific mechanisms.


Re: Using D in Games and bindings to c++ libraries

2016-08-06 Thread rikki cattermole via Digitalmars-d-learn

On 07/08/2016 1:08 AM, Mike Parker wrote:

On Saturday, 6 August 2016 at 11:18:11 UTC, rikki cattermole wrote:


For 32bit I use a hack of a workaround to make it work recursively.

$ DFLAGS="-m32mscoff" ; dub build

Sadly this is not foolproof or much help.


For clarity, what rikki is getting at here is that DUB does not yet
support compiling to the 32-bit COFF format. The --arch switch for dub
has two options: x86 and x86_64. The former will cause it to call dmd
with the -m32 switch. The only way to use -m32mscoff is to specify
it manually. Doing so in the dflags entry of your dub configuration will
certainly cause errors if you have any dependencies in your
configuration (your source will be compiled with -m32mscoff, while
dependencies are compiled with -m32). I've not tried the environment
variable approach that rikki shows here, so I have no comment on that.

There is an issue in the DUB bug tracker for this [1].

[1] https://github.com/dlang/dub/issues/628


What happens is dub uses DFLAGS as part of the identifier for builds and 
will propagate it to dependencies. As long as you don't mismatch the 
values of DFLAGS, you're good to go.


Re: Using D in Games and bindings to c++ libraries

2016-08-06 Thread Mike Parker via Digitalmars-d-learn
On Saturday, 6 August 2016 at 11:18:11 UTC, rikki cattermole 
wrote:


For 32bit I use a hack of a workaround to make it work 
recursively.


$ DFLAGS="-m32mscoff" ; dub build

Sadly this is not foolproof or much help.


For clarity, what rikki is getting at here is that DUB does not 
yet support compiling to the 32-bit COFF format. The --arch 
switch for dub has two options: x86 and x86_64. The former will 
cause it to call dmd with the -m32 switch. The only way to use 
-m32mscoff is to specify it manually. Doing so in the dflags 
entry of your dub configuration will certainly cause errors if 
you have any dependencies in your configuration (your source will 
be compiled with -m32mscoff, while dependencies are compiled with 
-m32). I've not tried the environment variable approach that 
rikki shows here, so I have no comment on that.


There is an issue in the DUB bug tracker for this [1].

[1] https://github.com/dlang/dub/issues/628


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Iain Buclaw via Digitalmars-d
On 6 August 2016 at 13:30, Ilya Yaroshenko via Digitalmars-d
 wrote:
> On Saturday, 6 August 2016 at 11:10:18 UTC, Iain Buclaw wrote:
>>
>> On 6 August 2016 at 12:07, Ilya Yaroshenko via Digitalmars-d
>>  wrote:
>>>
>>> On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:


 On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d
  wrote:
>
>
> On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright wrote:
>>
>>
>> [...]
>
>
>
>
> OK, then we need a third pragma, `pragma(ieeeRound)`. But
> `pragma(fusedMath)`
> and `pragma(fastMath)` should be present too.
>
>> [...]
>
>
>
>
> It allows a compiler to replace two arithmetic operations with a single
> composed one; see the AVX2 (FMA3 for Intel and FMA4 for AMD) instruction set.



 No pragmas tied to a specific architecture should be allowed in the
 language spec, please.
>>>
>>>
>>>
>>> Then probably Mir will drop all compilers but LDC.
>>> LLVM is tied to the real world, so we can tie D to the real world too. If a
>>> compiler cannot implement an optimization pragma, then this pragma can just
>>> be ignored by the compiler.
>>
>>
>> If you need a function to work with an exclusive instruction set or
>> something as specific as use of composed/fused instructions, then it is
>> common to use an indirect function resolver to choose the most relevant
>> implementation for the system that's running the code (a la @ifunc), then
>> for the targeted fusedMath implementation, do it yourself.
>
>
> What do you mean by "do it yourself"? Write code using FMA GCC intrinsics?
> Why do I need to do something that can be automated by a compiler? The modern
> approach is to give a hint to the compiler instead of writing specialised code
> for different architectures.
>
> It seems you have misunderstood me. I don't want to force the compiler to use
> explicit instruction sets. Instead, I want to give a hint to the compiler
> about what math _transformations_ are allowed. And these hints are
> architecture independent. A compiler may or may not use these hints to
> optimise code.

There are compiler switches for that.  Maybe there should be one
pragma to tweak these compiler switches on a per-function basis,
rather than separately named pragmas.  That way you tell the compiler
what you want, rather than it being part of the language logic to
understand what must be turned on/off internally.

First, assume the language knows nothing about what platform it's
running on, then use that as a basis for suggesting new pragmas that
should be supported everywhere.


[Issue 16349] better curl retry for install.sh script

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16349

Martin Nowak  changed:

   What|Removed |Added

   Keywords||pull

--- Comment #3 from Martin Nowak  ---
Would you want to review this?
https://github.com/dlang/installer/pull/187

--


Re: Linking on MS Windows.

2016-08-06 Thread ciechowoj via Digitalmars-d-learn
On Saturday, 6 August 2016 at 11:58:31 UTC, rikki cattermole 
wrote:
We provide Optlink so that we have support for Windows out of 
the box. Unfortunately since Optlink does not understand COFF, 
we are forced to provide a second command option to force MSVC 
tooling for 32bit usage.


That makes sense.


Re: Linking on MS Windows.

2016-08-06 Thread ciechowoj via Digitalmars-d-learn

On Saturday, 6 August 2016 at 12:06:02 UTC, Kai Nacke wrote:


If you are already using Visual Studio and LLVM/clang then why 
not use ldc? The compiler itself is built with this toolchain...


I'm considering that option. However, as the project I want to 
compile is dstep, I want it to compile smoothly with dmd.


Any ideas about that strange linker error?


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Johannes Pfau via Digitalmars-d
Am Sat, 6 Aug 2016 02:29:50 -0700
schrieb Walter Bright :

> On 8/6/2016 1:21 AM, Ilya Yaroshenko wrote:
> > On Friday, 5 August 2016 at 20:53:42 UTC, Walter Bright wrote:
> >  
> >> I agree that the typical summation algorithm suffers from double
> >> rounding. But that's one algorithm. I would appreciate if you
> >> would review
> >> http://dlang.org/phobos/std_algorithm_iteration.html#sum to ensure
> >> it doesn't have this problem, and if it does, how we can fix it. 
> >
> > Phobos's sum is two different algorithms. Pairwise summation for
> > Random Access Ranges and Kahan summation for Input Ranges. Pairwise
> > summation does not require IEEE rounding, but Kahan summation
> > requires it.
> >
> > The problem with real world example is that it depends on
> > optimisation. For example, if all temporary values are rounded,
> > this is not a problem, and if all temporary values are not rounded
> > this is not a problem too. However if some of them rounded and
> > others are not, than this will break Kahan algorithm.
> >
> > Kahan is the shortest and one of the slowest (compared with KBN,
> > for example) summation algorithms. The true story about Kahan is that
> > we may have it in Phobos, but we can use pairwise summation for
> > Input Ranges without random access, and it will be faster than
> > Kahan. So we don't need Kahan for the current API at all.
> >
> > Mir has both Kahan, which works with 32-bit DMD, and pairwise,
> > which works with input ranges.
> >
> > Kahan, KBN, KB2, and Precise summations always use `real` or
> > `Complex!real` internal values for the 32-bit x86 target. The only
> > problem with Precise summation is that if we need a precise result in
> > double and use real for internal summation, then the last bit will be
> > wrong in 50% of cases.
> >
> > Another good point about Mir's summation algorithms is that they are
> > Output Ranges. This means they can be used effectively to sum
> > multidimensional arrays, for example. Also, the Precise summator may be
> > used to compute the exact sum of distributed data.
> >
> > When we get a decision and solution for rounding problem, I will
> > make PR for std.experimental.numeric.sum.
> >  
> >> I hear you. I'd like to explore ways of solving it. Got any
> >> ideas?  
> >
> > We need to take the overall picture.
> >
> > It is very important to recognise that the D core team is small and the D
> > community is not large enough now to involve a lot of new
> > professionals. This means that the time of the existing engineers is
> > very important for D, and the most important engineer for D is you,
> > Walter.
> >
> > In the same time we need to move forward fast with language changes
> > and druntime changes (GC-less Fibers for example).
> >
> > So, we need to choose tricky options for development. The most
> > important option for D in the science context is to split D
> > Programming Language from DMD in our minds. I am not asking to
> > remove DMD as reference compiler. Instead of that, we can introduce
> > changes in D that can not be optimally implemented in DMD (because
> > you have a lot of more important things to do for D instead of
> > optimisation) but will be awesome for our LLVM-based or GCC-based
> > backends.
> >
> > We need 2 new pragmas with the same syntax as `pragma(inline, xxx)`:
> >
> > 1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add,
> > div-sub operations. 2. `pragma(fastMath)` equivalents to [1]. This
> > pragma can be used to allow extended precision.
> >
> > This should be 2 separate pragmas. The second one may assume the
> > first one.
> >
> > Recent LDC beta has @fastmath attribute for functions, and it is
> > already used in Phobos ndslice.algorithm PR and its Mir's mirror.
> > Attributes are alternative for pragmas, but their syntax should be
> > extended, see [2]
> >
> > The old approach is separate compilation, but it is weird, low
> > level for users, and requires significant efforts for both small
> > and large projects.
> >
> > [1] http://llvm.org/docs/LangRef.html#fast-math-flags
> > [2] https://github.com/ldc-developers/ldc/issues/1669  
> 
> Thanks for your help with this.
> 
> Using attributes for this is a mistake. Attributes affect the
> interface to a function

This is not true for UDAs. LDC and GDC actually implement @attribute
as a UDA. And UDAs used in serialization interfaces, the std.benchmark
proposals, etc. do not affect the interface either.

> not its internal implementation.

It's possible to reflect on the UDAs of the current function, so this
is not true in general:
-
@(40) int foo()
{
mixin("alias thisFunc = " ~ __FUNCTION__ ~ ";");
return __traits(getAttributes, thisFunc)[0];
}
-
https://dpaste.dzfl.pl/aa0615b40adf

I think this restriction is also quite arbitrary. For end users
attributes provide a much nicer syntax than pragmas. Both GDC and LDC
already successfully use UDAs for function specific backend options, 

Re: Linking on MS Windows.

2016-08-06 Thread Kai Nacke via Digitalmars-d-learn

On Friday, 5 August 2016 at 18:28:48 UTC, ciechowoj wrote:
Is the default dmd linker (on MS Windows, OPTLINK) supposed to 
link against static libraries created with Visual Studio?


Specifically I want to link a project compiled on windows with 
dmd against pre-compiled library `libclang.lib` from LLVM 
suite. I'm pretty sure they used Visual Studio to compile the 
library.


If you are already using Visual Studio and LLVM/clang then why 
not use ldc? The compiler itself is built with this toolchain...


Regards,
Kai


Re: Linking on MS Windows.

2016-08-06 Thread rikki cattermole via Digitalmars-d-learn

On 06/08/2016 11:53 PM, ciechowoj wrote:

Another question that is troubling me: why use OPTLINK as the default
for the 32-bit version, if for the 64-bit version the Visual Studio linker is
used anyway?


Optlink does not support 64bit.
For 64bit support we use the MSVC tooling on Windows.

We provide Optlink so that we have support for Windows out of the box. 
Unfortunately since Optlink does not understand COFF, we are forced to 
provide a second command option to force MSVC tooling for 32bit usage.


Re: Linking on MS Windows.

2016-08-06 Thread ciechowoj via Digitalmars-d-learn
I managed to compile both 32 and 64 bit release versions and it 
seems to work fine, however with 64-bit debug version I'm getting 
a strange error:


LINK : fatal error LNK1101: incorrect MSPDB120.DLL version; 
recheck installation of this product


Does anyone know why it is so? I'm compiling with -m64 switch, so 
I suppose the linker from Visual Studio is used by default.


Another question that is troubling me: why use OPTLINK as the 
default for the 32-bit version, if for the 64-bit version the 
Visual Studio linker is used anyway?


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Ilya Yaroshenko via Digitalmars-d

On Saturday, 6 August 2016 at 11:10:18 UTC, Iain Buclaw wrote:
On 6 August 2016 at 12:07, Ilya Yaroshenko via Digitalmars-d 
 wrote:

On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:


On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d 
 wrote:


On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright 
wrote:


[...]




OK, then we need a third pragma, `pragma(ieeeRound)`. But
`pragma(fusedMath)`
and `pragma(fastMath)` should be present too.


[...]




It allows a compiler to replace two arithmetic operations 
with a single composed one; see the AVX2 (FMA3 for Intel and 
FMA4 for AMD) instruction set.



No pragmas tied to a specific architecture should be allowed 
in the language spec, please.



Then probably Mir will drop all compilers but LDC.
LLVM is tied to the real world, so we can tie D to the real 
world too. If a compiler cannot implement an optimization 
pragma, then this pragma can just be ignored by the compiler.


If you need a function to work with an exclusive instruction 
set or something as specific as use of composed/fused 
instructions, then it is common to use an indirect function 
resolver to choose the most relevant implementation for the 
system that's running the code (a la @ifunc), then for the 
targeted fusedMath implementation, do it yourself.


What do you mean by "do it yourself"? Write code using FMA GCC 
intrinsics? Why do I need to do something that can be automated by 
a compiler? The modern approach is to give a hint to the compiler 
instead of writing specialised code for different architectures.


It seems you have misunderstood me. I don't want to force the 
compiler to use explicit instruction sets. Instead, I want to give 
a hint to the compiler about what math _transformations_ are 
allowed. And these hints are architecture independent. A compiler 
may or may not use these hints to optimise code.


[Issue 16349] better curl retry for install.sh script

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16349

--- Comment #2 from Martin Nowak  ---
(In reply to greenify from comment #1)
> It's definitely good if the installation gets more robust, but maybe we can 
> also
> dig into the root cause of the random network failures. I never experienced
> such  constant problems with other providers, do you know anything what
> could cause this? Should we consider to provide a more stable distribution
> way like e.g. Github Releases?

Network failures are normal, everyone has to properly deal w/ them.
Our binaries are already hosted on S3 (that's where Github Releases come from
as well).

--


Re: Using D in Games and bindings to c++ libraries

2016-08-06 Thread rikki cattermole via Digitalmars-d-learn

On 06/08/2016 10:53 PM, Neurone wrote:

On Saturday, 6 August 2016 at 08:04:45 UTC, rikki cattermole wrote:

On 06/08/2016 6:28 PM, Neurone wrote:

On Saturday, 6 August 2016 at 04:01:03 UTC, rikki cattermole wrote:

You can safely forget about RakNet by the looks.

"While I still get some inquiries, as of as of March 13, 2015 I've
decided to stop licensing so I can focus on changing the world through
VR."

So based upon this, what would you like to do? Scratch or another
pre-existing protocol like proto-buffers?


It is a mature library, which was open sourced under the BSD
license when Oculus bought it. That's why I want to use this instead of
reinventing the wheel.


They really need to update its old website then.

I've had a look at the code; it's mostly classes and is very C-style.
You won't have trouble making bindings.

The main steps if you want to give it a go are:
1. Compile via MSVC (either 32 or 64bit is fine); it can be static or
dynamic
2. Compile D with -m32mscoff or -m64 and link against above binary

You will almost definitely want to use 64bit binaries not 32bit if you
use dub.

I know there is a D target for SWIG but I have never really used it.
But I do not believe you need any magical tool to do the binding
creation for you. The headers should convert over nicely as is for you.


Do I use the DMD compiler? Also, would Dub not work with 64-bit
executables? I still need to use 32-bit to support systems that aren't
64-bit yet.


Yes to dmd.

$ dub build --arch=x86_64

For 32bit I use a hack of a workaround to make it work recursively.

$ DFLAGS="-m32mscoff" ; dub build

Sadly this is not foolproof or much help.

The reason why you must use -m32mscoff or -m64 is to use the COFF binary 
format as OMF (which is default for 32bit) will not link against any 
binaries produced by MSVC.


Of course if you can build it as a shared library and get an OMF import 
library you're good to go as well. But just easier to keep everything 
the same.



Also, is it possible to use D like a traditional scripting language? The
use case is adding content to the server (which may contain D scripts)
without restarting it for a full recompile.


I would not attempt this. It's not worth your time or worry. Instead, I 
would use D for infrastructure but Lua for logic that is likely to 
change often.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Iain Buclaw via Digitalmars-d
On 6 August 2016 at 12:07, Ilya Yaroshenko via Digitalmars-d
 wrote:
> On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:
>>
>> On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d
>>  wrote:
>>>
>>> On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright wrote:

 [...]
>>>
>>>
>>>
>>> OK, then we need a third pragma, `pragma(ieeeRound)`. But
>>> `pragma(fusedMath)`
>>> and `pragma(fastMath)` should be present too.
>>>
 [...]
>>>
>>>
>>>
>>> It allows a compiler to replace two arithmetic operations with a single
>>> composed one; see the AVX2 (FMA3 for Intel and FMA4 for AMD) instruction set.
>>
>>
>> No pragmas tied to a specific architecture should be allowed in the
>> language spec, please.
>
>
> Then probably Mir will drop all compilers but LDC.
> LLVM is tied to the real world, so we can tie D to the real world too. If a
> compiler cannot implement an optimization pragma, then this pragma can just be
> ignored by the compiler.

If you need a function to work with an exclusive instruction set or
something as specific as use of composed/fused instructions, then it
is common to use an indirect function resolver to choose the most
relevant implementation for the system that's running the code (a la
@ifunc), then for the targeted fusedMath implementation, do it
yourself.


Re: D safety! New Feature?

2016-08-06 Thread Timon Gehr via Digitalmars-d

On 06.08.2016 07:56, ag0aep6g wrote:


Add parentheses to the typeof one and it fails as expected:

static assert(is(typeof(a.foo()))); /* fails */

Can also do the function literal thing you did in the __traits one:

static assert(is(typeof(() { a.foo; }))); /* fails */


You can, but don't.

class A {
    int foo() { return 1; }
}

void main() {
    auto a = new A;
    static void gotcha(){
        // This passes:
        static assert(is(typeof((){a.foo;})));
        // This doesn't:
        static assert(__traits(compiles,(){a.foo;}));
    }
}



Re: Using D in Games and bindings to c++ libraries

2016-08-06 Thread Neurone via Digitalmars-d-learn
On Saturday, 6 August 2016 at 08:04:45 UTC, rikki cattermole 
wrote:

On 06/08/2016 6:28 PM, Neurone wrote:
On Saturday, 6 August 2016 at 04:01:03 UTC, rikki cattermole 
wrote:

You can safely forget about RakNet by the looks.

"While I still get some inquiries, as of as of March 13, 2015 
I've
decided to stop licensing so I can focus on changing the 
world through

VR."

So based upon this, what would you like to do? Scratch or 
another

pre-existing protocol like proto-buffers?


It is a mature library, which was open sourced under the BSD 
license when Oculus bought it. That's why I want to use this 
instead of reinventing the wheel.


They really need to update its old website then.

I've had a look at the code; it's mostly classes and is very 
C-style.

You won't have trouble making bindings.

The main steps if you want to give it a go are:
1. Compile via MSVC (either 32 or 64bit is fine); it can be static 
or dynamic
2. Compile D with -m32mscoff or -m64 and link against above 
binary


You will almost definitely want to use 64bit binaries not 32bit 
if you use dub.


I know there is a D target for SWIG but I have never really 
used it. But I do not believe you need any magical tool to do 
the binding creation for you. The headers should convert over 
nicely as is for you.


Do I use the DMD compiler? Also, would Dub not work with 64-bit 
executables? I still need to use 32-bit to support systems that 
aren't 64-bit yet.


Also, is it possible to use D like a traditional scripting 
language? The use case is adding content to the server (which may 
contain D scripts) without restarting it for a full recompile.


Re: Indexing with an arbitrary type

2016-08-06 Thread Alex via Digitalmars-d-learn

On Friday, 5 August 2016 at 16:37:14 UTC, Jonathan M Davis wrote:
On Thursday, August 04, 2016 08:13:59 Alex via 
Digitalmars-d-learn wrote:
What I think about is something like this: 
https://dpaste.dzfl.pl/d37cfb8e513d


Okay, you have

enum bool isKey(T) = is(typeof(T.init < T.init) : bool);

template isNullable(T)
{
    enum isNullable = __traits(compiles, T.init.isNull);
}

struct IndexAble
{
    int[] arr;
    this(size_t size)
    {
        arr.length = size;
    }

    ref int opIndex(T)(T ind) if(isKey!T)
    {
        static if(isNullable!T)
            if(ind.isNull) return arr[$]; // or another manually defined state.

        return arr[ind];
    }
}

What happens if I do this?

void main()
{
    IndexAble indexable;
    indexable["hello"];
}

You get a compilation error like

q.d(17): Error: cannot implicitly convert expression (ind) of type string to ulong
q.d(24): Error: template instance q.IndexAble.opIndex!string error instantiating

because the index type of int[] is size_t, so it's only going 
to accept values that are implicitly convertible to size_t, and 
string is not implicitly convertible to size_t.


Yes. You are surely right.

opIndex's template constraint did not catch the fact that 
passing "hello" to opIndex would result in a compilation error. 
string is perfectly comparable, and it's not a valid index type 
for int[] - and comparability is all that opIndex's template 
constraint checks for.


Right. My intention was to find the one check that has to be fulfilled
by all types which could be used as an index...




For opIndex to actually catch that an argument is not going to 
compile if opIndex is instantiated with that type, then its 
template constraint needs to either test that the index type 
will implicitly convert to size_t so that it can be used to 
index the int[],


Which would be too restrictive, as we have associative arrays...

or it needs to test that indexing int[] with the argument will 
compile. So, something like


I see the technical point, but the question I want to answer with the
isKey template is a different one. Maybe a better formulation would be:
"is a type uniquely convertible to size_t".




ref int opIndex(T)(T ind)
if(is(T : size_t))
{...}

or

ref int opIndex(T)(T ind)
if(__traits(compiles, arr[ind]))
{...}

And given that you're dealing explicitly with int[] and not an 
arbitrary type, it really makes more sense to use is(T : 
size_t). If IndexAble were templated on the type for arr, then 
you would need to do the second. But testing for comparability 
is completely insufficient for testing whether the type can be 
used as an index.


I think, if I separate the concerns of "can use sth. as a key" and "how
to use sth. as a key", the isKey template is sufficient for the first
one. Sure, without providing the answer for the second one...


In fact, it's pretty much irrelevant. What matters is whether 
the index type will compile with the code that it's being used 
in inside the template. If you're forwarding the index variable 
to another object's opIndex, then what matters is whether that 
opIndex will compile with the index, and if you're doing 
something else with it, then what matters is that it compiles 
with whatever it is that you're doing with it.


Right. I just wanted to clarify for myself what the minimum requirement
for a "key" is. Meanwhile, I'm close to the view that it has to provide
some conversion to size_t; providing toHash is maybe nothing else...
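
A minimal sketch of that view (illustration only, not code from the
thread): require only some route to size_t, either an implicit
conversion or a toHash member.

enum bool isKeyLike(T) = is(T : size_t) || is(typeof(T.init.toHash()) : size_t);

struct S2
{
    int i, j;
    size_t toHash() const nothrow @safe { return i * j % size_t.max; }
}

static assert(isKeyLike!int);       // implicitly convertible to size_t
static assert(isKeyLike!S2);        // provides toHash
static assert(!isKeyLike!string);   // neither an implicit conversion nor toHash

void main() {}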


I could declare a hash map implementation which did not use 
opCmp but just used opEquals and toHash. Then I could have a 
struct like


struct S
{
int i;
int j;

size_t toHash()
{
return i * j % size_t.max;
}
}

It has an opEquals (the implicit one declared by the compiler). It has
a toHash. It does not have opCmp (and the compiler won't generate one).
So, it's not comparable. But it would work in my hash map type, which
only used toHash and opEquals and not opCmp.


HashMap!(S, string) hm;
hm[S(5, 22)] = "hello world";

And actually, while D's built-in AA's used to use opCmp (and 
thus required it), they don't anymore. So, that code even works 
with the built-in AA's. e.g.


string[S] aa;
aa[S(5, 22)] = "hello world";



Yeah. But either you have perfect hashing (and the absence of opCmp is
the point I'm wondering about), or the hashing is not perfect, and then
using a type as a key relies on handling of collisions.


So, comparability has nothing to do with whether a particular 
type will work as the key for an indexable type. What matters 
is that the index operator be given an argument that will 
compile with it - which usually means a type which is 
implicitly convertible to the type that it uses for its indices 
but in more complex implementations where opIndex might accept 
a variety of types for whatever reason, then what matters is 
whether the type that you want to pass it will compile with it.


At the end - yes, I 

Re: unit-threaded v0.6.26 - advanced unit testing in D with new features

2016-08-06 Thread Nicholas Wilson via Digitalmars-d-announce

On Friday, 5 August 2016 at 15:31:34 UTC, Atila Neves wrote:

https://code.dlang.org/packages/unit-threaded

What's new:
. Mocking support. Classes, interfaces and structs can be 
mocked (see README.md or examples)
. Shrinking support for property-based testing, but only for 
integrals and arrays

. Bug fixes

Enjoy!

Atila


There is a typo in your readme:

int adder(int i, int j) { return i + j; }

@("Test adder") unittest {
    adder(2 + 3).shouldEqual(5);
}

@("Test adder fails", ShouldFail) unittest {
    adder(2 + 3).shouldEqual(7);
}

It should be adder(2, 3), not adder(2 + 3).


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Ilya Yaroshenko via Digitalmars-d

On Saturday, 6 August 2016 at 10:02:25 UTC, Iain Buclaw wrote:
On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d 
 wrote:
On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright 
wrote:

[...]



OK, then we need a third pragma,`pragma(ieeeRound)`. But 
`pragma(fusedMath)`

and `pragma(fastMath)` should be presented too.


[...]



It allows a compiler to replace two arithmetic operations with a single
composed one; see the AVX2 (FMA3 for Intel and FMA4 for AMD)
instruction set.


No pragmas tied to a specific architecture should be allowed in 
the language spec, please.


Then Mir will probably drop all compilers but LDC.
LLVM is tied to the real world, so we can tie D to the real world too.
If a compiler cannot implement an optimization pragma, then the pragma
can just be ignored by that compiler.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Iain Buclaw via Digitalmars-d
On 6 August 2016 at 11:48, Ilya Yaroshenko via Digitalmars-d
 wrote:
> On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright wrote:
>>
>> On 8/6/2016 1:21 AM, Ilya Yaroshenko wrote:
>>>
>>> We need 2 new pragmas with the same syntax as `pragma(inline, xxx)`:
>>>
>>> 1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add, div-sub
>>> operations.
>>> 2. `pragma(fastMath)` equivalents to [1]. This pragma can be used to
>>> allow
>>> extended precision.
>>
>>
>>
>> The LDC fastmath bothers me a lot. It throws away proper NaN and infinity
>> handling, and throws away precision by allowing reciprocal and algebraic
>> transformations. As I've said before, correctness should be first, not
>> speed, and fastmath has nothing to do with this thread.
>
>
> OK, then we need a third pragma,`pragma(ieeeRound)`. But `pragma(fusedMath)`
> and `pragma(fastMath)` should be presented too.
>
>> I don't know what the point of fusedMath is.
>
>
> It allows a compiler to replace two arithmetic operations with a single
> composed one; see the AVX2 (FMA3 for Intel and FMA4 for AMD) instruction set.

No pragmas tied to a specific architecture should be allowed in the
language spec, please.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Ilya Yaroshenko via Digitalmars-d

On Saturday, 6 August 2016 at 09:35:32 UTC, Walter Bright wrote:

On 8/6/2016 1:21 AM, Ilya Yaroshenko wrote:
We need 2 new pragmas with the same syntax as `pragma(inline, 
xxx)`:


1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add, 
div-sub operations.
2. `pragma(fastMath)` is equivalent to [1]. This pragma can be
used to allow extended precision.



The LDC fastmath bothers me a lot. It throws away proper NaN 
and infinity handling, and throws away precision by allowing 
reciprocal and algebraic transformations. As I've said before, 
correctness should be first, not speed, and fastmath has 
nothing to do with this thread.


OK, then we need a third pragma,`pragma(ieeeRound)`. But 
`pragma(fusedMath)` and `pragma(fastMath)` should be presented 
too.



I don't know what the point of fusedMath is.


It allows a compiler to replace two arithmetic operations with a single
composed one; see the AVX2 (FMA3 for Intel and FMA4 for AMD)
instruction set.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 1:21 AM, Ilya Yaroshenko wrote:

We need 2 new pragmas with the same syntax as `pragma(inline, xxx)`:

1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add, div-sub 
operations.
2. `pragma(fastMath)` is equivalent to [1]. This pragma can be used to allow
extended precision.



The LDC fastmath bothers me a lot. It throws away proper NaN and infinity 
handling, and throws away precision by allowing reciprocal and algebraic 
transformations. As I've said before, correctness should be first, not speed, 
and fastmath has nothing to do with this thread.



I don't know what the point of fusedMath is.


[Issue 14855] -cov should ignore assert(0)

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=14855

--- Comment #3 from Jonathan M Davis  ---
(In reply to Walter Bright from comment #2)
> Being concerned about assert(0)'s not being executed is placing too
> much emphasis on the metric.

Perhaps, but if you're trying to make sure that you tested every line that
you're supposed to test, assert(0) is not on the list of lines that should ever
be run. In fact, if the code is correct, it shouldn't even be testable, because
it shouldn't be reachable. So, if the point of the code coverage metric is to
show whether your tests actually run every reachable line of code, then
including assert(0) in the total is incorrect and counterproductive. It's
certainly not the end of the world, but I don't see why there is any value in
counting assert(0) in the coverage, and there's definitely value in not
counting it.
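
For concreteness, a small illustrative example (not from the report) of the
kind of line in question: the assert(0) only exists to mark a path that
correct code never takes, so -cov flagging it as uncovered is pure noise.

int sign(int x)
{
    if (x > 0) return 1;
    if (x < 0) return -1;
    if (x == 0) return 0;
    assert(0); // unreachable; present only because the compiler cannot prove it
}

unittest
{
    // Full branch coverage of sign(), yet -cov would still report the
    // assert(0) line as not executed.
    assert(sign(3) == 1 && sign(-3) == -1 && sign(0) == 0);
}

void main() {}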

--


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Walter Bright via Digitalmars-d

On 8/6/2016 1:21 AM, Ilya Yaroshenko wrote:

On Friday, 5 August 2016 at 20:53:42 UTC, Walter Bright wrote:


I agree that the typical summation algorithm suffers from double rounding. But
that's one algorithm. I would appreciate if you would review
http://dlang.org/phobos/std_algorithm_iteration.html#sum to ensure it doesn't
have this problem, and if it does, how we can fix it.



Phobos's sum is two different algorithms. Pairwise summation for Random Access
Ranges and Kahan summation for Input Ranges. Pairwise summation does not require
IEEE rounding, but Kahan summation requires it.

The problem with the real world example is that it depends on optimisation. For
example, if all temporary values are rounded, this is not a problem, and if all
temporary values are not rounded, this is not a problem either. However, if some
of them are rounded and others are not, then this will break the Kahan algorithm.

Kahan is the shortest and one of the slowest (compared with KBN, for example)
summation algorithms. The true story about Kahan is that we may have it in Phobos,
but we can use pairwise summation for Input Ranges without random access, and it
will be faster than Kahan. So we don't need Kahan for the current API at all.

Mir has both Kahan, which works with 32-bit DMD, and pairwise, which works with
input ranges.

The Kahan, KBN, KB2, and Precise summations always use `real` or `Complex!real`
internal values for the 32-bit x86 target. The only problem with Precise summation
is that if we need a precise result in double and use real for internal summation,
then the last bit will be wrong in 50% of cases.

Another good point about Mir's summation algorithms is that they are Output
Ranges. This means they can be used effectively to sum multidimensional arrays,
for example. Also, the Precise summator may be used to compute the exact sum of
distributed data.

When we get a decision and a solution for the rounding problem, I will make a PR
for std.experimental.numeric.sum.


I hear you. I'd like to explore ways of solving it. Got any ideas?


We need to take the overall picture.

It is very important to recognise that the D core team is small and the D
community is not large enough now to involve a lot of new professionals. This
means that the time of existing engineers is very important for D, and the most
important engineer for D is you, Walter.

At the same time we need to move forward fast with language changes and druntime
changes (GC-less Fibers, for example).

So, we need to choose tricky options for development. The most important option
for D in the science context is to split the D Programming Language from DMD in
our minds. I am not asking to remove DMD as the reference compiler. Instead, we
can introduce changes in D that cannot be optimally implemented in DMD (because
you have a lot of more important things to do for D than optimisation) but that
will be awesome for our LLVM-based or GCC-based backends.

We need 2 new pragmas with the same syntax as `pragma(inline, xxx)`:

1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add, div-sub 
operations.
2. `pragma(fastMath)` is equivalent to [1]. This pragma can be used to allow
extended precision.

This should be 2 separate pragmas. The second one may assume the first one.

The recent LDC beta has a @fastmath attribute for functions, and it is already
used in the Phobos ndslice.algorithm PR and its Mir mirror. Attributes are an
alternative to pragmas, but their syntax should be extended; see [2].

The old approach is separate compilation, but it is weird and low level for
users, and requires significant effort for both small and large projects.

[1] http://llvm.org/docs/LangRef.html#fast-math-flags
[2] https://github.com/ldc-developers/ldc/issues/1669


Thanks for your help with this.

Using attributes for this is a mistake. Attributes affect the interface to a 
function, not its internal implementation. Pragmas are suitable for internal 
implementation things. I also oppose using compiler flags, because they tend to 
be overly global, and the details of an algorithm should not be split between 
the source code and the makefile.




Re: vsprintf or printf variable arguments

2016-08-06 Thread Patrick Schluter via Digitalmars-d-learn

On Friday, 5 August 2016 at 08:32:42 UTC, kink wrote:
On Thursday, 4 August 2016 at 21:03:52 UTC, Mark "J" Twain 
wrote:
How can I construct a va_list for vsprintf when all I have is 
the a list of pointers to the data, without their type info?


A va_list seems to be a packed struct of values and/or 
pointers to the data. While I could construct such a list, 
theoretically, I don't always know when I should store an 
element as a pointer or a value.


e.g., ints, floats, and other value types seems to get stored 
directly in the list, while strings and *other* stuff get 
stored as pointers.


I would be nice if a printf variant listed takes only a 
pointer list to all the values, does anything like this exist?


If not, is there any standardization of what is a value in the 
list and what is a pointer so I can attempt to build the list 
properly?


This has absolutely nothing to do with D as these are C 
functions, so you'd be better off asking this in another forum. 
Anyway, I have no idea why you'd want to construct the va_list 
manually. These vprintf() functions only exist so that other 
variadic functions can forward THEIR varargs - e.g.,


extern(C) void myOldschoolPrintf(in char* format, ...)
{
  import core.stdc.stdarg : va_list, va_start;
  import core.stdc.stdio : vprintf;

  doSomethingSpecial();
  va_list myVarargs;
  va_start(myVarargs, format);
  vprintf(format, myVarargs);
}


Just a side note, C nitpicker that I am: you forgot the va_end after
the call to vprintf(). On a lot of platforms it's not a problem; on
Linux/x86_64 (i.e. a very common platform) it's very likely a memory
leak (as the ABI allows passing parameters in registers even for vararg
functions, these registers have to be saved somewhere to be usable in a
va_list).
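
For reference, the forwarding pattern with the cleanup mentioned above,
as a minimal sketch using druntime's core.stdc bindings:

extern (C) void myLogf(const(char)* format, ...)
{
    import core.stdc.stdarg : va_list, va_start, va_end;
    import core.stdc.stdio : vprintf;

    va_list args;
    va_start(args, format);
    scope (exit) va_end(args); // releases whatever the ABI set up for the va_list
    vprintf(format, args);
}

void main()
{
    myLogf("%s %d\n", "hello".ptr, 42);
}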


Note that va_list is highly platform-dependent, so filling the 
struct manually is a very bad idea.


Yes indeed, and one of the most common platforms is the very 
complex one. It's probably the most common issue when porting C 
programs to Linux/x86_64 for the first time.




Re: unit-threaded v0.6.26 - advanced unit testing in D with new features

2016-08-06 Thread Sebastiaan Koppe via Digitalmars-d-announce

On Saturday, 6 August 2016 at 01:50:15 UTC, Øivind wrote:

I have started using unit_threaded, and love it.


Most of my unittests now run in < 100ms; it is great.

Keep up the good work.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Iain Buclaw via Digitalmars-d
On 4 August 2016 at 23:38, Seb via Digitalmars-d
 wrote:
> On Thursday, 4 August 2016 at 21:13:23 UTC, Iain Buclaw wrote:
>>
>> On 4 August 2016 at 01:00, Seb via Digitalmars-d
>>  wrote:
>>>
>>> To make matters worse std.math yields different results than
>>> compiler/assembly intrinsics - note that in this example import std.math.pow
>>> adds about 1K instructions to the output assembler, whereas llvm_powf boils
>>> down to the assembly powf. Of course the performance of powf is a lot
>>> better, I measured [3] that e.g. std.math.pow takes ~1.5x as long for both
>>> LDC and DMD. Of course if you need to run this very often, this cost isn't
>>> acceptable.
>>>
>>
>> This could be something specific to your architecture.  I get the same
>> result on from all versions of powf, and from GCC builtins too, regardless
>> of optimization tunings.
>
>
> I can reproduce this on DPaste (also x86_64).
>
> https://dpaste.dzfl.pl/c0ab5131b49d
>
> Behavior with a recent LDC build is similar (as annotated with the
> comments).

When testing the math functions, I chose not to compare results to
what C libraries, or CPU instructions return, but rather compared the
results to Wolfram, which I hope I'm correct in saying is a more
reliable and proven source of scientific maths than libm.

As of the time I ported all pure D (not IASM) implementations of math
functions, the results returned from all unittests using 80-bit reals
were identical to Wolfram, given up to the last 2 digits as an
acceptable error for some values.  This was true for all except
inputs that were just inside the domain for the function, in which
case only double precision was guaranteed.  Where applicable, they
were also found in some cases to be more accurate than the inline
assembler or yl2x implementation version paths that are used if you
compile with DMD or LDC.


Re: Why don't we switch to C like floating pointed arithmetic instead of automatic expansion to reals?

2016-08-06 Thread Ilya Yaroshenko via Digitalmars-d

On Friday, 5 August 2016 at 20:53:42 UTC, Walter Bright wrote:

I agree that the typical summation algorithm suffers from 
double rounding. But that's one algorithm. I would appreciate 
if you would review 
http://dlang.org/phobos/std_algorithm_iteration.html#sum to 
ensure it doesn't have this problem, and if it does, how we can 
fix it.




Phobos's sum is two different algorithms. Pairwise summation for 
Random Access Ranges and Kahan summation for Input Ranges. 
Pairwise summation does not require IEEE rounding, but Kahan 
summation requires it.


The problem with the real world example is that it depends on
optimisation. For example, if all temporary values are rounded,
this is not a problem, and if all temporary values are not
rounded, this is not a problem either. However, if some of them
are rounded and others are not, then this will break the Kahan
algorithm.
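
To make that concrete, here is a plain Kahan sum (a sketch, not Mir's
implementation); the compensation step only works if s, y and t are all
rounded to the same precision:

double kahanSum(const(double)[] xs)
{
    double s = 0.0, c = 0.0;        // running sum and compensation
    foreach (x; xs)
    {
        immutable y = x - c;        // apply the previous compensation
        immutable t = s + y;        // big + small: low bits of y are lost here...
        c = (t - s) - y;            // ...and recovered here, but only if t, s, y
                                    // were all rounded consistently
        s = t;
    }
    return s;
}

double naiveSum(const(double)[] xs)
{
    double s = 0.0;
    foreach (x; xs) s += x;
    return s;
}

void main()
{
    import std.stdio : writefln;
    const xs = [1e16, 1.0, 1.0];
    writefln("naive: %.1f", naiveSum(xs));
    writefln("kahan: %.1f", kahanSum(xs));
    // With strict double rounding the Kahan result recovers the two 1.0s and
    // the naive sum drops them; if the compiler keeps intermediates in 80-bit
    // real (the very issue above), the two may agree instead.
}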


Kahan is the shortest and one of the slowest (compared with KBN,
for example) summation algorithms. The true story about Kahan is
that we may have it in Phobos, but we can use pairwise summation
for Input Ranges without random access, and it will be faster
than Kahan. So we don't need Kahan for the current API at all.


Mir has both Kahan, which works with 32-bit DMD, and pairwise,
which works with input ranges.


The Kahan, KBN, KB2, and Precise summations always use `real` or
`Complex!real` internal values for the 32-bit x86 target. The only
problem with Precise summation is that if we need a precise result
in double and use real for internal summation, then the last bit
will be wrong in 50% of cases.


Another good point about Mir's summation algorithms is that they
are Output Ranges. This means they can be used effectively to sum
multidimensional arrays, for example. Also, the Precise summator
may be used to compute the exact sum of distributed data.


When we get a decision and a solution for the rounding problem, I
will make a PR for std.experimental.numeric.sum.


I hear you. I'd like to explore ways of solving it. Got any 
ideas?


We need to take the overall picture.

It is very important to recognise that the D core team is small and
the D community is not large enough now to involve a lot of new
professionals. This means that the time of existing engineers is
very important for D, and the most important engineer for D is
you, Walter.


At the same time we need to move forward fast with language
changes and druntime changes (GC-less Fibers, for example).


So, we need to choose tricky options for development. The most
important option for D in the science context is to split the D
Programming Language from DMD in our minds. I am not asking to
remove DMD as the reference compiler. Instead, we can introduce
changes in D that cannot be optimally implemented in DMD (because
you have a lot of more important things to do for D than
optimisation) but that will be awesome for our LLVM-based or
GCC-based backends.


We need 2 new pragmas with the same syntax as `pragma(inline, 
xxx)`:


1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add, 
div-sub operations.
2. `pragma(fastMath)` is equivalent to [1]. This pragma can be used
to allow extended precision.


This should be 2 separate pragmas. The second one may assume the 
first one.
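
A sketch of how the proposed syntax would read, in the pragma(inline, ...)
style; no compiler recognises these pragmas today, so they are shown
commented out:

double dot(const(double)[] a, const(double)[] b)
{
    // pragma(fusedMath);   // proposed: allow a[i] * b[i] + s to become one FMA
    // pragma(fastMath);    // proposed: additionally allow reassociation etc.
    double s = 0;
    foreach (i; 0 .. a.length)
        s = a[i] * b[i] + s;
    return s;
}

unittest
{
    assert(dot([1.0, 2.0], [3.0, 4.0]) == 11.0);
}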


The recent LDC beta has a @fastmath attribute for functions, and it
is already used in the Phobos ndslice.algorithm PR and its Mir
mirror. Attributes are an alternative to pragmas, but their syntax
should be extended; see [2].


The old approach is separate compilation, but it is weird and low
level for users, and requires significant effort for both small
and large projects.


[1] http://llvm.org/docs/LangRef.html#fast-math-flags
[2] https://github.com/ldc-developers/ldc/issues/1669

Best regards,
Ilya



Re: For the Love of God, Please Write Better Docs!

2016-08-06 Thread poliklosio via Digitalmars-d

On Friday, 5 August 2016 at 21:01:28 UTC, H.Loom wrote:

On Friday, 5 August 2016 at 19:52:19 UTC, poliklosio wrote:

On Tuesday, 2 August 2016 at 20:26:06 UTC, Jack Stouffer wrote:

(...)

(...)
In my opinion open source community massively underestimates 
the importance of high-level examples, articles and tutorials. 
(...)


This is not true for widely used libraries. I can find examples
everywhere of how to use FreeType, even if the bindings I use have
0 docs and 0 examples. Idem for libX11... Also I have to say that
with a well crafted library you understand what a function does
when it's well named, when the parameters are well named.


This is another story for marginal libraries, e.g. when you're
part of the early adopters.


I think we agree here. Most libraries are marginal, and missing
proper announcements and documentation is the main reason they are
marginal. Hence, this is true for most libraries.
Of course there are good ones; sadly, not many D libraries are
really well documented.


If you are a library author and you are reading this, let me 
quantify this for you.


Thank you! You're so generous.


Why the sarcasm? I was just venting (at no-one in particular) 
after hitting the wall for a large chunk of my life.



(...)

This also explains part of complaints on Phobos documentation 
- people don't get the general idea of how to make things work 
together.


For Phobos I agree. D examples shipped with DMD are ridiculous.
I was thinking of proposing an initiative to renew them completely
with small, usable and idiomatic programs.


Those who do get the general idea don't care much about the 
exact width of whitespace and other similar concerns.


I don't understand your private thing here. Are you talking 
about text justification in ddoc ? If it's a mono font no 
problem here...


I'm lost here. The "width of whitespace" was just an example of
something you would NOT normally care about if you were a savvy
user who already knows how to navigate the docs.


Re: Using D in Games and bindings to c++ libraries

2016-08-06 Thread rikki cattermole via Digitalmars-d-learn

On 06/08/2016 6:28 PM, Neurone wrote:

On Saturday, 6 August 2016 at 04:01:03 UTC, rikki cattermole wrote:

You can safely forget about RakNet by the looks.

"While I still get some inquiries, as of as of March 13, 2015 I've
decided to stop licensing so I can focus on changing the world through
VR."

So based upon this, what would you like to do? Scratch or another
pre-existing protocol like proto-buffers?


It is a mature library, which was open sourced under the BSD license
when Oculus bought it. Hence why I want to use this instead of
reinventing the wheel.


They really need to update its old website then.

I've had a look at the code; it's mostly classes and is very C style.
You won't have trouble making bindings.

The main steps if you want to give it a go are:
1. Compile via MSVC (either 32 or 64-bit is fine); it can be static or dynamic
2. Compile D with -m32mscoff or -m64 and link against the above binary

You will almost definitely want to use 64-bit binaries, not 32-bit, if you
use dub.


I know there is a D target for SWIG but I have never really used it. But 
I do not believe you need any magical tool to do the binding creation 
for you. The headers should convert over nicely as is for you.
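
To give an idea of what converting the headers amounts to, a minimal
hand-written sketch (the names below are illustrative, not the real RakNet
API); built with -m64 or -m32mscoff it would link against the MSVC-built
library:

// Hypothetical declarations mirroring a C++ header; extern(C++) makes the
// compiler use C++ name mangling, so no C wrapper layer is needed.
extern (C++) int rakStartup(int maxConnections);
extern (C++) void rakShutdown();

void main()
{
    // Calls commented out so this sketch builds without the C++ library:
    // auto ok = rakStartup(32);
    // rakShutdown();
}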


Re: Using D in Games and bindings to c++ libraries

2016-08-06 Thread Neurone via Digitalmars-d-learn
On Saturday, 6 August 2016 at 04:01:03 UTC, rikki cattermole 
wrote:

You can safely forget about RakNet by the looks.

"While I still get some inquiries, as of as of March 13, 2015 
I've decided to stop licensing so I can focus on changing the 
world through VR."


So based upon this, what would you like to do? Scratch or 
another pre-existing protocol like proto-buffers?


It is a mature library, which was open sourced under the BSD license
when Oculus bought it. Hence why I want to use this instead of
reinventing the wheel.


[Issue 16352] dead-lock in std.allocator.free_list unittest

2016-08-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16352

--- Comment #2 from Martin Nowak  ---
They do, thanks. There is not much information in the log other than it did
hang at the commit hashes.

--


Re: D safety! New Feature?

2016-08-06 Thread ag0aep6g via Digitalmars-d

On 08/06/2016 03:57 AM, Mark J Twain wrote:

On Friday, 5 August 2016 at 21:12:06 UTC, ag0aep6g wrote:

[...]


struct MutableSomething
{
int value;
void mutate(int newValue) { value = newValue; }
}

struct ImmutableSomething
{
int value;
/* no mutate method here */
}

void main()
{
auto i = ImmutableSomething(1);
(cast(MutableSomething) i).mutate(2);
import std.stdio;
writeln(i.value); /* prints "2" */
}



wow, that seems like a huge design issue.

Change one value to double though and it won't work. It's not general
and seems to be a design flaw. It is not proof of anything in this case
though. Why? Because a Queue might not have the same size as an
ImmutableQueue and your "proof" only works when they are the same size.


You can cast between structs of different sizes. Just need to add some 
magical symbols:



struct MutableSomething
{
float value;
void mutate(int newValue) { value = newValue; }
float somethingElse;
}

struct ImmutableSomething
{
int value;
/* no mutate method here */
}

void main()
{
auto i = ImmutableSomething(1);
(*cast(MutableSomething*) &i).mutate(2);
import std.stdio;
writeln(i.value); /* prints 1073741824: the bits of 2.0f read as an int */
}


You can cast anything to anything this way. It's absolutely unsafe, of 
course, just like casting away immutable and then mutating.


Also, MutableFoo and ImmutableFoo would usually have the same fields, 
wouldn't they?


[...]

What kind of example? I have already given examples and proved that
there is a functional difference.


The first example you gave was ImmutableArray. You later said it was a 
bad example, and I should disregard it. You also said something about 
caching allocations. I'm still not sure how that's supposed to work 
differently than with an ordinary array/object/whatever. If you uphold 
that point, I would appreciate an example in code that shows how it works.


Then you said that ImmutableQueue would have better errors than 
immutable Queue. That's wrong. Again, I'd appreciate a code example that 
shows the better errors, if you think you were correct there.


The next point was that one can cast immutable away, but they can't cast 
ImmutableFoo to MutableFoo. That's wrong, too. Casting immutable away 
and then mutating has undefined behavior, i.e. it's not allowed. And you 
can cast between unrelated types just fine.


One thing that's true is that an uncallable method is slightly different 
from one that's not there at all. Chris Wright gave an example where 
this would make it a tiny bit harder to mess up. But I think that's a 
rather minor point. And it has to be weighed against the downsides of 
ImmutableFoo, like the added complexity and it being more of an eyesore 
than immutable Foo. If there's more to this, a code example would do 
wonders.


Those are the points/examples that I'm aware of. I don't see much in 
favor of ImmutableFoo. If I missed anything, please point it out. I'm 
repeating myself, but code examples would be great then.



I will not continue this conversation
because you refuse to accept that. First, you haven't given your criteria
for me to prove anything to you to satisfy whatever it is you want.


I'm not asking for (strict mathematical) proof. Just some example that 
highlights the pros of ImmutableFoo over immutable Foo. Can be 
subjective, like "See? This looks much nicer with my version". So far I 
just don't see any reason to use it over the immutable keyword.



Second, I'm not here to waste my time trying to prove the merits of the
tool. Again, either use it if you feel like it does something or don't.
The method is not flawed in and of itself. It works (if it didn't,
someone would have probably already shown it not to work).

All you can do is assert a negative, which you can't prove yourself.


I'm not disputing that it works. (Neither am I saying that it does work 
correctly.) Assuming that it works just fine, it apparently does almost 
exactly what immutable does, just in a more cumbersome way.



So we end up chasing each other tails, which I refuse to do.


I don't think the discussion has been that way so far. You've made some 
points where you were mistaken about how immutable and casting work. So 
they fell flat. You can defend them (point out errors in my refusals), 
or you can bring up other points. You don't have to, of course, but I 
don't think we have to agree to disagree just yet.


I don't know why you refuse to give code examples. If you're using this 
thing, you have something lying around, no? Some code that shows the 
benefits of your approach would shut me up. I'd try for sure to show 
that it works similarly well with plain immutable, though ;)



If you want more proof then it is up to you, not me. I have my proof and
it is good enough for me.


Sure, go ahead. But it seems to me that you're basing this on wrong 
impressions of immutable and casts.


Re: D safety! New Feature?

2016-08-06 Thread ag0aep6g via Digitalmars-d

On 08/06/2016 03:38 AM, Chris Wright wrote:

Some reflection stuff is a bit inconvenient:

class A {
  int foo() { return 1; }
}

void main() {
  auto a = new immutable(A);
  // This passes:
  static assert(is(typeof(a.foo)));
  // This doesn't:
  static assert(__traits(compiles, () { a.foo; }));
}

__traits(compiles) is mostly an evil hack, but things like this require
its use.


The two are not equivalent, though. The first one checks the type of the 
method.


__traits(compiles, ...) also passes when used that way:

static assert(__traits(compiles, a.foo)); /* passes */

Add parentheses to the typeof one and it fails as expected:

static assert(is(typeof(a.foo(; /* fails */

Can also do the function literal thing you did in the __traits one:

static assert(is(typeof(() { a.foo; }))); /* fails */

If the method wasn't there at all, they would all fail, of course. So 
the programmer couldn't mess this up as easily. I don't think this 
outweighs the inconvenience of bending over backwards to avoid 
immutable, though.