Re: Unit Testing in Action

2017-10-24 Thread Ali Çehreli via Digitalmars-d-announce

On 10/24/2017 07:15 PM, Walter Bright wrote:

On 10/24/2017 3:06 PM, Ali Çehreli wrote:

It would be very useful if the compiler could do that automatically.


On 10/24/2017 2:58 PM, qznc wrote:
 > The information is there, just not expressed in a usable way.


The problem is how to display it in a text file with the original source 
code.




I wouldn't mind it being as ugly as needed. The following original code

if (api1 == 1 && api2 == 2 ||
    api2 == 2 && api3 == 3) {
    foo();
}

could be broken like the following and I wouldn't mind:

if (api1 == 1 &&
    api2 == 2 ||
    api2 == 2 &&
    api3 == 3) {
    foo();
}

I would go work on the original code anyway.

Ali


Re: Unit Testing in Action

2017-10-24 Thread Ali Çehreli via Digitalmars-d-announce

On 10/24/2017 01:51 PM, Walter Bright wrote:
> On 10/23/2017 4:44 AM, Martin Nowak wrote:

> There would be a separate coverage count for line 3 which would be the
> sum of counts for (api2 == 2) and (api3 == 3).
>
> Generally, if this is inadequate, just split the expression into more
> lines.

It would be very useful if the compiler could do that automatically.

Ali



Re: Unit Testing in Action

2017-10-24 Thread Walter Bright via Digitalmars-d-announce

On 10/23/2017 4:44 AM, Martin Nowak wrote:

On Saturday, 21 October 2017 at 22:50:51 UTC, Walter Bright wrote:

Coverage would give:

1|    x = 2;
2|    if (x == 1 || x == 2)

I.e. the second line gets an execution count of 2. By contrast,

1|    x = 1;
1|    if (x == 1 || x == 2)


Interesting point, but would likely fail for more complex stuff.

1| stmt;
2| if (api1 == 1 && api2 == 2 ||
    api2 == 2 && api3 == 3)


There would be a separate coverage count for line 3 which would be the sum of 
counts for (api2 == 2) and (api3 == 3).


Generally, if this is inadequate, just split the expression into more lines.
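As an illustrative sketch (not actual compiler output), splitting the compound condition onto one line per subexpression gives each subexpression its own -cov count:

```d
// Sketch: with each subexpression on its own line, `dmd -cov`
// can report a separate execution count per line.
bool shouldRun(int api1, int api2, int api3)
{
    if (api1 == 1 &&
        api2 == 2 ||
        api2 == 2 &&
        api3 == 3)
        return true;
    return false;
}

unittest
{
    assert(shouldRun(1, 2, 0));  // first && pair holds
    assert(shouldRun(0, 2, 3));  // second && pair holds
    assert(!shouldRun(0, 0, 0)); // neither holds
}
```

Compiling with `dmd -cov -unittest -main` and running the result then annotates each of those lines with its own count.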

The same goes for for-loop statements and ?:.


Anyhow, I think the current state is good enough and there are gdc/ldc for 
further coverage features.





Re: iopipe alpha 0.0.1 version

2017-10-24 Thread Dmitry Olshansky via Digitalmars-d-announce

On Tuesday, 24 October 2017 at 19:05:02 UTC, Martin Nowak wrote:
On Tuesday, 24 October 2017 at 14:47:02 UTC, Steven 
Schveighoffer wrote:
iopipe provides "infinite" lookahead, which is central to its 
purpose. The trouble with bolting that on top of ranges, as 
you said, is that we have to copy everything out of the range, 
which necessarily buffers somehow (if it's efficient i/o), so 
you are double buffering. iopipe's purpose is to get rid of 
this unnecessary buffering. This is why it's a great fit for 
being the *base* of a range.


In other words, if you want something to have optional 
lookahead and range support, it's better to start out with an 
extendable buffering type like an iopipe, and bolt ranges on 
top, vs. the other way around.


Arguably it is somewhat hacky to use a range as an end marker 
for slicing something, but you'd get the same benefit: access to 
the random-access buffer with zero copying.


auto beg = rng.save;         // save current position
auto end = rng.find("bla");  // lookahead using popFront
auto window = beg[0 .. end]; // random-access window into the underlying buffer



I had a design like that, except save returned a “mark” (not a 
full range) and there was a slice primitive. It even worked with 
a patched std.regex, but at a non-zero performance penalty.


I think that maintaining the illusion of a full copy of the 
range when you “save” a buffered I/O stream is too costly. 
Because a user can now legally advance both, you need to 
reference-count buffers behind the scenes, with separate 
“pointers” for each range that effectively pin them.



So basically forward ranges with slicing.
At least that would require extending all algorithms with 
`extend` support, though you could likely have a small extender 
proxy range for IOPipes.


Note that rng could be a wrapper around unbuffered IO reads.





Re: iopipe alpha 0.0.1 version

2017-10-24 Thread Martin Nowak via Digitalmars-d-announce
On Tuesday, 24 October 2017 at 14:47:02 UTC, Steven Schveighoffer 
wrote:
iopipe provides "infinite" lookahead, which is central to its 
purpose. The trouble with bolting that on top of ranges, as you 
said, is that we have to copy everything out of the range, 
which necessarily buffers somehow (if it's efficient i/o), so 
you are double buffering. iopipe's purpose is to get rid of 
this unnecessary buffering. This is why it's a great fit for 
being the *base* of a range.


In other words, if you want something to have optional 
lookahead and range support, it's better to start out with an 
extendable buffering type like an iopipe, and bolt ranges on 
top, vs. the other way around.


Arguably it is somewhat hacky to use a range as an end marker 
for slicing something, but you'd get the same benefit: access to 
the random-access buffer with zero copying.


auto beg = rng.save;         // save current position
auto end = rng.find("bla");  // lookahead using popFront
auto window = beg[0 .. end]; // random-access window into the underlying buffer


So basically forward ranges with slicing.
At least that would require extending all algorithms with 
`extend` support, though you could likely have a small extender 
proxy range for IOPipes.


Note that rng could be a wrapper around unbuffered IO reads.
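A minimal sketch of the "forward ranges with slicing over a shared buffer" idea discussed above (all names are hypothetical, not iopipe's or Phobos's actual API):

```d
// Hypothetical sketch: a forward range over an in-memory buffer
// whose save/slice yield zero-copy windows into the same array.
struct BufferRange
{
    const(char)[] buf;
    size_t pos;

    bool empty() const { return pos >= buf.length; }
    char front() const { return buf[pos]; }
    void popFront() { ++pos; }
    BufferRange save() const { return this; }

    // Zero-copy window between two offsets relative to this position.
    const(char)[] opSlice(size_t a, size_t b) const
    {
        return buf[pos + a .. pos + b];
    }
}

unittest
{
    auto rng = BufferRange("hello bla world");
    auto beg = rng.save;
    // advance to the delimiter, as find would via popFront
    while (!rng.empty && rng.front != 'b')
        rng.popFront();
    auto window = beg[0 .. rng.pos - beg.pos];
    assert(window == "hello "); // no copying: a slice of the same buffer
}
```

A real unbuffered-IO-backed version would additionally need to pin (e.g. reference-count) the buffer behind each saved range, which is exactly the cost discussed above.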


Re: Caching D compiler - preview version

2017-10-24 Thread Dmitry Olshansky via Digitalmars-d-announce
On Tuesday, 24 October 2017 at 14:17:32 UTC, Dmitry Olshansky 
wrote:

On Tuesday, 24 October 2017 at 13:29:12 UTC, Mike Parker wrote:
On Tuesday, 24 October 2017 at 13:19:15 UTC, Dmitry Olshansky 
wrote:

What is dcache?

It's a patch for dmd that enables a *persistent* 
shared-memory hash map, protected from races by a spin-lock. 
DMD processes with the -cache flag would detect the following 
pattern:


Blog post or it didn't happen!


Let us at least try it outside of toy examples.

If anybody has std.regex.ctRegex usage I'd be curious to see:

1. Build time w/o -cache=mmap
2. First build time w -cache=mmap
3. Subsequent build times w -cache=mmap

P.S. It's a crude PoC. I think we can do better.


Another caveat: Posix-only for now.

Did a few cleanups and widened the scope a bit.

So here is what happens in my benchmark for std.regex.

-O -inline -release:
88s -> 80s, memory use ~700 MB -> ~400 MB

-release:
19s -> 12.8s

Experimental std.regex.v2 is sadly broken by a recent change to 
array ops.
It would be very interesting to check, as it eats up to 17 GB of 
RAM.




Re: iopipe alpha 0.0.1 version

2017-10-24 Thread Steven Schveighoffer via Digitalmars-d-announce

On 10/24/17 5:32 AM, Martin Nowak wrote:

On Monday, 23 October 2017 at 16:34:19 UTC, Steven Schveighoffer wrote:

On 10/21/17 6:33 AM, Martin Nowak wrote:

On 10/19/2017 03:12 PM, Steven Schveighoffer wrote:

On 10/19/17 7:13 AM, Martin Nowak wrote:

On 10/13/2017 08:39 PM, Steven Schveighoffer wrote:
It's solving a different problem than iopipe is solving. I plan on 
adding iopipe-on-range capability soon as well, since many times, all 
you have is a range.


You mean chunk-based processing vs. infinite lookahead for parsing?
They both provide a similar API: something to extend the current 
window and something to release data.


Yes, definitely.

The example input here was an input range, but it's read in page 
sizes and could just as well be a socket.


iopipe provides "infinite" lookahead, which is central to its purpose. 
The trouble with bolting that on top of ranges, as you said, is that we 
have to copy everything out of the range, which necessarily buffers 
somehow (if it's efficient i/o), so you are double buffering. iopipe's 
purpose is to get rid of this unnecessary buffering. This is why it's a 
great fit for being the *base* of a range.


In other words, if you want something to have optional lookahead and 
range support, it's better to start out with an extendable buffering 
type like an iopipe, and bolt ranges on top, vs. the other way around.


-Steve


Re: LDC 1.5.0-beta1

2017-10-24 Thread kinke via Digitalmars-d-announce
On Monday, 23 October 2017 at 22:43:15 UTC, Guillaume Piolat 
wrote:
So far my benchmark scripts are Windows-only so no LTO is 
available AFAIK. I can work on providing such measures (or any 
flags you want) on OSX in the future.


I performed an extremely rudimentary -flto=full test on Win64 for 
a hello-world program yesterday, linking manually via LLD 5.0 
(lld-link.exe), which worked (and saved 1 KB for the executable 
;)).


Re: LDC 1.5.0-beta1

2017-10-24 Thread Guillaume Piolat via Digitalmars-d-announce

On Monday, 23 October 2017 at 22:57:00 UTC, Nicholas Wilson wrote:


would it help to have them grouped/filterable by category?
e.g.
$ldc2 -help-hidden=category


Perhaps, but the sheer amount of customizability makes you wish 
for a superoptimizer for compiler flags (hard to do this 
generically though).


Re: Caching D compiler - preview version

2017-10-24 Thread Dmitry Olshansky via Digitalmars-d-announce

On Tuesday, 24 October 2017 at 13:29:12 UTC, Mike Parker wrote:
On Tuesday, 24 October 2017 at 13:19:15 UTC, Dmitry Olshansky 
wrote:

What is dcache?

It's a patch for dmd that enables a *persistent* shared-memory 
hash map, protected from races by a spin-lock. DMD processes 
with the -cache flag would detect the following pattern:


Blog post or it didn't happen!


Let us at least try it outside of toy examples.

If anybody has std.regex.ctRegex usage I'd be curious to see:

1. Build time w/o -cache=mmap
2. First build time w -cache=mmap
3. Subsequent build times w -cache=mmap

P.S. It's a crude PoC. I think we can do better.

---
Dmitry Olshansky


Re: Caching D compiler - preview version

2017-10-24 Thread Nicholas Wilson via Digitalmars-d-announce
On Tuesday, 24 October 2017 at 13:19:15 UTC, Dmitry Olshansky 
wrote:

What is dcache?

It's a patch for dmd that enables a *persistent* shared-memory 
hash map, protected from races by a spin-lock. DMD processes 
with the -cache flag would detect the following pattern:




Ooooh, very nice! Looking forward to non-std lib usage.


Re: Caching D compiler - preview version

2017-10-24 Thread Mike Parker via Digitalmars-d-announce
On Tuesday, 24 October 2017 at 13:19:15 UTC, Dmitry Olshansky 
wrote:

What is dcache?

It's a patch for dmd that enables a *persistent* shared-memory 
hash map, protected from races by a spin-lock. DMD processes 
with the -cache flag would detect the following pattern:


Blog post or it didn't happen!


Caching D compiler - preview version

2017-10-24 Thread Dmitry Olshansky via Digitalmars-d-announce

What is dcache?

It's a patch for dmd that enables a *persistent* shared-memory 
hash map, protected from races by a spin-lock. DMD processes with 
the -cache flag would detect the following pattern:


enum/static variable = func(args...);

And if the mangle of func indicates it is from std.*, we use a 
cache to store the D source code form of the result of the 
function call (a literal) produced by CTFE.
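For example, a declaration like the following (using std.regex) fits that pattern; the matcher literal produced by CTFE would be what gets stored in the cache. The pattern string here is just an illustration:

```d
import std.regex;

// The CTFE-computed matcher is a literal whose D source form
// can be cached, keyed on the mangled name and the arguments.
static reWord = ctRegex!`[a-z]+`;

unittest
{
    assert(!matchFirst("hello world", reWord).empty);
}
```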


In action:

https://github.com/dlang/dmd/pull/7239

(Watch as the 2.8s-4.4s it takes to compile various ctRegex 
programs becomes a constant ~1.0s.)


Caching is done per expression so it stays active even after you 
change various parts of your files.


Broadening the scope to 3rd-party libraries is planned, but cache 
invalidation is going to be tricky. Likewise, there is a trove of 
things aside from CTFE that can easily be cached and shared 
across both parallel and sequential compiler invocations.



Why a caching compiler?

It became apparent that CTFE computations can be quite 
time-consuming and memory-intensive. The fact that each CTFE 
invocation depends on a set of constant arguments makes it a 
perfect candidate for caching.


The motivating example is ctRegex: patterns hardly ever change 
and the standard library changes only on a compiler upgrade, yet 
each change to a file causes complete re-evaluation of all 
patterns in a module.


With a persistent per-expression cache we can precompile all of 
the CTFE evaluations for regexes, so we get to use ctRegex and 
maintain sane compile times.




How to use

Pass the new option to dmd:

-cache=mmap

This enables a persistent cache backed by a memory-mapped file.
Future backends would take the form of, e.g.:

-cache=memcache:memcached.my.network:11211



Implementation

Caveat emptor: this is an alpha version, use at your own risk!

https://github.com/DmitryOlshansky/dmd/tree/dcache

Keeping things simple: it's a patch of around 200 SLOC.
I envision it becoming a hundred lines more if we get to do 
things cleanly.


Instead of going with the strangely popular idea of compilation 
servers, I opted for a simple distributed cache, as it doesn't 
require changing any build systems.


The shared memory mapping is split into 3 sections: Metadata 
(spinlock) + ToC (hash-table index) + Data (chunks).


For now it's an immutable cache w/o eviction.

A ToC entry is as follows:

hash(64-bit), data index, data size, last_recent_use

Indexes point to Data section of memory map.

Data itself is a linked list of blocks, where a header contains:

(isFree, next, 0-terminated key, padding to 16 bytes)

last_recent_use is a timestamp of the start of the respective 
compilation. An entry with last_recent_use < now - 24h is 
considered unutilized and may be reused.
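As a hedged sketch, the ToC entry and block header described above might map onto structs like these (field names and widths are illustrative, not the actual patch):

```d
// Illustrative layout only; the real patch may differ.
struct TocEntry
{
    ulong hash;          // 64-bit hash of the key
    uint  dataIndex;     // offset into the Data section
    uint  dataSize;      // size of the stored entry in bytes
    long  lastRecentUse; // timestamp of the start of the last using compilation
}

// Data blocks form a linked list; each block header carries:
struct BlockHeader
{
    bool isFree;
    uint next; // index of the next block in the Data section
    // followed by a 0-terminated key and padding to 16 bytes
}
```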


In theory we can cache the result of any compilation step, given 
a proper key and invalidation strategy.


1. Lexing - key is compiler-version + abs path + timestamp, store 
as is. Lexing from cache is simply taking slices of memory.


2. Parsing to Ast - key is compiler-version + abs path + 
timestamp + version/debug flags


3. CTFE invocations - the key is tricky; for now it is only 
enabled for std.* as follows:


enum/static varname = func(args...);

Use compiler-version + compiler-flags + mangleof + stringof args.


Re: DMD Installation Wiki

2017-10-24 Thread Nicholas Wilson via Digitalmars-d-announce

On Tuesday, 24 October 2017 at 10:09:44 UTC, Mike Parker wrote:
On Tuesday, 24 October 2017 at 08:26:17 UTC, Nicholas Wilson 
wrote:

On Tuesday, 24 October 2017 at 07:32:52 UTC, Mike Parker wrote:

In preparation for an upcoming blog post


Speaking of which, I've sent you a draft for an article on 
DCompute.


Yes, sorry. I should have acknowledged that I saw the email. I 
plan to look it over and get back to you before the end of this 
week with an eye toward publishing it next week.


Sounds good.


Re: Unit Testing in Action

2017-10-24 Thread Mario Kröplin via Digitalmars-d-announce

On Monday, 23 October 2017 at 12:38:01 UTC, Atila Neves wrote:
"parallel test execution (from its name, the main goal of 
unit-threaded) was quite problematic with the first test suite 
we converted"


I'd love to know what the problems were, especially since it's 
possible to run in just one thread with a command-line option, 
or to use UDAs to run certain tests in a module in the same 
thread (sometimes one has to change global state, as much as 
that is usually not a good idea).


Delays are our business, so we use the clock and timers 
everywhere. Long ago, we introduced singletons to be able to 
replace the implementations for unit testing. By now, lots of 
tests fake time, and that's a problem if they do so in parallel. 
It's not too hard, however, to change this to thread-local 
replacements of the clock and timers.
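A hedged sketch of that thread-local replacement (all names here are hypothetical, not our actual code):

```d
import core.time : MonoTime;

// An injectable clock; the default reads real time.
interface Clock
{
    MonoTime now();
}

final class SystemClock : Clock
{
    MonoTime now() { return MonoTime.currTime; }
}

final class FakeClock : Clock
{
    MonoTime fake;
    this(MonoTime t) { fake = t; }
    MonoTime now() { return fake; }
}

// Module-level variables are thread-local by default in D, so
// parallel tests each get their own clock and can fake time
// independently.
Clock clock;

static this() { clock = new SystemClock; }

unittest
{
    auto t0 = MonoTime.currTime;
    clock = new FakeClock(t0);
    assert(clock.now() == t0); // time stands still for this thread only
    clock = new SystemClock;
}
```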


Another problem was using the same port number for different test 
cases. We now apply "The port 0 trick" 
(https://www.dnorth.net/2012/03/17/the-port-0-trick/).
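The trick is to bind to port 0 and let the OS pick a free ephemeral port, then ask which one it chose. A sketch using std.socket:

```d
import std.socket;

unittest
{
    // Bind to port 0: the OS assigns a free ephemeral port, so
    // parallel test cases never fight over a fixed port number.
    auto listener = new TcpSocket();
    scope (exit) listener.close();
    listener.bind(new InternetAddress(InternetAddress.PORT_ANY));
    auto port = (cast(InternetAddress) listener.localAddress).port;
    assert(port != 0); // the OS reports the actual assigned port
}
```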


"With the new static foreach feature however, it is easy to 
implement parameterized tests without the support of a 
framework"


It is, but it's the same problem as with plain asserts in terms 
of knowing what went wrong, unless the parameterised value 
happens to be in the assertion. And there's also the issue of 
running the test only for the value/type that it failed for, 
instead of going through the whole static foreach every time.


That's why I recommend putting the `static foreach` around the 
`unittest`. My example shows how to instantiate test descriptions 
(with CTFE of `format`) so that these string attributes are used 
to report failures or to selectively execute a test in isolation.
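A hedged sketch of that recommendation (the string-UDA-as-description convention shown here is illustrative; the exact attribute handling depends on the framework):

```d
import std.format : format;

static foreach (value; [2, 3, 5, 7])
{
    // CTFE-evaluated description attached as a string UDA, so a
    // framework can report or select this instance by name.
    @(format("%s is prime", value))
    unittest
    {
        import std.algorithm.searching : canFind;
        assert([2, 3, 5, 7].canFind(value));
    }
}
```

Each loop iteration instantiates a separate, individually named unittest, instead of one test body looping over all values.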


Re: DMD Installation Wiki

2017-10-24 Thread Mike Parker via Digitalmars-d-announce
On Tuesday, 24 October 2017 at 08:26:17 UTC, Nicholas Wilson 
wrote:

On Tuesday, 24 October 2017 at 07:32:52 UTC, Mike Parker wrote:

In preparation for an upcoming blog post


Speaking of which, I've sent you a draft for an article on 
DCompute.


Yes, sorry. I should have acknowledged that I saw the email. I 
plan to look it over and get back to you before the end of this 
week with an eye toward publishing it next week.


Re: iopipe alpha 0.0.1 version

2017-10-24 Thread Martin Nowak via Digitalmars-d-announce
On Monday, 23 October 2017 at 16:34:19 UTC, Steven Schveighoffer 
wrote:

On 10/21/17 6:33 AM, Martin Nowak wrote:

On 10/19/2017 03:12 PM, Steven Schveighoffer wrote:

On 10/19/17 7:13 AM, Martin Nowak wrote:

On 10/13/2017 08:39 PM, Steven Schveighoffer wrote:
It's solving a different problem than iopipe is solving. I plan 
on adding iopipe-on-range capability soon as well, since many 
times, all you have is a range.


You mean chunk-based processing vs. infinite lookahead for 
parsing?
They both provide a similar API: something to extend the current 
window and something to release data.
The example input here was an input range, but it's read in page 
sizes and could just as well be a socket.


Re: DMD Installation Wiki

2017-10-24 Thread Nicholas Wilson via Digitalmars-d-announce

On Tuesday, 24 October 2017 at 07:32:52 UTC, Mike Parker wrote:

In preparation for an upcoming blog post


Speaking of which, I've sent you a draft for an article on 
DCompute.


DMD Installation Wiki

2017-10-24 Thread Mike Parker via Digitalmars-d-announce
In preparation for an upcoming blog post on DMD & Windows, I've 
edited the DMD installation page on the Wiki with more up-to-date 
and generic instructions than what was there before. I invite 
anyone and everyone to look it over and fix any errors, 
grammatical or otherwise.


More to the point, though, I haven't touched the other sections 
on the page yet. However, I feel there should be information 
there for each platform on what's available from the download 
page, what steps are necessary to configure the system when using 
the zip archives, and how to use a DMD installed via the 
install.sh script. I can get to it myself eventually, but I would 
prefer if someone already familiar with these things got to it 
first :-)


Re: DMD Installation Wiki

2017-10-24 Thread Mike Parker via Digitalmars-d-announce

On Tuesday, 24 October 2017 at 07:32:52 UTC, Mike Parker wrote:
In preparation for an upcoming blog post on DMD & Windows, I've 
edited the DMD installation page on the Wiki with more 
up-to-date and generic instructions than what was there before. 
I invite anyone and everyone to look it over and fix any 
errors, grammatical or otherwise.



https://wiki.dlang.org/Installing_DMD