Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 17:16:42 UTC, Philippe Sigaud 
wrote:
From time to time, I try to speed up some array-heavy code by 
using std.array.Appender, reserving some capacity and so on.


It never works. Never. It gives me executables that are maybe 
30-50% slower than bog-standard array code.


I don't do anything fancy: I just replace the types, use 
clear() instead of = null...


Do people here get good results from Appender? And if yes, how 
are you using it?


Quick test...


import std.array;
import std.datetime;
import std.stdio;

enum size = 1000;

void test1()
{
    auto arr = appender!(int[])();
    foreach(n; 0 .. size)
        arr.put(n);
}

void test2()
{
    int[] arr;
    foreach(n; 0 .. size)
        arr ~= n;
}

void test3()
{
    auto arr = new int[](size);
    foreach(n; 0 .. size)
        arr[n] = n;
}

void test4()
{
    auto arr = uninitializedArray!(int[])(size);
    foreach(n; 0 .. size)
        arr[n] = n;
}

void main()
{
    auto result = benchmark!(test1, test2, test3, test4)(10_000);
    writeln(cast(Duration)result[0]);
    writeln(cast(Duration)result[1]);
    writeln(cast(Duration)result[2]);
    writeln(cast(Duration)result[3]);
}



With size being 1000, I get

178 ms, 229 μs, and 4 hnsecs
321 ms, 272 μs, and 8 hnsecs
27 ms, 138 μs, and 7 hnsecs
24 ms, 970 μs, and 9 hnsecs

With size being 100,000, I get

15 secs, 731 ms, 499 μs, and 1 hnsec
29 secs, 339 ms, 553 μs, and 8 hnsecs
2 secs, 602 ms, 385 μs, and 2 hnsecs
2 secs, 409 ms, 448 μs, and 9 hnsecs

So, it looks like using Appender to create an array of ints is 
about twice as fast as appending to the array directly, and 
unsurprisingly, creating the array at the correct size up front 
and filling in the values is far faster than either, whether the 
array's elements are default-initialized or not. And the numbers 
are about the same if it's an array of char rather than an array 
of int.


Also, curiously, if I add a test which is the same as test2 (so 
it's just appending to the array) except that it calls reserve on 
the array with size, the result is almost the same as not 
reserving. It's a smidgen faster, but probably not enough to 
matter. So, it looks like reserve doesn't buy you much for some 
reason. Maybe the work that ~= does to check whether it needs to 
reallocate dwarfs the extra cost of the reallocations? That 
certainly seems odd to me, but bizarrely, the evidence does seem 
to say that reserving doesn't help much. That should probably be 
looked into.
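
For reference, a minimal sketch of that reserving variant (a 
hypothetical test5; it wasn't part of the benchmark above):

void test5()
{
    int[] arr;
    arr.reserve(size); // pre-allocate capacity for `size` elements
    foreach(n; 0 .. size)
        arr ~= n;      // should not reallocate until capacity is exhausted
}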


In any case, from what I can see, if all you're doing is creating 
an array and then throwing away the Appender, it's faster to use 
Appender than to directly append to the array. Maybe that changes 
with structs? I don't know. It would be interesting to know 
what's different about your code that's causing Appender to be 
slower, but from what I can see, it's definitely faster to use 
Appender unless you know the target size of the array up front.


- Jonathan M Davis


extern (c++) std::function?

2014-08-14 Thread Etienne Cimon via Digitalmars-d-learn
I'm looking into making a binding for the C++ API called Botan, and the 
constructors in it take a std::function. I'm wondering if there's a D 
equivalent that would make this binding work, or if I have to write a 
C++ wrapper as well?
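
(A rough sketch of the usual workaround: D has no ABI-compatible 
equivalent of std::function, so a small C++ shim can take a plain 
function pointer plus a context pointer and build the std::function 
on the C++ side. All names below are hypothetical, not Botan's 
actual API.)

// C++ side of the hypothetical shim, shown here as a comment:
//
//   void shim_setCallback(Obj* obj, void (*fn)(void* ctx, int arg), void* ctx)
//   {
//       obj->setCallback([=](int arg) { fn(ctx, arg); }); // wraps into std::function
//   }

// D side: bind the shim with an ordinary function pointer.
extern (C++) void shim_setCallback(void* obj,
                                   void function(void* ctx, int arg) fn,
                                   void* ctx);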


Re: Very vague compiler error message

2014-08-14 Thread Jeremy DeHaan via Digitalmars-d-learn

On Thursday, 14 August 2014 at 13:47:58 UTC, Théo Bueno wrote:

On Thursday, 14 August 2014 at 13:28:03 UTC, bearophile wrote:

Théo Bueno:


Same issue here with dsfml-audio, this is really annoying :/


Have you filed the issue?

Bye,
bearophile


Jebbs filed an issue on his bugtracker:
https://github.com/Jebbs/DSFML/issues/125
... and he seems to have a fix, or at least a clue.

Personally, I was not able to figure out where the bug is.


Yeah, sorry. I did get it to compile, but I unexpectedly didn't 
have access to my computer last night, and I haven't checked to 
confirm that the audio module still works. If everything does 
still work, then I'll upload the fix in the next couple of hours.


Re: Appender is ... slow

2014-08-14 Thread Dicebot via Digitalmars-d-learn
On Thursday, 14 August 2014 at 21:34:04 UTC, Jonathan M Davis 
wrote:

On Thursday, 14 August 2014 at 21:11:51 UTC, safety0ff wrote:
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


That would make some sense, though it must be completely 
avoiding ~= then and probably is even GC-mallocing the array 
itself. Regardless, I clearly need to study the code if I want 
to know what it's actually doing.


- Jonathan M Davis


I wonder if using plain `Array` instead may result in better 
performance where immutability is not needed.


Re: core.thread.Fiber --- runtime stack overflow unlike goroutines

2014-08-14 Thread Dicebot via Digitalmars-d-learn

On Thursday, 14 August 2014 at 18:52:00 UTC, Sean Kelly wrote:
On 64 bit, reserve a huge chunk of memory, set a SEGV handler 
and commit more as needed. Basically how kernel thread stacks 
work. I've been meaning to do this but haven't gotten around to 
it yet.


I think using some sort of thread-local shared heap pool is 
a better approach in general, as it doesn't need any SEGV handling 
overhead and simplifies the fiber implementation (all fibers are 
guaranteed to take a fixed amount of memory). It is not a silver 
bullet and forces you to think much more about application memory 
layout, but I believe this is a much better approach for high 
performance services than segmented stacks like in Go.
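
A minimal sketch of what such a fixed-size pool could look like 
(hypothetical; this is not how core.thread actually allocates 
fiber stacks):

struct StackPool
{
    enum stackSize = 64 * 1024; // every fiber pays exactly this much

    private void[][] unused;    // stacks recycled from finished fibers

    void[] acquire()
    {
        if (unused.length)
        {
            auto s = unused[$ - 1];
            unused = unused[0 .. $ - 1];
            return s;                 // reuse a recycled stack
        }
        return new void[](stackSize); // otherwise allocate a fresh one
    }

    void release(void[] s)
    {
        unused ~= s; // keep the block around instead of freeing it
    }
}

StackPool pool; // module-level variables are thread-local by default in D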


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 21:34:04 UTC, Jonathan M Davis 
wrote:

On Thursday, 14 August 2014 at 21:11:51 UTC, safety0ff wrote:
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


That would make some sense, though it must be completely 
avoiding ~= then and probably is even GC-mallocing the array 
itself. Regardless, I clearly need to study the code if I want 
to know what it's actually doing.


It looks like what it does is essentially set the array's 
length to the capacity that the GC gave it and then manage the 
capacity itself (so, basically what you were suggesting), 
avoiding the runtime overhead of ~= by reimplementing ~=. 
Whether it does it in a more efficient manner is an open 
question, and it also raises the question of why it would be 
cheaper to do it this way rather than in the GC. That's not at 
all obvious to me at the moment, especially because the code for 
ensureAddable and put in Appender are both fairly complicated.
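
In other words, something like this rough sketch (an illustration 
of the strategy as I understand it, not Appender's actual code):

struct MiniAppender(T)
{
    private T[] buf;     // sized up to the capacity the GC handed back
    private size_t used; // the "length", tracked manually

    void put(T item)
    {
        if (used == buf.length)                // out of room: grow the block
            buf.length = (buf.length + 4) * 2; // extends in place or reallocates
        buf[used++] = item;                    // plain store, no runtime call
    }

    T[] data() { return buf[0 .. used]; }
}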


So, I really have no idea how Appender fares in comparison to 
just using ~=, and I have to wonder why something similar can't 
be done in the runtime itself if Appender actually is faster. I'd 
have to spend a lot more time looking into that to figure it out.


- Jonathan M Davis


Re: What hashing algorithm is used for the D implementation of associative arrays?

2014-08-14 Thread safety0ff via Digitalmars-d-learn

On Thursday, 14 August 2014 at 13:10:58 UTC, bearophile wrote:


D AAs used to be not vulnerable to collision attacks because 
they resolved collisions building a red-black tree for each 
bucket. Later buckets became linked lists for speed,


Slight corrections:
It was effectively a randomized BST; it used the hash value + 
comparison function to place the elements in the tree.

E.g. The AA's node comparison function might be:

    if (hash == other.hash)
        return value.opCmp(other.value);
    else if (hash < other.hash)
        return -1;
    return 1;

The hash function has a significant influence on how balanced the 
BST will be.
Insertion and removal order also influence performance, since 
rebalancing was only done when growing the AA.

It had no performance guarantees.

I believe it was removed to reduce memory consumption; see the 
Mar 19 2010 cluster of commits by Walter Bright to aaA.d.
Since the GC rounds up allocations to powers of two for small 
objects, the additional pointer doubles the allocation size per 
node.


A template library based AA implementation should be able to 
handily outperform built-in AAs and provide guarantees.
Furthermore, improved memory management could be a significant 
win.


Fun fact:
The AA implementation within DMD still uses the "randomized" BST 
though the hash functions are very rudimentary.


Re: More Generic Variants of findSplit.*() in Demangling/Parsing

2014-08-14 Thread Nordlöw

On Thursday, 14 August 2014 at 21:04:06 UTC, Nordlöw wrote:

Should this go into Phobos?


My variants can be found at the bottom of

https://github.com/nordlow/justd/blob/master/algorithm_ex.d


Re: Appender is ... slow

2014-08-14 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 14/08/14 23:33, Joseph Rushton Wakeling via Digitalmars-d-learn wrote:

An example where it worked for me:
http://braingam.es/2013/09/betweenness-centrality-in-dgraph/


I should add that I don't think I ever explored the case of just using a regular 
array with assumeSafeAppend.  Now that you've raised the question, I think I 
ought to try it :)




Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 14 August 2014 at 21:11:51 UTC, safety0ff wrote:
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


That would make some sense, though it must be completely avoiding 
~= then and probably is even GC-mallocing the array itself. 
Regardless, I clearly need to study the code if I want to know 
what it's actually doing.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 21:00:55 UTC, Jonathan M Davis 
wrote:
On Thursday, 14 August 2014 at 19:47:33 UTC, Brad Anderson 
wrote:
On Thursday, 14 August 2014 at 19:10:18 UTC, Jonathan M Davis 
wrote:
I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage 
reusing the array, and I have no idea how it works for that. 
But then again, I've never actually benchmarked it for just 
creating arrays via appending. I'd just assumed that it was 
faster than just using ~=, because that's what it's 
supposedly for. But maybe I just completely misunderstood 
what the point of Appender was.


- Jonathan M Davis


I too have trouble understanding what Appender does that 
supposedly makes it faster (at least from the documentation). 
My old, naive thought was that it was something like a linked 
list of fixed-size arrays, so that appends didn't have to move 
existing elements until you were done appending, at which 
point it would bake it into a regular dynamic array, moving 
each element only once. Looking at the code, it appeared to be 
nothing like that (a std::deque with a copy into a vector, in 
C++ terms).


Skimming the code it appears to be more focused on the much 
more basic "~= always reallocates" performance problem. It 
seems it boils down to doing essentially this (someone feel 
free to correct me) in the form of an output range:


auto a = /* some array */;
auto b = a;
a = a.array();
for(...)
 b.assumeSafeAppend() ~= /* element */;


It was my understanding that that was essentially what it did, 
but I've never really studied its code, and I don't know if 
there are any other tricks that it's able to pull. It may be 
that it really doesn't do anything more than be a wrapper which 
handles assumeSafeAppend for you correctly. But if that's the 
case, I wouldn't expect operating on arrays directly to be any 
faster. Maybe it would be _slightly_ faster, because there are 
no wrapper functions to inline away, but it wouldn't be much 
faster, and it would ensure that you used assumeSafeAppend 
correctly. If it's really as much slower as Philippe is 
finding, then I have no idea what's going on. Certainly, it 
merits a bug report and further investigation.


Okay. This makes no sense actually. You call assumeSafeAppend 
after you _shrink_ an array and then want to append to it, not 
when you're just appending to it. So, assumeSafeAppend wouldn't 
help with something like


auto app = appender!string();
// Append a bunch of stuff to app
auto str = app.data;

So, it must be doing something else (though it may be using 
assumeSafeAppend in other functions). I'll clearly have to look 
over the actual code to have any clue about what it's actually 
doing.


Though in reference to your idea of using a linked list of 
arrays, IIRC, someone was looking at changing it to do something 
like that at some point, but it would drastically change what 
Appender's data property would do, so I don't know if it would be 
a good idea to update Appender that way rather than creating a 
new type. But I don't recall what became of that proposal.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 14/08/14 19:16, Philippe Sigaud via Digitalmars-d-learn wrote:

Do people here get good results from Appender? And if yes, how are you using it?


An example where it worked for me:
http://braingam.es/2013/09/betweenness-centrality-in-dgraph/

(You will have to scroll down a bit to get to the point where appenders get 
introduced.)




Re: Appender is ... slow

2014-08-14 Thread safety0ff via Digitalmars-d-learn
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


Re: More Generic Variants of findSplit.*() in Demangling/Parsing

2014-08-14 Thread Nordlöw

On Thursday, 14 August 2014 at 20:48:42 UTC, Nordlöw wrote:
Oops, I just realized that we can't express the current Phobos 
implementations using my variant. The current algorithms take 
ranges, not element values, as a needle. My mistake. Are my 
overloads still wanted?


Ok. I finally understood that a simplification was the way to go:

/** Simpler variant of Phobos' findSplitBefore. */
auto findSplitBefore(alias pred, R1)(R1 haystack)
    if (isForwardRange!R1)
{
    static if (isSomeString!R1 ||
               isRandomAccessRange!R1)
    {
        auto balance = haystack.find!pred;
        immutable pos = haystack.length - balance.length;
        return tuple(haystack[0 .. pos],
                     haystack[pos .. haystack.length]);
    }
    else
    {
        auto original = haystack.save;
        auto h = haystack.save;
        size_t pos;
        while (!h.empty)
        {
            if (unaryFun!pred(h.front))
            {
                h.popFront();
            }
            else
            {
                haystack.popFront();
                h = haystack.save;
                ++pos;
            }
        }
        return tuple(takeExactly(original, pos),
                     haystack);
    }
}

unittest
{
    import std.algorithm: equal;
    import std.ascii: isDigit;
    auto x = "11ab".findSplitBefore!(a => !a.isDigit);
    assert(equal(x[0], "11"));
    assert(equal(x[1], "ab"));
}

Should this go into Phobos?


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 14 August 2014 at 19:47:33 UTC, Brad Anderson wrote:
On Thursday, 14 August 2014 at 19:10:18 UTC, Jonathan M Davis 
wrote:
I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage reusing 
the array, and I have no idea how it works for that. But then 
again, I've never actually benchmarked it for just creating 
arrays via appending. I'd just assumed that it was faster than 
just using ~=, because that's what it's supposedly for. But 
maybe I just completely misunderstood what the point of 
Appender was.


- Jonathan M Davis


I too have trouble understanding what Appender does that 
supposedly makes it faster (at least from the documentation). 
My old, naive thought was that it was something like a linked 
list of fixed-size arrays, so that appends didn't have to move 
existing elements until you were done appending, at which point 
it would bake it into a regular dynamic array, moving each 
element only once. Looking at the code, it appeared to be 
nothing like that (a std::deque with a copy into a vector, in 
C++ terms).


Skimming the code it appears to be more focused on the much 
more basic "~= always reallocates" performance problem. It 
seems it boils down to doing essentially this (someone feel 
free to correct me) in the form of an output range:


auto a = /* some array */;
auto b = a;
a = a.array();
for(...)
  b.assumeSafeAppend() ~= /* element */;


It was my understanding that that was essentially what it did, 
but I've never really studied its code, and I don't know if there 
are any other tricks that it's able to pull. It may be that it 
really doesn't do anything more than be a wrapper which handles 
assumeSafeAppend for you correctly. But if that's the case, I 
wouldn't expect operating on arrays directly to be any faster. 
Maybe it would be _slightly_ faster, because there are no wrapper 
functions to inline away, but it wouldn't be much faster, and it 
would ensure that you used assumeSafeAppend correctly. If it's 
really as much slower as Philippe is finding, then I have no 
idea what's going on. Certainly, it merits a bug report and 
further investigation.


(assumeSafeAppend's documentation doesn't say whether or not 
it'll reallocate when capacity is exhausted; I assume it does).


All assumeSafeAppend does is tell the runtime that this 
particular array is the array the farthest into the memory block 
(i.e. that any of the slices which referred to farther in the 
memory block don't exist anymore). So, the value in the runtime 
that keeps track of the farthest point into the memory block 
which has been referred to by an array is then set to the end of 
the array that you passed to assumeSafeAppend. And because it's 
now the last array in that block, it's safe to append to it 
without reallocating. But the appending then works the same way 
that it always does, and it'll reallocate when there's no more 
capacity. The whole idea is to just make it so that the runtime 
doesn't think that the memory after that array is unavailable for 
that array to expand into.
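
A minimal, self-contained illustration of that shrink-then-append 
case (my reading of the semantics; worth verifying):

void main()
{
    import std.stdio;

    auto arr = [1, 2, 3, 4];
    arr = arr[0 .. 2];      // shrink: this slice no longer ends at the
    writeln(arr.capacity);  // block's used end, so capacity is 0
    arr.assumeSafeAppend(); // promise: no other slice uses the tail
    writeln(arr.capacity);  // non-zero again: the block can be reused
    arr ~= 5;               // appends in place, no reallocation
}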


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Dicebot via Digitalmars-d-learn

On Thursday, 14 August 2014 at 20:50:37 UTC, Dicebot wrote:
Oh crap, I had std.array.Array in mind, which does have `length` 
exposed. And Appender seems to indeed clear / shrink data in a 
horrible way:


https://github.com/D-Programming-Language/phobos/blob/master/std/array.d#L2597
https://github.com/D-Programming-Language/phobos/blob/master/std/array.d#L2611

Wow, this thing is real garbage!


OK, looks like Appender is written to be fully GC-compatible and 
usable with immutable data, which kind of explains such an 
implementation. But that makes it inherently slow compared to 
plain `Array`, which owns its memory and gets it via malloc.


Re: Appender is ... slow

2014-08-14 Thread Dicebot via Digitalmars-d-learn
On Thursday, 14 August 2014 at 20:42:08 UTC, Philippe Sigaud via 
Digitalmars-d-learn wrote:

You mean by using the shrinkTo method? (Appender does not have a
length, that's a bit of a bother btw).
I just did, it does not change anything. Too bad.

Heck, my code is simpler to read and use *and* faster by using a
bog-standard D array. I'll keep my array for now :)


Oh crap, I had std.array.Array in mind, which does have `length` 
exposed. And Appender seems to indeed clear / shrink data in a 
horrible way:


https://github.com/D-Programming-Language/phobos/blob/master/std/array.d#L2597
https://github.com/D-Programming-Language/phobos/blob/master/std/array.d#L2611

Wow, this thing is real garbage!


Re: More Generic Variants of findSplit.*() in Demangling/Parsing

2014-08-14 Thread Nordlöw

On Thursday, 14 August 2014 at 20:28:38 UTC, Nordlöw wrote:

On Thursday, 14 August 2014 at 20:25:20 UTC, Nordlöw wrote:

Destroy!


Correction: These algorithms require ForwardRanges.


Oops, I just realized that we can't express the current Phobos 
implementations using my variant. The current algorithms take 
ranges, not element values, as a needle. My mistake. Are my 
overloads still wanted?


Re: Appender is ... slow

2014-08-14 Thread Philippe Sigaud via Digitalmars-d-learn
> Thanks! Repeating what I have mentioned during the DConf talk - have you ever
> considered proposing Pegged for Phobos inclusion? It feels like an important
> bit of infrastructure to me.

At the time, it was considered (rightfully) far too slow and
memory-hogging. I think having a generic lexer and a standard D parser
in Phobos would already be a great step forward.


Re: Appender is ... slow

2014-08-14 Thread Philippe Sigaud via Digitalmars-d-learn
>> There is a misunderstanding there: I'm using clear only to flush the
>> state at the beginning of the computation. The Appender is a class
>> field, used by the class methods to calculate. If I do not clear it at
>> the beginning of the methods, I keep appending new results to old
>> computations, which is not what I want. But really, calling clear is a
>> minor point: I'm interested in Appender's effect on *one* (long,
>
>
> This is exactly what I propose to change - set `length` to 0 instead of
> calling `clear`. That way you will keep using the same memory chunk, simply
> abandoning its old state at the beginning of each computation.

You mean by using the shrinkTo method? (Appender does not have a
length, that's a bit of a bother btw).
I just did, it does not change anything. Too bad.

Heck, my code is simpler to read and use *and* faster by using a
bog-standard D array. I'll keep my array for now :)


More Generic Variants of findSplit.*() in Demangling/Parsing

2014-08-14 Thread Nordlöw
I'm currently working on implementing demangling of ELF C++ 
symbols according to


https://en.wikipedia.org/wiki/Name_mangling

A recurring high-level pattern is to use the 
std.algorithm findSplit.* functions to make algorithms 
single-pass when possible.


I'm however lacking a variant of these that takes a condition 
template argument instead of an input range element as a 
function parameter.


Something like

auto findSplitBefore(alias condition, R)(R range)
    if (isInputRange!R)
{
    import std.algorithm: until;
    auto hit = range.until!condition;
    auto rest = "";
    return tuple(hit, rest);
}

unittest
{
    import std.algorithm: equal;
    import std.ascii: isDigit;
    auto x = "11ab".findSplitBefore!(a => !a.isDigit);
    assert(equal(x[0], "11"));
    /* assert(equal(x[1], "ab")); */
}

But how do I get the second part in a single pass?

Is copying (duplicating) the existing Phobos implementations the 
only alternative here?


If so, I believe we should generalize these Phobos algorithms into 
the variants described above and implement the old ones by 
calling the generalized ones, typically as

auto findSplit(R, E)(R range, E element) if (isInputRange!R)
{
    return range.findSplit!(a => a == element);
}

BTW: Does anybody have any good refs on ELF C++ demangling? The 
Wikipedia article is a bit sparse.


Destroy!


Re: More Generic Variants of findSplit.*() in Demangling/Parsing

2014-08-14 Thread Nordlöw

On Thursday, 14 August 2014 at 20:25:20 UTC, Nordlöw wrote:

Destroy!


Correction: These algorithms require ForwardRanges.


Re: Appender is ... slow

2014-08-14 Thread Dicebot via Digitalmars-d-learn
On Thursday, 14 August 2014 at 19:29:28 UTC, Philippe Sigaud via 
Digitalmars-d-learn wrote:
There is a misunderstanding there: I'm using clear only to flush 
the state at the beginning of the computation. The Appender is a 
class field, used by the class methods to calculate. If I do not 
clear it at the beginning of the methods, I keep appending new 
results to old computations, which is not what I want. But 
really, calling clear is a minor point: I'm interested in 
Appender's effect on *one* (long,


This is exactly what I propose to change - set `length` to 0 
instead of calling `clear`. That way you will keep using the same 
memory chunk, simply abandoning its old state at the beginning of 
each computation.


Re: Appender is ... slow

2014-08-14 Thread Dicebot via Digitalmars-d-learn
On Thursday, 14 August 2014 at 18:55:55 UTC, Philippe Sigaud via 
Digitalmars-d-learn wrote:
btw, I saw your Dconf talk yesterday, nice content! And thanks 
for

talking about Pegged!
It might interest you to know that the code I'm trying to use 
Appender
on is a new engine for Pegged, based on GLL parsing, that 
should be

able to produce a parser for any grammar, even ambiguous ones.


Thanks! Repeating what I have mentioned during the DConf talk - 
have you ever considered proposing Pegged for Phobos inclusion? 
It feels like an important bit of infrastructure to me.


Re: Appender is ... slow

2014-08-14 Thread Brad Anderson via Digitalmars-d-learn
On Thursday, 14 August 2014 at 19:10:18 UTC, Jonathan M Davis 
wrote:
I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage reusing 
the array, and I have no idea how it works for that. But then 
again, I've never actually benchmarked it for just creating 
arrays via appending. I'd just assumed that it was faster than 
just using ~=, because that's what it's supposedly for. But 
maybe I just completely misunderstood what the point of 
Appender was.


- Jonathan M Davis


I too have trouble understanding what Appender does that 
supposedly makes it faster (at least from the documentation). My 
old, naive thought was that it was something like a linked list 
of fixed-size arrays, so that appends didn't have to move existing 
elements until you were done appending, at which point it would 
bake it into a regular dynamic array, moving each element only 
once. Looking at the code, it appeared to be nothing like that (a 
std::deque with a copy into a vector, in C++ terms).


Skimming the code it appears to be more focused on the much more 
basic "~= always reallocates" performance problem. It seems it 
boils down to doing essentially this (someone feel free to 
correct me) in the form of an output range:


auto a = /* some array */;
auto b = a;
a = a.array();
for(...)
  b.assumeSafeAppend() ~= /* element */;


(assumeSafeAppend's documentation doesn't say whether or not 
it'll reallocate when capacity is exhausted; I assume it does).


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 19:29:28 UTC, Philippe Sigaud via 
Digitalmars-d-learn wrote:
It sounds like you're trying to use it as a way to manage reusing 
the array, and I have no idea how it works for that.


There is a misunderstanding there: I'm using clear only to flush 
the state at the beginning of the computation. The Appender is a 
class field, used by the class methods to calculate. If I do not 
clear it at the beginning of the methods, I keep appending new 
results to old computations, which is not what I want. But 
really, calling clear is a minor point: I'm interested in 
Appender's effect on *one* (long, concatenation-intensive) 
computation.


Then it sounds like you're reusing the Appender. I've never done 
that. In fact, I would have assumed that that would mean that you 
were attempting to fill in the same array again, and I wouldn't 
have even thought that that was safe, because I would have 
assumed that Appender used assumeSafeAppend, which would mean that 
reusing the Appender would be highly unsafe unless you weren't 
using the array that you got from it anymore.


I always use Appender to construct an array, and then I get rid 
of the Appender. I don't think that I've ever had a member 
variable which was an Appender. I only ever use it for local 
variables or function arguments.



I've never actually benchmarked it for just creating arrays via 
appending. I'd just assumed that it was faster than just using 
~=, because that's what it's supposedly for. But maybe I just 
completely misunderstood what the point of Appender was.


I don't know. People here keep telling newcomers to use it, but 
I'm not convinced by its results. Maybe I'm seeing worse results 
because my arrays do not have millions of elements and Appender 
shines for long arrays?


I have no idea. It was my understanding that it was faster to 
create an array via appending using Appender than ~=, but I've 
never benchmarked it or actually looked into how it works. It's 
quite possible that while it's _supposed_ to be faster, it's 
actually flawed somehow and is actually slower, and no one has 
noticed previously, simply assuming that it was faster because 
it's supposed to be.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Philippe Sigaud via Digitalmars-d-learn
> I've never really tried to benchmark it, but it was my understanding that
> the idea behind Appender was to use it to create the array when you do that
> via a lot of appending, and then you use it as a normal array and stop using
> Appender.

That's how I use it, yes.

> It sounds like you're trying to use it as a way to manage reusing
> the array, and I have no idea how it works for that.

There is a misunderstanding there: I'm using clear only to flush the
state at the beginning of the computation. The Appender is a class
field, used by the class methods to calculate. If I do not clear it at
the beginning of the methods, I keep appending new results to old
computations, which is not what I want. But really, calling clear is a
minor point: I'm interested in Appender's effect on *one* (long,
concatenation-intensive) computation.

> I've
> never actually benchmarked it for just creating arrays via appending. I'd
> just assumed that it was faster than just using ~=, because that's what it's
> supposedly for. But maybe I just completely misunderstood what the point of
> Appender was.

I don't know. People here keep telling newcomers to use it, but I'm
not convinced by its results. Maybe I'm seeing worse results because
my arrays do not have millions of elements and Appender shines for
long arrays?


Re: core.thread.Fiber --- runtime stack overflow unlike goroutines

2014-08-14 Thread Brad Anderson via Digitalmars-d-learn
On Thursday, 14 August 2014 at 07:46:29 UTC, Carl Sturtivant 
wrote:


The default size of the runtime stack for a Fiber is 4*PAGESIZE 
which is very small, and a quick test shows that a Fiber 
suffers a stack overflow that doesn't lead to a clean 
termination when this limit is exceeded.


This makes it difficult to simulate deterministic alternation 
where the stack size needed is unpredictable because complex 
deterministic computations are going on inside Fibers.


In contrast, the Go programming language's goroutines can 
extend their stacks as needed at runtime, and so can be used to 
simulate deterministic alternation without this limitation, and 
yet be initially executed with each having only a small stack 
size.


There seems to be a claim that all that's needed to add 
D-routines (goroutines for D) is a scheduler and a Channel 
type, on top of Fiber.

http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com
See the initial post, point 7., as well as supporting remarks 
in later replies.


Am I missing something? Is there a clean and simple way to get 
Fiber to no longer suffer a stack overflow when implementing 
D-routines?


Segmented stacks come with a cost. Rust abandoned them for 
reasons you can read about here:


https://mail.mozilla.org/pipermail/rust-dev/2013-November/006314.html

I believe Go has taken steps to help mitigate stack thrashing but 
I don't know if they have been successful yet.


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 17:16:42 UTC, Philippe Sigaud 
wrote:
From time to time, I try to speed up some array-heavy code by 
using std.array.Appender, reserving some capacity and so on.


It never works. Never. It gives me executables that are maybe 
30-50% slower than bog-standard array code.


I don't do anything fancy: I just replace the types, use 
clear() instead of = null...


Do people here get good results from Appender? And if yes, how 
are you using it?


I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage reusing 
the array, and I have no idea how it works for that. But then 
again, I've never actually benchmarked it for just creating 
arrays via appending. I'd just assumed that it was faster than 
just using ~=, because that's what it's supposedly for. But maybe 
I just completely misunderstood what the point of Appender was.


- Jonathan M Davis


Re: A little of coordination for Rosettacode

2014-08-14 Thread bearophile via Digitalmars-d-learn

safety0ff:

Here's a candidate for 
http://rosettacode.org/wiki/Extensible_prime_generator#D in 
case it is preferred to the existing entry:

http://dpaste.dzfl.pl/43735da3f1d1


I was away. I have added your nice code with some small changes 
as an alternative faster version. I think you have compiled your 
code without -wi, so I have added a "goto default;" inside the 
third switch case of the sieveOne function.


Bye,
bearophile


Re: What hashing algorithm is used for the D implementation of associative arrays?

2014-08-14 Thread Sean Kelly via Digitalmars-d-learn
Superfast. Though Murmur has gotten good enough that I'm tempted 
to switch.  At the time, Murmur didn't even have a license so it 
wasn't an option.


Re: Appender is ... slow

2014-08-14 Thread Philippe Sigaud via Digitalmars-d-learn
> I don't know much about Phobos appender implementation details but the key
> thing with reusable buffers is to avoid freeing them. AFAIR Appender.clear frees
> the allocated memory but `Appender.length = 0` does not, making it possible
> to just overwrite stuff again and again.

I call .clear() only at the beginning of the computation, to avoid any
strange effects with std.datetime.benchmark (using benchmark with
memoizing functions can lead to strange results if one does not take
care to flush any result between two calls).
After that initial call to clear, I just append.

The thing is, it's not the first time it happens. For years, I tried
to use Appender to get faster code, to no avail.


btw, I saw your Dconf talk yesterday, nice content! And thanks for
talking about Pegged!
It might interest you to know that the code I'm trying to use Appender
on is a new engine for Pegged, based on GLL parsing, that should be
able to produce a parser for any grammar, even ambiguous ones.


Re: core.thread.Fiber --- runtime stack overflow unlike goroutines

2014-08-14 Thread Sean Kelly via Digitalmars-d-learn
On 64 bit, reserve a huge chunk of memory, set a SEGV handler and 
commit more as needed. Basically how kernel thread stacks work. 
I've been meaning to do this but haven't gotten around to it yet.


Re: Appender is ... slow

2014-08-14 Thread Dicebot via Digitalmars-d-learn
On Thursday, 14 August 2014 at 17:16:42 UTC, Philippe Sigaud 
wrote:
From time to time, I try to speed up some array-heavy code by 
using std.array.Appender, reserving some capacity and so on.


It never works. Never. It gives me executables that are maybe 
30-50% slower than bog-standard array code.


I don't do anything fancy: I just replace the types, use 
clear() instead of = null...


Do people here get good results from Appender? And if yes, how 
are you using it?


I don't know much about Phobos appender implementation details, 
but the key thing with reusable buffers is to avoid freeing them. 
AFAIR Appender.clear frees the allocated memory but 
`Appender.length = 0` does not, making it possible to just 
overwrite stuff again and again.


Won't promise you anything though!


Re: String Prefix Predicate

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 14 August 2014 at 17:41:08 UTC, Nordlöw wrote:

On Thursday, 14 August 2014 at 17:33:41 UTC, Justin Whear wrote:

std.algorithm.startsWith?  Should auto-decode, so it'll do a


What about 
https://github.com/D-Programming-Language/phobos/pull/2043


Auto-decoding should be avoided when possible.

I guess something like

whole.byDchar().startsWith(part.byDchar())

is preferred, right?

If so, is this what we will live with until Phobos has been 
upgraded to use pull 2043 in a few years?


Except that you _have_ to decode in this case. Unless the string 
types match, there's no way around it. And startsWith won't 
decode if the string types match. So, I really see no issue in 
just straight-up using startsWith.


Where you run into problems with auto-decoding in Phobos 
functions is when a function results in a new range type. That 
forces you into a range of dchar, whether you wanted it or not. 
But beyond that, Phobos is actually pretty good about avoiding 
unnecessary decoding (though there probably are places where it 
could be improved). The big problem is that that requires 
special-casing a lot of functions, whereas that wouldn't be 
required with a range of char or wchar.


So, the biggest problems with automatic decoding are when a 
function returns a range of dchar when you wanted to operate on 
code units or when you write a function and then have to special 
case it for strings if you want to avoid the auto-decoding, 
whereas that's already been done for you with most Phobos 
functions.
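
A small example of that "new range type" case: wrapping a string 
in another range forces dchar elements, as a quick check shows.

void main()
{
    import std.algorithm : filter;
    import std.range : ElementType;

    auto r = "hello".filter!(c => c != 'l'); // a new range type over a string
    static assert(is(ElementType!(typeof(r)) == dchar)); // forced into dchar
}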


- Jonathan M Davis


Re: String Prefix Predicate

2014-08-14 Thread Nordlöw

On Thursday, 14 August 2014 at 17:33:41 UTC, Justin Whear wrote:

std.algorithm.startsWith?  Should auto-decode, so it'll do a


What about 
https://github.com/D-Programming-Language/phobos/pull/2043


Auto-decoding should be avoided when possible.

I guess something like

whole.byDchar().startsWith(part.byDchar())

is preferred, right?

If so, is this what we will live with until Phobos has been 
upgraded to use pull 2043 in a few years?


Re: String Prefix Predicate

2014-08-14 Thread Justin Whear via Digitalmars-d-learn
On Thu, 14 Aug 2014 17:17:11 +, Nordlöw wrote:

> What's the preferred way to check if a string starts with another
> string if the string is a
> 
> 1. string (utf-8) BiDir 2. wstring (utf-16) BiDir 3. dstring (utf-32)
> Random

std.algorithm.startsWith?  Should auto-decode, so it'll do a utf-32 
comparison behind the scenes.
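
A quick usage sketch (plain startsWith, no explicit decoding needed):

void main()
{
    import std.algorithm : startsWith;

    assert("hello world".startsWith("hello")); // string (utf-8)
    assert("hello"w.startsWith("he"w));        // wstring (utf-16)
    assert("hello"d.startsWith("he"d));        // dstring (utf-32)

    // Mixed widths work too; the comparison decodes behind the scenes:
    assert("hello".startsWith("he"w));
}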


Re: String Prefix Predicate

2014-08-14 Thread Nordlöw

On Thursday, 14 August 2014 at 17:17:13 UTC, Nordlöw wrote:
What's the preferred way to check if a string starts with 
another string if the string is a


Should I use std.algorithm.startsWith() in all cases?


String Prefix Predicate

2014-08-14 Thread Nordlöw
What's the preferred way to check if a string starts with 
another string if the string is a


1. string (utf-8) BiDir
2. wstring (utf-16) BiDir
3. dstring (utf-32) Random


Appender is ... slow

2014-08-14 Thread Philippe Sigaud via Digitalmars-d-learn
From time to time, I try to speed up some array-heavy code by 
using std.array.Appender, reserving some capacity and so on.


It never works. Never. It gives me executables that are maybe 
30-50% slower than bog-standard array code.


I don't do anything fancy: I just replace the types, use clear() 
instead of = null...


Do people here get good results from Appender? And if yes, how 
are you using it?


Re: What hashing algorithm is used for the D implementation of associative arrays?

2014-08-14 Thread H. S. Teoh via Digitalmars-d-learn
On Thu, Aug 14, 2014 at 04:32:24PM +, via Digitalmars-d-learn wrote:
> On Thursday, 14 August 2014 at 13:10:58 UTC, bearophile wrote:
> >Marc Schütz:
> >
> >>Isn't SuperFastHash vulnerable to collision attacks?
> >
> >D AAs used to be not vulnerable to collision attacks because they
> >resolved collisions building a red-black tree for each bucket. Later
> >buckets became linked lists for speed, leading to the current
> >sensitivity to collision attacks. I think D is not yet in the stage
> >of its development where it starts to care a lot about attacks.
> 
> IMO this is a _very_ dangerous stance. These kinds of attacks became
> well known in 2011, when it turned out that almost all of the commonly
> used languages and web frameworks were vulnerable:
> http://events.ccc.de/congress/2011/Fahrplan/events/4680.en.html
> 
> It would be nice if D behaved correctly before any actual attack
> becomes known.
> 
> Besides, AAs are probably already exposed to outside attackers in
> vibe.d (didn't check though).
[...]

PR's to fix this issue would be greatly appreciated!


T

-- 
Nobody is perfect.  I am Nobody. -- pepoluan, GKC forum


Re: What hashing algorithm is used for the D implementation of associative arrays?

2014-08-14 Thread via Digitalmars-d-learn

On Thursday, 14 August 2014 at 13:10:58 UTC, bearophile wrote:

Marc Schütz:


Isn't SuperFastHash vulnerable to collision attacks?


D AAs used to be not vulnerable to collision attacks because 
they resolved collisions building a red-black tree for each 
bucket. Later buckets became linked lists for speed, leading to 
the current sensitivity to collision attacks. I think D is not 
yet in the stage of its development where it starts to care a 
lot about attacks.


IMO this is a _very_ dangerous stance. These kinds of attacks 
became well known in 2011, when it turned out that almost all of 
the commonly used languages and web frameworks were vulnerable:

http://events.ccc.de/congress/2011/Fahrplan/events/4680.en.html

It would be nice if D behaved correctly before any actual attack 
becomes known.


Besides, AAs are probably already exposed to outside attackers in 
vibe.d (didn't check though).


Currently D programs are able to "attack themselves" just fine 
:-) But as usual patches are (slowly) welcome.


Re: Very vague compiler error message

2014-08-14 Thread via Digitalmars-d-learn

On Thursday, 14 August 2014 at 13:28:03 UTC, bearophile wrote:

Théo Bueno:


Same issue here with dsfml-audio, this is really annoying :/


Have you filed the issue?

Bye,
bearophile


Jebbs filed an issue on his bugtracker:
https://github.com/Jebbs/DSFML/issues/125
... and he seems to have a fix, or at least a clue.

Personally, I was not able to figure out where the bug is.


Re: Very vague compiler error message

2014-08-14 Thread bearophile via Digitalmars-d-learn

Théo Bueno:


Same issue here with dsfml-audio, this is really annoying :/


Have you filed the issue?

Bye,
bearophile


Re: What hashing algorithm is used for the D implementation of associative arrays?

2014-08-14 Thread bearophile via Digitalmars-d-learn

Marc Schütz:


Isn't SuperFastHash vulnerable to collision attacks?


D AAs used to be not vulnerable to collision attacks because they 
resolved collisions building a red-black tree for each bucket. 
Later buckets became linked lists for speed, leading to the 
current sensitivity to collision attacks. I think D is not yet in 
the stage of its development where it starts to care a lot about 
attacks. Currently D programs are able to "attack themselves" 
just fine :-) But as usual patches are (slowly) welcome.


Bye,
bearophile


Re: Max/Min values in an associative array

2014-08-14 Thread bearophile via Digitalmars-d-learn

TJB:

I am trying to find the max and min values in an associative 
array. Say I have:


double[char] bids;
bids['A'] = 37.50;
bids['B'] = 38.11;
bids['C'] = 36.12;

How can I find the max and min values? I am thinking that I 
need to use the max and min functions from std.algorithm, but 
I'm not sure how.


void main() {
    import std.stdio, std.algorithm;

    double[char] bids = ['A': 37.50,
                         'B': 38.11,
                         'C': 36.12];

    bids.byValue.reduce!(min, max).writeln;
}
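
For clarity: reduce instantiated with two functions yields a 
Tuple, so the result can also be unpacked explicitly:

void main() {
    import std.stdio, std.algorithm;

    double[char] bids = ['A': 37.50, 'B': 38.11, 'C': 36.12];

    auto r = bids.byValue.reduce!(min, max); // Tuple!(double, double)
    writefln("min = %s, max = %s", r[0], r[1]);
}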

Bye,
bearophile


Re: Capture parameter identifier name in a template?

2014-08-14 Thread Rémy Mouëza via Digitalmars-d-learn
I have just checked it and yes, it works with a constant that is not an 
enum: `const int FOO` defined in the module namespace or `static int 
BAR` defined in the dummy Vm class.


On 08/14/2014 02:08 PM, Maxime Chevalier-Boisvert wrote:
> Thanks. Does it also work with a constant that's not an enum, e.g.:
> const int FOO?
>
> On 08/14/2014 01:23 PM, Rémy Mouëza wrote:
>> Using __traits (identifier, ...) and a template alias seems to work
>> for me:
>>
>> import std.stdio;
>>
>> /// Two kinds of enums:
>>
>> /// A named enum.
>> enum VmParams {
>>  OBJ_MIN_CAP,
>>  PROTO_SLOT_IDX,
>>  FPTR_SLOT_IDX,
>> }
>>
>> /// An anonymous one.
>> enum {
>>  ATTR_CONFIGURABLE = 3,
>>  ATTR_WRITABLE,
>>  ATTR_ENUMERABLE,
>>  ATTR_DELETED,
>>  ATTR_GETSET,
>>  ATTR_DEFAULT
>> }
>>
>> /// A dummy Vm class for example purpose.
>> class Vm {
>>  /// Stores values.
>>  int [string] table;
>>
>>  /// The "classic" runtime const API.
>>  void defRTConst (string id, int val) {
>>  table [id] = val;
>>  }
>>
>>  /// Using an alias with the identifier trait.
>>  void rtConst (alias p) () {
>>  table [__traits (identifier, p)] = p;
>>  }
>>
>>  /// Initializes our .table member.
>>  this () {
>>  /// Using the allMembers traits we can process all the members of a
>>  /// named enum.
>>  foreach (member; __traits (allMembers, VmParams)) {
>>  int value = mixin ("VmParams." ~ member);
>>  this.defRTConst (member, value);
>>  }
>>
>>  /// Without duplicating the name and its value.
>>  rtConst!ATTR_CONFIGURABLE;
>>  rtConst!ATTR_WRITABLE;
>>  rtConst!ATTR_ENUMERABLE;
>>  rtConst!ATTR_DELETED;
>>  rtConst!ATTR_GETSET;
>>  rtConst!ATTR_DEFAULT;
>>
>>  /* rtConst won't work with local variables:
>>  //  auto foo = ATTR_DEFAULT;
>>  //  rtConst!foo;
>>  The code above raises a compiler error:
>>  Error: template instance rtConst!(foo) cannot use local
>> 'foo' as parameter to non-global template rtConst(alias p)()
>>  */
>>  }
>> }
>>
>> void main ()  {
>>  Vm vm = new Vm;
>>  vm.table.writeln;
>>
>>  /// output:
>>  /// ["OBJ_MIN_CAP":0, "ATTR_WRITABLE":4, "ATTR_ENUMERABLE":5,
>> "ATTR_GETSET":7, "PROTO_SLOT_IDX":1, "FPTR_SLOT_IDX":2,
>> "ATTR_CONFIGURABLE":3, "ATTR_DELETED":6, "ATTR_DEFAULT":8]
>> }
>>
>>
>> On 08/12/2014 07:36 PM, Maxime Chevalier-Boisvert wrote:
>>> In my JavaScript VM, I have a function whose purpose is to expose
>>> D/host constants to the JavaScript runtime code running inside the VM. This
>>> makes for somewhat redundant code, as follows:
>>>
>>> vm.defRTConst("OBJ_MIN_CAP"w, OBJ_MIN_CAP);
>>> vm.defRTConst("PROTO_SLOT_IDX"w, PROTO_SLOT_IDX);
>>> vm.defRTConst("FPTR_SLOT_IDX"w, FPTR_SLOT_IDX);
>>> vm.defRTConst("ATTR_CONFIGURABLE"w  , ATTR_CONFIGURABLE);
>>> vm.defRTConst("ATTR_WRITABLE"w  , ATTR_WRITABLE);
>>> vm.defRTConst("ATTR_ENUMERABLE"w, ATTR_ENUMERABLE);
>>> vm.defRTConst("ATTR_DELETED"w   , ATTR_DELETED);
>>> vm.defRTConst("ATTR_GETSET"w, ATTR_GETSET);
>>> vm.defRTConst("ATTR_DEFAULT"w   , ATTR_DEFAULT);
>>>
>>> I'm just wondering if there's a way to template defRTConst so that the
>>> name of an identifier I'm passing (e.g.: ATTR_DEFAULT) can be captured
>>> by the template, making it so that I don't also need to pass the
>>> name as a string. I expect the answer to be no, but maybe someone with more
>>> knowledge of D template magic knows better.
>>


Re: Very vague compiler error message

2014-08-14 Thread via Digitalmars-d-learn

On Tuesday, 12 August 2014 at 07:16:50 UTC, Jeremy DeHaan wrote:

I recently got this error messege when building my library:

dmd: cppmangle.c:154: void 
CppMangleVisitor::cpp_mangle_name(Dsymbol*): Assertion `0' 
failed.


I have no idea what it means and haven't found much information 
about it. I think I have narrowed it down to the file that is 
giving this error, but does anyone know what the heck causes 
something like this?  Talk about a nondescript error.


Same issue here with dsfml-audio, this is really annoying :/


Re: Linked list as a bidirectional range? I have some questions...

2014-08-14 Thread via Digitalmars-d-learn
On Wednesday, 13 August 2014 at 19:30:53 UTC, H. S. Teoh via 
Digitalmars-d-learn wrote:
On Wed, Aug 13, 2014 at 07:23:30PM +, via 
Digitalmars-d-learn wrote:

On Wednesday, 13 August 2014 at 18:58:59 UTC, H. S. Teoh via
Digitalmars-d-learn wrote:
>On Wed, Aug 13, 2014 at 06:31:32PM +, Gary Willoughby via
>Digitalmars-d-learn wrote:

[...]
>>I've used your advice and implemented a range over the list 
>>as

>>suggested, the problem being i cannot get it to pass the
>>isForwardRange check.
>>
>>Code: http://dpaste.dzfl.pl/cad89406bbcc#line-220
>>
>>You'll notice the assert on line 220 fails. Any idea what 
>>i'm doing

>>wrong?
>
>You need to put @property on .save.

Hmm...

struct Result {
    public @property auto save1() {
        return this;
    }
    public auto save2() {
        return this;
    }
}

pragma(msg, typeof(Result.init.save1));
pragma(msg, typeof(Result.init.save2));

This outputs:

Result
Result()

`Result()` looks bogus.


File a bug. :-)

https://issues.dlang.org/enter_bug.cgi


https://issues.dlang.org/show_bug.cgi?id=13293

And I see that Vladimir already reported a bug about the original 
problem:

https://issues.dlang.org/show_bug.cgi?id=11761


Re: Capture parameter identifier name in a template?

2014-08-14 Thread Rémy Mouëza via Digitalmars-d-learn

Using __traits (identifier, ...) and a template alias seems to work for me:

import std.stdio;

/// Two kinds of enums:

/// A named enum.
enum VmParams {
OBJ_MIN_CAP,
PROTO_SLOT_IDX,
FPTR_SLOT_IDX,
}

/// An anonymous one.
enum {
ATTR_CONFIGURABLE = 3,
ATTR_WRITABLE,
ATTR_ENUMERABLE,
ATTR_DELETED,
ATTR_GETSET,
ATTR_DEFAULT
}

/// A dummy Vm class for example purpose.
class Vm {
/// Stores values.
int [string] table;

/// The "classic" runtime const API.
void defRTConst (string id, int val) {
table [id] = val;
}

/// Using an alias with the identifier trait.
void rtConst (alias p) () {
table [__traits (identifier, p)] = p;
}

/// Initializes our .table member.
this () {
/// Using the allMembers traits we can process all the members of a
/// named enum.
foreach (member; __traits (allMembers, VmParams)) {
int value = mixin ("VmParams." ~ member);
this.defRTConst (member, value);
}

/// Without duplicating the name and its value.
rtConst!ATTR_CONFIGURABLE;
rtConst!ATTR_WRITABLE;
rtConst!ATTR_ENUMERABLE;
rtConst!ATTR_DELETED;
rtConst!ATTR_GETSET;
rtConst!ATTR_DEFAULT;

/* rtConst won't work with local variables:
//  auto foo = ATTR_DEFAULT;
//  rtConst!foo;
The code above raises a compiler error:
Error: template instance rtConst!(foo) cannot use local 'foo'
as parameter to non-global template rtConst(alias p)()
*/
}
}

void main ()  {
Vm vm = new Vm;
vm.table.writeln;

/// output:
/// ["OBJ_MIN_CAP":0, "ATTR_WRITABLE":4, "ATTR_ENUMERABLE":5, 
"ATTR_GETSET":7, "PROTO_SLOT_IDX":1, "FPTR_SLOT_IDX":2, 
"ATTR_CONFIGURABLE":3, "ATTR_DELETED":6, "ATTR_DEFAULT":8]

}


On 08/12/2014 07:36 PM, Maxime Chevalier-Boisvert wrote:

In my JavaScript VM, I have a function whose purpose is to expose D/host
constants to the JavaScript runtime code running inside the VM. This
makes for somewhat redundant code, as follows:

vm.defRTConst("OBJ_MIN_CAP"w, OBJ_MIN_CAP);
vm.defRTConst("PROTO_SLOT_IDX"w, PROTO_SLOT_IDX);
vm.defRTConst("FPTR_SLOT_IDX"w, FPTR_SLOT_IDX);
vm.defRTConst("ATTR_CONFIGURABLE"w  , ATTR_CONFIGURABLE);
vm.defRTConst("ATTR_WRITABLE"w  , ATTR_WRITABLE);
vm.defRTConst("ATTR_ENUMERABLE"w, ATTR_ENUMERABLE);
vm.defRTConst("ATTR_DELETED"w   , ATTR_DELETED);
vm.defRTConst("ATTR_GETSET"w, ATTR_GETSET);
vm.defRTConst("ATTR_DEFAULT"w   , ATTR_DEFAULT);

I'm just wondering if there's a way to template defRTConst so that the
name of an identifier I'm passing (e.g.: ATTR_DEFAULT) can be captured
by the template, making it so that I don't also need to pass the name as
a string. I expect the answer to be no, but maybe someone with more
knowledge of D template magic knows better.




Re: core.thread.Fiber --- runtime stack overflow unlike goroutines

2014-08-14 Thread Yuriy via Digitalmars-d-learn
An infinite stack comes with a cost: every function call first 
checks the state of the stack. This would have to be done with 
the help of the compiler, and such a "tradeoff" could scare 
bare-metal programmers away from D. Maybe there's a chance of 
making this stack check controllable, but not in any way I can 
think of.


On Thursday, 14 August 2014 at 07:46:29 UTC, Carl Sturtivant 
wrote:


The default size of the runtime stack for a Fiber is 4*PAGESIZE 
which is very small, and a quick test shows that a Fiber 
suffers a stack overflow that doesn't lead to a clean 
termination when this limit is exceeded.


This makes it difficult to simulate deterministic alternation 
where the stack size needed is unpredictable because complex 
deterministic computations are going on inside Fibers.


In contrast, the Go programming language's goroutines can 
extend their stacks as needed at runtime, and so can be used to 
simulate deterministic alternation without this limitation, and 
yet be initially executed with each having only a small stack 
size.


There seems to be a claim that all that's needed to add 
D-routines (goroutines for D) is a scheduler and a Channel 
type, on top of Fiber.

http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com
See the initial post, point 7., as well as supporting remarks 
in later replies.


Am I missing something? Is there a clean and simple way to get 
Fiber to no longer suffer a stack overflow when implementing 
D-routines?




core.thread.Fiber --- runtime stack overflow unlike goroutines

2014-08-14 Thread Carl Sturtivant via Digitalmars-d-learn


The default size of the runtime stack for a Fiber is 4*PAGESIZE 
which is very small, and a quick test shows that a Fiber suffers 
a stack overflow that doesn't lead to a clean termination when 
this limit is exceeded.


This makes it difficult to simulate deterministic alternation 
where the stack size needed is unpredictable because complex 
deterministic computations are going on inside Fibers.


In contrast, the Go programming language's goroutines can extend 
their stacks as needed at runtime, and so can be used to simulate 
deterministic alternation without this limitation, and yet be 
initially executed with each having only a small stack size.


There seems to be a claim that all that's needed to add 
D-routines (goroutines for D) is a scheduler and a Channel type, 
on top of Fiber.

http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com
See the initial post, point 7., as well as supporting remarks in 
later replies.


Am I missing something? Is there a clean and simple way to get 
Fiber to no longer suffer a stack overflow when implementing 
D-routines?


Re: drop* and take* only for specific element values

2014-08-14 Thread Nordlöw
On Thursday, 14 August 2014 at 00:56:47 UTC, Jonathan M Davis 
wrote:

You forgot the !, making the predicate a function argument. It


Great!

My solution:

auto dropWhile(R, E)(R range, E element)
    if (isInputRange!R &&
        is(ElementType!R == E))
{
    import std.algorithm: find;
    return range.find!(a => a != element);
}

unittest
{
    assert([1, 2, 3].dropWhile(1) == [2, 3]);
    assert([1, 1, 1, 2, 3].dropWhile(1) == [2, 3]);
    assert([1, 2, 3].dropWhile(2) == [1, 2, 3]);
    assert("abc".dropWhile(cast(dchar)'a') == "bc");
}