Re: Advice requested for fixing issue 17914

2017-10-24 Thread safety0ff via Digitalmars-d

On Wednesday, 25 October 2017 at 01:26:10 UTC, Brian Schott wrote:


I've been reading the Fiber code and (so far) that seems 
to be reasonable. Can anybody think of a reason that this would 
be a bad idea? I'd rather not create a pull request for a 
design that's not going to work because of a detail I've 
overlooked.


Just skimming the Fiber code I found the reset(...) API functions 
whose purpose is to re-use Fibers once they've terminated.


Eager stack deallocation would have to coexist with the Fiber 
reuse API.


Perhaps the Fiber reuse API could simply be polished & made easy 
to integrate so that your original use case no longer hits system 
limits.


I.e. Perhaps an optional delegate could be called upon 
termination, making it easier to hook in Fiber recycling.


The reason my thoughts head in that direction is that I've read 
that mmap/munmap'ing frequently isn't recommended in performance-
conscious programs.


Re: Antipattern in core.memory.GC.addRange?

2017-09-22 Thread safety0ff via Digitalmars-d
On Friday, 22 September 2017 at 21:29:10 UTC, Steven 
Schveighoffer wrote:

GC.addRange has this signature:

static nothrow @nogc void addRange(in void* p, size_t sz, const 
TypeInfo ti = null);


I see a large problem with this. Let's say you malloc an array 
of struct pointers:


struct Foo { ... }

import core.stdc.stdlib;
auto ptrs = (cast(Foo *)malloc(Foo.sizeof * 10))[0 .. 10];

Now, you want to store GC pointers in that block, you need to 
add the range to the GC:


GC.addRange(ptrs.ptr, ptrs.length);

See the problem?


Yes, you forgot to multiply by Foo.sizeof.

Using the pattern from the example in the documentation,
the code would be:

size_t bytes = Foo.sizeof * 10;
auto ptrs = (cast(Foo *)malloc(bytes))[0 .. 10];
GC.addRange(ptrs.ptr, bytes);
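
As an illustration only (not part of core.memory), a small helper 
template can derive the byte count from the typed slice so the 
multiplication can't be forgotten; the name addRangeOf is mine:

void addRangeOf(T)(T[] slice)
{
    import core.memory : GC;
    // registers slice.length * T.sizeof bytes with the GC
    GC.addRange(slice.ptr, slice.length * T.sizeof, typeid(T));
}

// usage with the example above: addRangeOf(ptrs);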



Re: Analysis of D GC

2017-06-22 Thread safety0ff via Digitalmars-d

On Monday, 19 June 2017 at 22:35:42 UTC, Dmitry Olshansky wrote:


http://olshansky.me/gc/runtime/dlang/2017/06/14/inside-d-gc.html

"But the main unanswered question is why? Why an extra pass?"


It's likely to pave over the many pitfalls of D finalizers.

E.g. finalizers corrupting data:
class A { size_t i; }
class B { A a; this(){ a = new A; } ~this() { a.i = 1; } }
// modifying B.a.i is undefined behavior (e.g. it could corrupt the GC's freelist)


E.g. finalizers reading undefined data:
class A { bool check() { return true; } }
class B { A a; this(){ a = new A; } ~this() { a.check(); } }
// B.a's object header is undefined (e.g. replaced with a GC freelist pointer)


There are also invariants, which are prepended to the finalizers, 
so their code is subject to the same issues.


The best thing about the current implementation is that object 
resurrection has never been supported.


Re: Analysis of D GC

2017-06-19 Thread safety0ff via Digitalmars-d

On Monday, 19 June 2017 at 23:39:54 UTC, H. S. Teoh wrote:
On Mon, Jun 19, 2017 at 10:50:05PM +, Adam D. Ruppe via 
Digitalmars-d wrote:
What is it about Windows that makes you call it a distant 
possibility? Is it just that you are unfamiliar with it or is 
there some specific OS level feature you plan on needing?


AFAIK, Windows does not have equivalent functionality to this.



I've read that there is such a function on Windows but you need 
to use undocumented/unofficial API to access it:


e.g. 
https://github.com/opencollab/scilab/blob/master/scilab/modules/parallel/src/c/forkWindows.c


Re: Analysis of D GC

2017-06-19 Thread safety0ff via Digitalmars-d

On Monday, 19 June 2017 at 22:35:42 UTC, Dmitry Olshansky wrote:
My take on D's GC problem, also spoiler - I'm going to build a 
new one soonish.


http://olshansky.me/gc/runtime/dlang/2017/06/14/inside-d-gc.html

---
Dmitry Olshansky


Good overview, however:
the binary search pool lookup is used because it naturally 
supports variable sized pools.
IMHO, simply concluding "A hash table could have saved quite a 
few cycles." glosses over the issue of handling variable sizes.


Re: D Spam filter ridiculous/broke

2017-01-09 Thread safety0ff via Digitalmars-d

On Tuesday, 10 January 2017 at 05:48:10 UTC, Ignacious wrote:
It would be nice, also, if when we click on a link in the forum 
that it takes us to the last page/message(Scrolls down to it) 
rather than forcing us to do this.


You can click on the time of the last post (on the right side) to 
go to the last post. At least on desktop.





Regarding sporadic unit test failures

2017-01-05 Thread safety0ff via Digitalmars-d
While trying to make a few minor contributions during the 
holidays, it seemed like every other PR was hitting a "random" 
unit test failure.


So I solved the few that I noticed [1-3] (the last is still 
waiting review / merge.)


However finding failures that have gone unnoticed or unreported 
is a manual process. So I wrote a simple script that scrapes the 
auto tester website, processes the logs of "interesting" failures, 
groups identical failures and outputs them.



I'm wondering if there is interest in having the auto tester 
detect & track the sporadic failures to make them more prominent 
and easier to find & fix.


I perceive that the attitude towards "random" test failures is 
widespread apathy. Failures seldom get filed in the bug tracker. 
While bad tests are often the cause, sometimes there are 
significant issues:


For example, [1] was a significant implementation flaw in 
std.experimental.allocator that was present for ~1.25 years.


[1] Phobos PRs #4988 & #4993
[2] Phobos PR #4997
[3] Phobos PR #5004


Re: const(Rvalue) resolved to different overloads

2016-12-31 Thread safety0ff via Digitalmars-d

On Thursday, 29 December 2016 at 22:54:35 UTC, Ali Çehreli wrote:


Can you explain that behavior?


What about: http://dlang.org/spec/const3.html#implicit_conversions
"An expression may be converted from mutable or shared to 
immutable if the expression is unique and all expressions it 
transitively refers to are either unique or immutable."
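
A minimal sketch of that rule (my example, not from the thread): 
the result of a strongly pure function is a unique expression, so 
it converts to immutable without a cast:

int[] makeSquares(int n) pure
{
    auto a = new int[](n);
    foreach (i, ref x; a)
        x = cast(int)(i * i);
    return a;          // unique: nothing else refers to it
}

void main()
{
    immutable int[] squares = makeSquares(4); // implicit mutable -> immutable
}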


Re: Beta D 2.072.2-b1

2016-12-27 Thread safety0ff via Digitalmars-d-announce

On Tuesday, 27 December 2016 at 04:36:54 UTC, Martin Nowak wrote:


This version resolves a number of regressions and bugs in the 
2.072.1 release.


I thought https://github.com/dlang/druntime/pull/1707 was in 
stable and slated for this point release.


I see at the bottom of: 
https://github.com/dlang/druntime/pull/1708

"@klickverbot klickverbot deleted the stable branch 18 days ago"

Also, https://github.com/dlang/druntime/pull/1715/ should be 
included IMO in addition to PR 1707.


Re: Many documentation examples can now be run online

2016-12-23 Thread safety0ff via Digitalmars-d-announce

On Saturday, 24 December 2016 at 06:08:49 UTC, Saurabh Das wrote:


Feedback:

1. It will be aesthetically better if the edit/run buttons are 
inside the code box, say just inside the right top corner.


I agree the button placement should be improved, I think they 
should be immediately to the right of "Examples:"


e.g. "Examples: [Edit][Run]"

Which makes it more clear that the examples can be run & edited.


Re: Optimization problem: bulk Boolean operations on vectors

2016-12-23 Thread safety0ff via Digitalmars-d

On Friday, 23 December 2016 at 22:11:31 UTC, Walter Bright wrote:


For this D code:

enum SIZE = 1;

void foo(int* a, int* b) {
    int* atop = a + 1000;
    ptrdiff_t offset = b - a;
    for (; a < atop; ++a)
        *a &= *(a + offset);
}


Is subtraction of pointers which do not belong to the same array 
defined behavior in D?


Re: Optimization problem: bulk Boolean operations on vectors

2016-12-23 Thread safety0ff via Digitalmars-d
On Friday, 23 December 2016 at 16:15:44 UTC, Andrei Alexandrescu 
wrote:

An interesting problem to look at:



The foreach macro (src/tk/vec.h#L62) looks like low hanging fruit 
for optimization as well.


std.experimental.allocator.SharedFreeList random autotester deadlock

2016-12-22 Thread safety0ff via Digitalmars-d

Looks like classic ABA to me.*
See my comment here: 
https://issues.dlang.org/show_bug.cgi?id=16352#c6


I'm posting here because it should be addressed sooner rather 
than later due to pull requests going red randomly and wasting 
people's time.


*Disclaimer, it's very late.


Re: Red Hat's issues in considering the D language

2016-12-21 Thread safety0ff via Digitalmars-d

On Thursday, 22 December 2016 at 02:32:30 UTC, Jerry wrote:


Yup looks like that was the cause. Removed some of the 
functions that did a "foreach()" over some large tuples. Down 
to 26 seconds with that removed.


Also: https://issues.dlang.org/show_bug.cgi?id=2396


Re: Red Hat's issues in considering the D language

2016-12-21 Thread safety0ff via Digitalmars-d
On Thursday, 22 December 2016 at 01:30:44 UTC, Andrei 
Alexandrescu wrote:


Must be a pathological case we should fix anyway. -- Andrei


Likely related bug has been open 5 years minus 1 day: 
https://issues.dlang.org/show_bug.cgi?id=7157


Re: A betterC modular standard library?

2016-12-19 Thread safety0ff via Digitalmars-d
On Sunday, 18 December 2016 at 18:02:58 UTC, Ilya Yaroshenko 
wrote:


Thank you for the answer (it is hard to understand me because 
English and other reasons),


Ilya


It was difficult to understand your vision until this post; now 
I think I grasp it.


Let me try to summarize what I've understood:

D, as it stands, is not suitable for writing low level libraries 
or for large scale software development, because compiled code 
depends on the specific compiler used.


Examples:
If you have two software teams, and team A's software depends on 
compiler X (e.g. it requires a newer feature or a bug/regression 
fix) while team B's software depends on compiler Y to meet 
performance requirements, they get stuck.


Also, if you want to create a low level library that can be 
easily distributed and linked from other languages (e.g. GLAS), 
extern(C) is the only viable option, but that can still lock in 
the D compiler used if you depend on phobos/druntime.



So the proposal is to make binary compatibility possible in the 
near future by implementing "betterC" which provides a bare-bones 
language and removes the greatest sources of incompatibilities.


Once this is done a community can form around it and create 
completely modular libraries. These can be used by all D and 
non-D users alike without compatibility problems.



Since this is all predicated on "betterC", which isn't 
implemented yet, I think it is imperative to create a full 
specification.


I look forward to seeing where this initiative goes.

P.S.: I think Ilya writes "evaluates" where he means "evolves"


Re: Making AssertError a singleton

2016-12-12 Thread safety0ff via Digitalmars-d
On Monday, 12 December 2016 at 15:51:07 UTC, Andrei Alexandrescu 
wrote:


But of course there are many situations out there.


Wouldn't it break chained assertion errors?



Re: Compiler performance with my ridiculous Binderoo code

2016-12-11 Thread safety0ff via Digitalmars-d

On Sunday, 11 December 2016 at 19:00:23 UTC, Stefan Koch wrote:


Just use this little program to simulate the process.


That's not really useful for understanding and making progress on 
the issue.


I had a patch with improved hash functions which I stashed away 
since it seemed the mangle approach would be the way forward.
I also have no test code to benchmark it (i.e. the degenerate 
case I was asking for.)


Re: Strange memory corruption / codegen bug?

2016-12-11 Thread safety0ff via Digitalmars-d-learn

On Sunday, 11 December 2016 at 11:58:39 UTC, ag0aep6g wrote:


Try putting an `assert(childCrossPoint !is otherCrossPoint);` 
before the assignment. If it fails, the variables refer to the 
same node. That would explain how otherCrossPoint.left gets set.


Furthermore, I think he is calling breed on a Tree with itself.
i.e. assert(other !is this) would be a more reliable test since 
it won't be subject to randomness.




Re: Compiler performance with my ridiculous Binderoo code

2016-12-11 Thread safety0ff via Digitalmars-d

On Sunday, 11 December 2016 at 17:20:24 UTC, Stefan Koch wrote:


That means you have to compute the mangled name which is crazy 
expensive.
And you can't cache the parent part of the mangle because it is 
all freshly generated by the template.


How often would the mangle be needed regardless later on in 
compilation?

I don't know too much about dmd internals.
It seemed that Martin Nowak had a viable proof of concept, but 
the bugzilla discussion is quite terse.




It seems like I fail to express the problem properly so let me 
try again.


AliasSeqs that are appended to look like this
: AliasSeq!(AliasSeq!(AliasSeq!(...)))
An aliasSeq that is "appended to" n times will produce (n^2) 
with ((n-1)^2) sub instances in them all the way till n is 0.


When you want to compare their parameter types you need to go 
through all of them and recursively call findTemplateInstance.


I don't see the n^2 instances, I'm likely missing an 
implementation detail.

However, I understand the quadratic nature of comparing:
AliasSeq!(AliasSeq!(AliasSeq!(...)))
to:
AliasSeq!(AliasSeq!(...))

I also don't see why you'd need to do this comparison often 
(assuming good hash functions are used.)
The comment "Hash collisions will happen!" seems to overestimate 
the current implementation IMO.



All in all, it would probably be best to have some degenerate 
code so that the issue can be investigated on a common footing.


Re: Compiler performance with my ridiculous Binderoo code

2016-12-11 Thread safety0ff via Digitalmars-d

On Sunday, 11 December 2016 at 16:26:29 UTC, Ethan Watson wrote:


At the very least, I now have an idea of which parts of the 
compiler I'm taxing and can attempt to write around that. But 
I'm also tempted to go in and optimise those parts of the 
compiler.


Have a look at this issue: 
https://issues.dlang.org/show_bug.cgi?id=16513


In a nutshell:
Dmd puts template instances in an AA using a terrible hash 
function.
When looking for a match it does an expensive comparison for each 
collision.


The hash function is easily fixed but it does not obviate the 
expensive comparison, so it trades hashing time for comparison 
time. Therefore the performance impact of fixing it is unclear.
Martin Nowak is suggesting that using the mangled name is the way 
forward.


Re: x86 instruction set reference

2016-11-29 Thread safety0ff via Digitalmars-d

On Tuesday, 29 November 2016 at 22:20:06 UTC, Walter Bright wrote:


And I do have a local copy of it. But to just see the hex code 
for an instruction, the clickable reference is much handier 
than navigating 3600 pages.


Other links in the same vein:
http://ref.x86asm.net/coder64.html
https://defuse.ca/online-x86-assembler.htm


Re: Third attempt for SUM

2016-11-14 Thread safety0ff via Digitalmars-d
On Sunday, 13 November 2016 at 16:56:30 UTC, Ilya Yaroshenko 
wrote:


BTW, I have implemented sumOfLogs [1], it is more precise than 
everything else.


Thanks, I was going to use 32.64 fixed point for my program but 
now I think I will get better precision modifying that to use a 
32 bit exponent field with a 1.64 fixed point field (to represent 
a [1,2) normalized number.)


The [0.5,1) normalization due to frexp was confusing at first 
glance.
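
For reference, a tiny example of the frexp normalization 
mentioned above (std.math):

import std.math : frexp;

void main()
{
    int exp;
    real m = frexp(6.0L, exp);
    assert(m == 0.75L && exp == 3); // 6.0 == 0.75 * 2^^3, mantissa in [0.5, 1)
}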


Re: Third attempt for SUM

2016-11-13 Thread safety0ff via Digitalmars-d
On Saturday, 12 November 2016 at 15:37:29 UTC, Ilya Yaroshenko 
wrote:

Hi all,

Advanced summation algorithms [3] from Mir project [1] are 
ready to be merged to Phobos.


Hi,
Do you have any thoughts as to when Kahan should be used over KBN?

I was testing summation for a program (summing logarithms of 
primes as doubles,) and KBN seemed to be slightly outperforming 
Kahan.


Re: https://issues.dlang.org/show_bug.cgi?id=2504: reserve for associative arrays

2016-11-06 Thread safety0ff via Digitalmars-d

On Sunday, 6 November 2016 at 03:28:20 UTC, Jon Degenhardt wrote:
On Sunday, 6 November 2016 at 02:12:12 UTC, Alexandru 
Caciulescu wrote:


I see this topic started a clash of opinions regarding the 
future of AAs. After Andrei suggested a free-list 
implementation I had a good idea of how to proceed but now I 
am not so sure since this discussion isn't converging to a 
single idea/implementation.



[Snip]

I think this suggestion is consistent with Steve 
Schveighoffer's suggestion earlier in the thread.


--Jon


I agree with what Jon Degenhardt said. It also seems to be in 
agreement with what Shachar Shemesh and Steven Schveighoffer 
wrote regarding the implementation of a reserve function for the 
built in AAs.


Re: https://issues.dlang.org/show_bug.cgi?id=2504: reserve for associative arrays

2016-11-03 Thread safety0ff via Digitalmars-d
On Thursday, 3 November 2016 at 13:19:17 UTC, Steven 
Schveighoffer wrote:


So technically, the freelist is still needed.


In case I wasn't clear in my previous post:
We can't use a freelist because it breaks safety.


Re: https://issues.dlang.org/show_bug.cgi?id=2504: reserve for associative arrays

2016-11-02 Thread safety0ff via Digitalmars-d
On Wednesday, 2 November 2016 at 03:36:42 UTC, Andrei 
Alexandrescu wrote:


Last time I looked our associative arrays were arrays of 
singly-linked lists.


F.Y.I. It now appears to use quadratic probing since druntime PR 
#1229.




Each hashtable would have its own freelist, or alternatively 
all hashtables of the same types share the same freelist.


How do you return memory to the freelist when the GC is expected 
to manage the memory of the entries?


We no longer GC.free since your PR (#1143.)


Without belaboring the point, I think we'd be better off with 
good library AA offerings as well.


Re: Reducing the cost of autodecoding

2016-10-25 Thread safety0ff via Digitalmars-d

On Tuesday, 25 October 2016 at 21:46:30 UTC, safety0ff wrote:


P.S. I am aware that this pessimises popFront for code which 
only counts codepoints without inspecting them.


Unfortunately it also changes the API of popFront to throw on 
invalid characters.


So the example would need to be reworked.


Re: Reducing the cost of autodecoding

2016-10-25 Thread safety0ff via Digitalmars-d
On Wednesday, 12 October 2016 at 13:53:03 UTC, Andrei 
Alexandrescu wrote:


Now it's time to look at the end-to-end cost of autodecoding.


Some food for thought:

- front necessarily needs to compute the number of bytes to 
advance.
- We can't change the API to share data between front and 
popFront, however we can create a situation where a pure function 
gets duplicate calls removed by the compiler.


Since we require that the ascii test gets inlined into the caller 
of front/popFront to improve ascii performance, we have a 
situation similar to this:


import std.typecons : Tuple;

alias Result = Tuple!(dchar, "codepoint", int, "advance");

auto decode(const char[] str) pure
{ pragma(inline, true);
  if (str[0] < 0x80) return Result(str[0], 1);
  else return decodeNonAscii(str);
}

dchar front(const char[] str) pure
{ pragma(inline, true);
  return str.decode.codepoint;
}

void popFront(ref const(char)[] str)
{ pragma(inline, true);
  str = str[str.decode.advance .. $];
}

When used in front/popFront pairs, the duplicated decode calls 
get merged and we don't do any duplicate work (unlike the current 
situation.)


Unfortunately, it's not possible to achieve the best code 
generation due to missed optimizations by the compilers (I 
haven't tried GDC.)
I've reported a highly reduced case to LDC github issue #1851 / 
LDC subforum.


Once we have this, all that remains to optimize is 
decodeNonAscii, perhaps using the DFA method linked by ketmar.


P.S. I am aware that this pessimises popFront for code which only 
counts codepoints without inspecting them.


Re: [OT] fastest fibbonacci

2016-10-23 Thread safety0ff via Digitalmars-d

On Sunday, 23 October 2016 at 13:04:30 UTC, Stefan Koch wrote:


created a version of Fibonacci which I deem to be faster than 
the other ones floating around.


Rosettacode is a good place to check for "floating around" 
implementations of common practice exercises e.g.:


http://rosettacode.org/wiki/Fibonacci_sequence#Matrix_Exponentiation_Version
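
For comparison, here is a minimal O(log n) sketch of my own (fast 
doubling, equivalent in spirit to the matrix exponentiation entry 
linked above); it overflows ulong past F(93):

ulong fib(ulong n)
{
    // F(2k)   = F(k) * (2*F(k+1) - F(k))
    // F(2k+1) = F(k)^2 + F(k+1)^2
    ulong a = 0, b = 1; // F(0), F(1)
    for (int i = 63; i >= 0; --i)
    {
        immutable c = a * (2 * b - a); // F(2k)
        immutable d = a * a + b * b;   // F(2k+1)
        if ((n >> i) & 1) { a = d; b = c + d; }
        else              { a = c; b = d; }
    }
    return a;
}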


Re: Reducing the cost of autodecoding

2016-10-16 Thread safety0ff via Digitalmars-d
On Saturday, 15 October 2016 at 19:00:12 UTC, Patrick Schluter 
wrote:


Just a question. Do encoding errors not have to be detected or 
is validity of the string guaranteed?


AFAIK they have to be detected, otherwise it would be a 
regression.





Re: Reducing the cost of autodecoding

2016-10-15 Thread safety0ff via Digitalmars-d

On Friday, 14 October 2016 at 20:47:39 UTC, Stefan Koch wrote:

On Thursday, 13 October 2016 at 21:49:22 UTC, safety0ff wrote:

Bad benchmark! Bad! -- Andrei


Also, I suspect a benchmark with a larger loop body might not 
benefit as significantly from branch hints as this one.


I disagree, in longer loops code compactness is as important as 
in small ones.


You must have misunderstood:

My thought was simply that with a larger loop body, LLVM might 
not make such dramatic rearrangement of the basic blocks.


Take your straw man elsewhere :-/



This is more correct: (Though for some reason it does not pass 
the unittests)


You're only validating the first byte; the current code validates 
all of them.


Re: Reducing the cost of autodecoding

2016-10-13 Thread safety0ff via Digitalmars-d
On Thursday, 13 October 2016 at 01:36:44 UTC, Andrei Alexandrescu 
wrote:


Oh ok, so it's that checksum in particular that got optimized. 
Bad benchmark! Bad! -- Andrei


Also, I suspect a benchmark with a larger loop body might not 
benefit as significantly from branch hints as this one.


Re: Reducing the cost of autodecoding

2016-10-13 Thread safety0ff via Digitalmars-d

On Thursday, 13 October 2016 at 14:51:50 UTC, Kagamin wrote:

On Wednesday, 12 October 2016 at 20:24:54 UTC, safety0ff wrote:

Code: http://pastebin.com/CFCpUftW


Line 25 doesn't look trusted: reads past the end of an empty 
string.


Length is checked in the loop that calls this function.

In phobos, length is only checked with an assertion.


Re: Reducing the cost of autodecoding

2016-10-12 Thread safety0ff via Digitalmars-d

On Thursday, 13 October 2016 at 00:32:36 UTC, safety0ff wrote:


It made little difference: LDC compiled into AVX2 vectorized 
addition (vpmovzxbq & vpaddq.)


Measurements without -mcpu=native:
overhead 0.336s
bytes 0.610s
without branch hints 0.852s
code pasted 0.766s


Re: Reducing the cost of autodecoding

2016-10-12 Thread safety0ff via Digitalmars-d
On Wednesday, 12 October 2016 at 23:47:45 UTC, Andrei 
Alexandrescu wrote:


Wait, so going through the bytes made almost no difference? Or 
did you subtract the overhead already?




It made little difference: LDC compiled into AVX2 vectorized 
addition (vpmovzxbq & vpaddq.)


Re: Reducing the cost of autodecoding

2016-10-12 Thread safety0ff via Digitalmars-d

On Wednesday, 12 October 2016 at 20:07:19 UTC, Stefan Koch wrote:


where did you apply the branch hints ?


Code: http://pastebin.com/CFCpUftW


Re: Reducing the cost of autodecoding

2016-10-12 Thread safety0ff via Digitalmars-d
On Wednesday, 12 October 2016 at 16:24:19 UTC, Andrei 
Alexandrescu wrote:


Remember the ASCII part is the bothersome one. There's only two 
comparisons, all with 100% predictability. We should be able to 
arrange matters so the loss is negligible. -- Andrei


My measurements:
ldc -O3  -boundscheck=off -release -mcpu=native -enable-inlining
ldc version 1.0.0

overhead 0.350s
bytes 0.385s
current autodecoding 0.915s (with new LUT popFront)
copy-pasting std.utf decoding functions into current file 0.840s
adding ASCII branch hints (llvm_expect) 0.770s

With the branch hints LDC moves the non-Ascii code outside of the 
loop and creates a really tight loop body.


Re: Can you shrink it further?

2016-10-12 Thread safety0ff via Digitalmars-d

On Wednesday, 12 October 2016 at 16:48:36 UTC, safety0ff wrote:

[Snip]


Didn't see the LUT implementation, nvm!



Re: Can you shrink it further?

2016-10-12 Thread safety0ff via Digitalmars-d

My current favorites:

void popFront(ref char[] s) @trusted pure nothrow {
  immutable byte c = s[0];
  if (c >= -2) {
    s = s.ptr[1 .. s.length];
  } else {
    import core.bitop;
    size_t i = 7u - bsr(~c);
    import std.algorithm;
    s = s.ptr[min(i, s.length) .. s.length];
  }
}

I also experimented with explicit speculation:

void popFront(ref char[] s) @trusted pure nothrow {
  immutable byte c = s[0];
  s = s.ptr[1 .. s.length];
  if (c < -2) {
    import core.bitop;
    size_t i = 6u - bsr(~c);
    import std.algorithm;
    s = s.ptr[min(i, s.length) .. s.length];
  }
}


LDC and GDC both compile these to 23 instructions.
DMD does worse than with my other code.

You can influence GDC's block layout with __builtin_expect.

I notice that many other snippets posted use uint instead of 
size_t in the multi-byte branch. This generates extra 
instructions for me.


Re: Can you shrink it further?

2016-10-09 Thread safety0ff via Digitalmars-d
On Sunday, 9 October 2016 at 22:11:50 UTC, Andrei Alexandrescu 
wrote:

I suspect there are cleverer things that can be done.


Less clever seemed to work for me:

void popFront1(ref char[] s) @trusted pure nothrow {
  immutable c = s[0];
  if (c < 0x80 || c >= 0xFE) {
    s = s.ptr[1 .. s.length];
  } else {
    import core.bitop;
    size_t i = 7u - bsr(~c); // N.B. changed uint to size_t
    import std.algorithm;
    s = s.ptr[min(i, s.length) .. s.length];
  }
}


Re: std.math.isPowerOf2

2016-10-01 Thread safety0ff via Digitalmars-d

On Sunday, 2 October 2016 at 03:05:37 UTC, Manu wrote:

Unsigned case is:
  return (x & -x) > (x - 1);

Wouldn't this be better:
  return (sz & (sz-1)) == 0;



https://forum.dlang.org/post/nfkaag$2d6u$1...@digitalmars.com




Re: CompileTime performance measurement

2016-09-08 Thread safety0ff via Digitalmars-d

On Thursday, 8 September 2016 at 17:03:30 UTC, Stefan Koch wrote:


I thought of the same thing a while back.
However I have not had the time to decipher the gprof data 
format yet.
Is there another profile format for which decent visualization 
tools exist?


I was just using that as an example of what we might want to 
output as text.

e.g. https://sourceware.org/binutils/docs/gprof/Flat-Profile.html

I wasn't saying that we should mimic gmon.out file format, I 
don't think that buys us much.


Re: CompileTime performance measurement

2016-09-08 Thread safety0ff via Digitalmars-d

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:


... I have now implemented another pseudo function called 
__ctfeTicksMs.

[Snip]

This does allow meaningful compiletime performance tests to be 
written.

spanning both CTFE and template-instantiation timings.

Please tell me what you think.


I think automated ctfe profiling would be much better and the 
byte-code interpreter seems like a great platform to build this 
onto.


For example, using a command line switch to enable profiling 
which outputs something similar to gprof's flat profile.


Skimming the byte-code work it seems like it is too early to add 
this yet.


My thoughts on __ctfeTicksMs:
- it isn't very meaningful for users without intimate compiler 
knowledge

- it requires writing boilerplate code over and over for profiling
- doesn't seem like it would work well for functions that get 
executed multiple times


While it might be a useful compiler developer hack, I do not 
think it should become a user primitive.


Re: SIGUSR2 from GC interrupts application system calls on Linux

2016-05-26 Thread safety0ff via Digitalmars-d

On Thursday, 26 May 2016 at 18:44:22 UTC, ikod wrote:

Hello,

On linux, in the code below, receive() returns -1 with 
errno=EINTR if syscall is interrupted by GC (so you can see 
several "insterrupted") when GC enabled, and prints nothing 
(this is desired and expected behavior) when GC disabled.


Is there any recommended workaround for this problem? Is this a 
bug?


Looks like recv is non-restartable and it should be called in a 
loop.
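
A minimal sketch of such a retry loop, assuming a plain POSIX 
socket descriptor (the wrapper name is mine):

import core.stdc.errno : EINTR, errno;
import core.sys.posix.sys.socket : recv;

ptrdiff_t recvRetry(int fd, void[] buf, int flags = 0)
{
    ptrdiff_t n;
    do
        n = recv(fd, buf.ptr, buf.length, flags);
    while (n == -1 && errno == EINTR); // restart when interrupted by a signal
    return n;
}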




Looks like the reason for the problem is the call to sigaction(2) 
without SA_RESTART for SIGUSR2 (used by the GC to restart 
suspended threads) in core.thread.


This isn't the cause, manually adding SA_RESTART using sigaction 
made no difference:


import core.sys.posix.signal;
sigaction_t tmp;
sigaction(SIGUSR2, null, &tmp);
tmp.sa_flags |= SA_RESTART;
sigaction(SIGUSR2, &tmp, null);

This is because SIGUSR1 is the signal that actually interrupts 
the system call; by the time SIGUSR2 is received the syscall has 
already been interrupted, so the flag makes no difference.


Re: Battle-plan for CTFE

2016-05-16 Thread safety0ff via Digitalmars-d-announce

On Monday, 16 May 2016 at 12:13:14 UTC, Martin Nowak wrote:


Last time people forced me to spend several hours on 
reimplementing and debugging a BitArray implementation


Ouch.
src/tk/vec.(h|c) already contained an implementation.


Re: std.cpuid ARM Issue

2016-05-16 Thread safety0ff via Digitalmars-d
I don't know ARM specifics, but perhaps hwloc is useful to you: 
https://www.open-mpi.org/projects/hwloc/


Re: Threads

2016-05-02 Thread safety0ff via Digitalmars-d

On Monday, 2 May 2016 at 16:39:13 UTC, vino wrote:

Hi All,

 I am a newbie for D programming and need some help, I am 
trying to write a program using the example given in the book 
The "D Programming Language" written by "Andrei Alexandrescu"


Make sure to check the errata on his site: 
http://erdani.com/index.php?cID=109



Page: 406 Print: 1
Current text:
foreach (immutable(ubyte)[] buffer; stdin.byChunk(bufferSize)) send(tid, buffer);

Correction:
foreach (buffer; stdin.byChunk(bufferSize)) send(tid, buffer.idup)


Re: Will the GC scan this pointer?

2016-04-24 Thread safety0ff via Digitalmars-d-learn

On Sunday, 24 April 2016 at 11:03:11 UTC, Lass Safin wrote:


So the question is: Will the GC scan ptr? As you can see, it is 
a write-only pointer, so reading from it will cause undefined 
behavior (such as return data which looks like a pointer to 
data..), and can potentially be reallly slow.


The GC will see that ptr doesn't point to memory managed by the 
GC and move on.


Do I have to mark it with NO_SCAN each time I call 
glMapNamedBufferRange?


No, calling setAttr on memory not managed by the GC will do 
nothing.


Re: Checking if an Integer is an Exact Binary Power

2016-04-23 Thread safety0ff via Digitalmars-d

On Saturday, 23 April 2016 at 21:04:52 UTC, Nordlöw wrote:

On Saturday, 23 April 2016 at 20:42:25 UTC, Lass Safin wrote:

CPUID: https://en.wikipedia.org/wiki/CPUID.
You can check for the presence of a lot of instructions with 
this instruction.

However this will only work on x86 and only run-time.


Could you give a complete code example in D, please, or point 
out a suitable place in druntime/phobos?


https://dlang.org/phobos/core_cpuid.html#.hasPopcnt

However, it is usually better to use the methods stated by Andrei 
/ Dmitry* rather than population count for checking powers of two.


* With the code given by Dmitry you have to check for zero, 
otherwise it will return true for 0. e.g.  x && !(x & (x - 1))
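
Both forms, side by side, as a quick sanity-check sketch:

bool isPow2Bit(uint x) { return x && !(x & (x - 1)); }          // handles 0 correctly
bool isPow2Pop(uint x) { import core.bitop : popcnt; return popcnt(x) == 1; }

unittest
{
    foreach (x; [0u, 1u, 2u, 3u, 4u, 6u, 8u, 1u << 20])
        assert(isPow2Bit(x) == isPow2Pop(x));
}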


Re: Kinds of containers

2015-10-22 Thread safety0ff via Digitalmars-d

On Thursday, 22 October 2015 at 07:13:58 UTC, KlausO wrote:


Intrusive data structures have their strengths especially when 
nodes are part of several containers.
I implemented some of the intrusive containers back in D1 times.
See

http://dsource.org/projects/nova/browser/trunk/nova/ds/intrusive

KlausO


I also like having an intrusive container library in my toolbox: 
they don't limit membership to one container and they don't 
"bake" memory management into the container type.


http://www.boost.org/doc/libs/1_59_0/doc/html/intrusive.html


Re: Kinds of containers

2015-10-22 Thread safety0ff via Digitalmars-d

On Thursday, 22 October 2015 at 14:14:09 UTC, safety0ff wrote:


I also like having an intrusive container library in my 
toolbox: they don't limit membership to one container and they 
don't "bake" memory management into the container type.




Also wanted to mention that this allows you to store variable 
sized data directly in the container without being forced to use 
a fixed size structure with a pointer to the variable portion. 
Which is useful for improving cache locality and reducing memory 
usage.


Re: DConf 2015: Individual talk links from the livestream

2015-06-02 Thread safety0ff via Digitalmars-d-announce

Thanks!


Re: Possible to write a classic fizzbuzz example using a UFCS chain?

2015-04-30 Thread safety0ff via Digitalmars-d-learn

Just for fun:

// map, join, text, iota, writeln, tuple
import std.algorithm, std.array, std.conv, std.range, std.stdio, std.typecons;


void main()
{
  iota(1, 100)
  .map!(a => tuple(a, a % 3 == 0 ? 0 : 4, a % 5 == 0 ? 8 : 4))
  .map!(a => a[1] == a[2] ? a[0].text : "fizzbuzz"[a[1] .. a[2]])
  .join(", ")
  .writeln;
}


Re: A more general bsr/bsf implementation

2015-04-13 Thread safety0ff via Digitalmars-d

On Sunday, 12 April 2015 at 15:21:26 UTC, Johan Engelen wrote:


Sorry for not being clear.


I should have thought about it more before answering. :)

I understand why the current bsr behaves like it does, but what 
I meant is whether that is the desired behavior of bsr:

bsr( byte(-1) ) == 31  (32-bit size_t)
bsr( byte(-1) ) == 63  (64-bit size_t)
instead of
bsr( byte(-1) ) == 7


I think 7 is the desired result.

I don't know whether there are uses for bsr with negative signed 
arguments since it returns the MSB position for all values.


Re: A more general bsr/bsf implementation

2015-04-12 Thread safety0ff via Digitalmars-d

On Sunday, 12 April 2015 at 11:53:41 UTC, Johan Engelen wrote

My questions:
1) Is it OK to put a more general bsf/bsr in druntime or in 
Phobos? (if Phobos: in which package to put it?)


IMO I want a std.integer package for such functions.
I started writing one but I have to rewrite it.

I don't know if building up such a package piecemeal would be 
accepted.


2) Is the current sign-extend up to size_t's width really 
intended behavior?


It's due to integer promotions, so it should only influence bsr 
(when it is called with a signed type.)


Re: Parallelization of a large array

2015-03-10 Thread safety0ff via Digitalmars-d-learn

On Tuesday, 10 March 2015 at 20:41:14 UTC, Dennis Ritchie wrote:

Hi.
How to parallelize a large array to check for the presence of 
an element matching the value with the data?


Here's a simple method (warning: has pitfalls):

import std.stdio;
import std.parallelism;

void main()
{
    int[] a = new int[100];

    foreach (i, ref elem; a)
        elem = cast(int)i;

    bool found;
    foreach (elem; a.parallel)
        if (elem == 895639)
            found = true;

    if (found)
        writeln("Yes");
    else
        writeln("No");
}


Re: Purity not enforced for default arguments?

2015-03-10 Thread safety0ff via Digitalmars-d-learn

On Tuesday, 10 March 2015 at 21:56:39 UTC, Xinok wrote:


I'm inclined to believe this is a bug.


https://issues.dlang.org/show_bug.cgi?id=11048


Re: Strange behavior of the function find() and remove()

2015-03-08 Thread safety0ff via Digitalmars-d-learn

On Sunday, 8 March 2015 at 21:34:25 UTC, Dennis Ritchie wrote:

This is normal behavior?



Yes it is normal, there are two potential points of confusion:
- remove mutates the input range and returns a shortened slice to 
the range which excludes the removed element.

- remove takes an index as its second argument, not an element.

For more information see: 
https://issues.dlang.org/show_bug.cgi?id=10959


Re: DDMD just went green on all platforms for the first time

2015-02-21 Thread safety0ff via Digitalmars-d
On Saturday, 21 February 2015 at 14:02:41 UTC, Daniel Murphy 
wrote:

https://auto-tester.puremagic.com/?projectid=10

This is a pretty big milestone for the project.  For the first 
time, an unpatched dmd can build ddmd, and that ddmd can build 
druntime and phobos and pass all the test suites.


Congrats!

Does version=GC need any additional work?


Re: why GC not work?

2015-02-08 Thread safety0ff via Digitalmars-d-learn

On Sunday, 8 February 2015 at 16:23:44 UTC, FG wrote:



2. auto buf = new byte[](1024*1024*100);
  now the gc can't free this buf.
  can i free it by manual?


Yes. import core.memory; GC.free(buf.ptr); // and don't use buf 
afterwards


That won't work, see:
http://forum.dlang.org/thread/uankmwjejsitmlmrb...@forum.dlang.org


Re: why GC not work?

2015-02-08 Thread safety0ff via Digitalmars-d-learn

On Sunday, 8 February 2015 at 18:43:18 UTC, FG wrote:

On 2015-02-08 at 19:15, safety0ff wrote:

That won't work, see:
http://forum.dlang.org/thread/uankmwjejsitmlmrb...@forum.dlang.org


Perhaps it was fixed in DMD 2.066.1, because this works for me 
just fine:




Here's the link I couldn't find earlier:
https://issues.dlang.org/show_bug.cgi?id=14134


Re: why GC not work?

2015-02-06 Thread Safety0ff via Digitalmars-d-learn

False pointers, current GC is not precise.


Re: More recent work on GC

2015-01-15 Thread safety0ff via Digitalmars-d
On Wednesday, 14 January 2015 at 06:15:09 UTC, Andrei 
Alexandrescu wrote:

On my reading list:

http://research.microsoft.com/pubs/230708/conservative-gc-oopsla-2014.pdf

http://users.cecs.anu.edu.au/~steveb/downloads/pdf/immix-pldi-2008.pdf 
(this has been mentioned before)



Andrei


These are probably worth re-mentioning since the 2014 paper 
builds upon them:


http://research.microsoft.com/pubs/202163/rcix-oopsla-2013.pdf

http://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-2012.pdf

They've been mentioned here before.


Re: What's missing to make D2 feature complete?

2015-01-08 Thread safety0ff via Digitalmars-d

On Thursday, 25 December 2014 at 09:46:19 UTC, Martin Nowak wrote:

On Saturday, 20 December 2014 at 19:22:05 UTC, safety0ff wrote:
On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak 
wrote:

Just wondering what the general sentiment is.



Multiple alias this (DIP66 / #6083.)


It's already in :), at least the DIP just got approved.
Would it really have been a dealbreaker?


Late reply:

No, not a dealbreaker (at least for any of the code I've 
written,) I was focusing on objectively missing features rather 
than sentiments :)


That being said, multiple alias this should considerably expand 
the implicit conversion and composition options in D.


Re: noinline, forceinline, builtin_expect

2015-01-02 Thread safety0ff via Digitalmars-d

On Friday, 2 January 2015 at 14:34:39 UTC, Martin Nowak wrote:
Would be nice to have @noinline, @forceinline and 
__builtin_expect.


It's rare to get measurable gains from __builtin_expect, and 
since there is no macro preprocessor in D, it's likely less 
verbose to just put the common case as the first branch since 
most compilers use that for static branch prediction (with the 
same effect on resulting machine code as __builtin_expect.)


Just my 2c.


Re: What's missing to make D2 feature complete?

2014-12-20 Thread safety0ff via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.



Multiple alias this (DIP66 / #6083.)


Re: Allocating aligned memory blocks?

2014-12-11 Thread safety0ff via Digitalmars-d-learn
On Friday, 12 December 2014 at 06:17:56 UTC, H. S. Teoh via 
Digitalmars-d-learn wrote:


Is there a way to allocate GC
memory blocks in D that are guaranteed to fall on OS page 
boundaries?


I don't know about guarantees, I think that in practice, if your 
OS page size is 4096, any GC allocation of 4096 or greater will 
be page aligned.


should I just forget the GC and just use posix_memalign() 
manually?


I think it may be possible to do what you want with mmap/munmap 
alone (selectively map parts of the file to memory.)


Re: GSOC Summer 2015 - Second call for Proposals

2014-11-22 Thread safety0ff via Digitalmars-d
On Wednesday, 5 November 2014 at 03:54:23 UTC, Craig Dillabaugh 
wrote:
This is my second Call for Proposals for the 2015 Google Summer 
of Code. Anyone interested in mentoring, or who has good idea's 
for a project for 2015 please post here.


I think it'd be awesome to have something like 
boost::intrusive[1] in D.
For a GSOC project, the scope could be reduced to a few of the 
containers from the boost version (focus should be to lay 
groundwork for future additions.)


The advantage of intrusive containers which I believe would have 
the most mass appeal is that memory management is external to the 
container instead of baked in.


Further more, intrusive containers can be combined and extended 
in interesting ways (I've found this extremely useful,) which are 
impossible with non-intrusive containers. I've chosen to use C++ 
over D for some of my programs due to this library alone.


[1] http://www.boost.org/doc/libs/1_57_0/doc/html/intrusive.html


Re: naked popcnt function

2014-11-22 Thread safety0ff via Digitalmars-d-learn

On Saturday, 22 November 2014 at 18:30:06 UTC, Ad wrote:
Hello, I would like to write a popcnt function. This works 
fine


ulong popcnt(ulong x)
{
asm { mov RAX, x ; popcnt RAX, RAX ; }
}

However, if I add the naked keyword ( which should improve 
performance? ) it doesn't work anymore and I can't figure out 
what change I am supposed to make ( aside from x[RBP] instead 
of x )

This function is going to be *heavily* used.

Thanks for any help.


Last time I used naked asm, I simply used the calling convention 
to figure out the location of the parameter (e.g. RCX on Win64, 
RDI on Linux 64-bit, IIRC.)

N.B. on LDC & GDC there is an intrinsic for popcnt.
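
(For what it's worth, recent druntime versions also expose a 
portable popcnt in core.bitop, which the compilers can lower to 
the hardware instruction; a quick sketch:)

import core.bitop : popcnt;

void main()
{
    assert(popcnt(0xF0F0_F0F0u) == 16); // 16 bits set
}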


Re: Linux 64bit Calling Convention

2014-11-14 Thread safety0ff via Digitalmars-d

On Saturday, 25 October 2014 at 16:14:30 UTC, Trass3r wrote:

Yes it's clearly stated on the ABI page (and sane).
Nobody ever noticed cause it's hard to spot this in assembly.


I've hit it a few times, I wasn't sure if it was me or the 
compiler that was mistaken so I didn't create a report, I just 
swapped my registers and moved on.



On Friday, 14 November 2014 at 19:42:54 UTC, David Nadlinger 
wrote:


However, as this is a breaking change (think naked inline asm)


I'm probably not the only user who has adjusted their registers 
and moved on.
As usual, this is the biggest hindrance to fixing the bug: 
silently swapping the registers between versions is unacceptable.


Re: alias foo = __traits(...)

2014-11-05 Thread safety0ff via Digitalmars-d
On Thursday, 6 November 2014 at 01:31:40 UTC, Shammah Chancellor 
wrote:

Is this fixed now?


https://issues.dlang.org/show_bug.cgi?id=7804


Re: new(malloc) locks everything in multithreading

2014-10-23 Thread safety0ff via Digitalmars-d-learn

On Friday, 24 October 2014 at 02:51:20 UTC, tcak wrote:


I don't want to blame dmd directly because as far as I see from 
the search I did with __lll_lock_wait_private, some C++ 
programs are having same problem with malloc operation as well. 
But still, can this be because of compiler?


Looks like bug #11981 [1], which should be fixed in the latest 
versions of the compiler. Which version are you using?


[1] https://issues.dlang.org/show_bug.cgi?id=11981


Re: Global const variables

2014-10-21 Thread safety0ff via Digitalmars-d-learn

On Tuesday, 21 October 2014 at 08:25:07 UTC, bearophile wrote:

Minas Mina:

Aren't pure functions supposed to return the same result every 
time? If yes, it is correct to not accept it.


But how can main() not be pure? Or, how can't the 'a' array be 
immutable?


Bye,
bearophile


There can exist a mutable reference to a's underlying memory:

const int[] a;
int[] b;

static this()
{
    b = [1];
    a = b;
}
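
Continuing the sketch: the same memory stays reachable and 
writable through b, so a's contents can legitimately change at 
run time, which is why it cannot be treated as immutable:

void main()
{
    assert(a[0] == 1);
    b[0] = 42;
    assert(a[0] == 42); // mutation observed through the const view
}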


Re: How would you dive into a big codebase

2014-10-21 Thread safety0ff via Digitalmars-d-learn

On Wednesday, 22 October 2014 at 01:21:19 UTC, Freddy wrote:

Is there any advice/tips for reading medium/big D codebases?


Somewhat D specific: I would consider an IDE/editor like Eclipse 
with DDT that can give an outline of the data structures & 
function names in a source file to make the files easier to 
digest.


Re: On Phobos GC hunt

2014-10-18 Thread safety0ff via Digitalmars-d
On Tuesday, 14 October 2014 at 13:29:33 UTC, Dmitry Olshansky 
wrote:


Also it's universal as in any github-hosted D project, for 
example here is an output for druntime:


http://wiki.dlang.org/Stuff_in_Druntime_That_Generates_Garbage

Still todo:
 - a few bugs to fix in artifact labeling


One artefact labelling bug I noticed was that GC.removeRange and 
GC.removeRoot were placed in the Artefact column where it should 
have been GC.rangeIter and GC.rootIter.


Re: A significant performance difference

2014-09-01 Thread safety0ff via Digitalmars-d-learn
The following D code runs over 2x faster than the C++ code 
(comparing dmd with no options to g++ with no options.) It's not 
a fair comparison because it changes the order of operations.


import core.stdc.stdio;

const uint H = 9, W = 12;

const uint[3][6] g = [[7, 0, H - 3],
  [1 + (1 << H) + (1 << (2 * H)), 0, H - 1],
  [3 + (1 << H), 0, H - 2],
  [3 + (2 << H), 0, H - 2],
  [1 + (1 << H) + (2 << H), 0, H - 2],
  [1 + (1 << H) + (1 << (H - 1)), 1, H - 1]];

int main() {
    ulong p, i, k;
    ulong[uint] x, y;
    uint l;
    x[0] = 1;

    for (i = 0; i < W; ++i) {
        y = null;
        while (x.length)
            foreach (j; x.keys) {
                p = x[j];
                x.remove(j);

                for (k = 0; k < H; ++k)
                    if ((j & (1 << k)) == 0)
                        break;

                if (k == H)
                    y[j >> H] += p;
                else
                    for (l = 0; l < 6; ++l)
                        if (k >= g[l][1] && k <= g[l][2])
                            if ((j & (g[l][0] << k)) == 0)
                                x[j + (g[l][0] << k)] += p;
            }
        x = y;
    }

    printf("%lld\n", y[0]);
    return 0;
}


Re: Blog post on hidden treasure in the D standard library.

2014-08-30 Thread safety0ff via Digitalmars-d-announce
On Saturday, 30 August 2014 at 06:00:31 UTC, ketmar via 
Digitalmars-d-announce wrote:
i believe that those rules are useless and senseless now, so 
it's more like a one man crusade.




It's not a one man's crusade, it affects legibility and creates 
dissonance within the text.

It also appears unprofessional and uneducated.
I gave up on reading the article and started skimming after the 
small introductory paragraph got it wrong 7 times.


Re: Blog post on hidden treasure in the D standard library.

2014-08-30 Thread safety0ff via Digitalmars-d-announce
On Saturday, 30 August 2014 at 07:59:16 UTC, Gary Willoughby 
wrote:


Stop being such a grammar nazi.



I didn't bring it up because I felt like being pedantic, I 
brought it up as a suggestion to make it more pleasant to read.


Since you've already been labelled as a pedant, perhaps you 
should learn the difference between pedantry and Nazism.


On Saturday, 30 August 2014 at 09:38:29 UTC, Gary Willoughby 
wrote:


˙ǝuo pǝʇɐɔnpǝun ǝɥʇ ǝɹɐ noʎ ʇɐɥʇ ɟo ǝsnɐɔǝq ʇı puɐʇsɹǝpun ʇ,uɐɔ 
noʎ ɟI ¡ʇxǝʇ ǝןoɥʍ ǝɥʇ uı s,ı ǝsɐɔ-ɹǝʍoן uǝ⊥ ˙*uǝʇ* sɐɥ ǝןɔıʇɹɐ 
ǝɹıʇuǝ ǝɥʇ 'uǝʌǝs ʇou 's,ı ǝsɐɔ-ɹǝʍoן *ǝʌıɟ* sɐɥ ɥdɐɹƃɐɹɐd 
ʎɹoʇɔnpoɹʇuı ǝɥ⊥


Firstly:


I’ve been using D for a number of years and i


1


am constantly surprised by the hidden treasure i


2

find in the standard library. I guess the reason for my 
surprise is that i’ve


3


never exhaustively read the entire library documentation, i


4

only skim it for what’s needed at any given time. I’ve promised 
myself i


5


will read it thoroughly one day but until then i’ll


6

enjoy these little discoveries. This article highlights a few 
of these hidden treasures which i


7

Secondly, as I've already stated, it's not a matter of it being 
incomprehensible.



˙ǝןdɯıs 'ǝʇıɹʍ ı ƃuıɥʇʎuɐ pɐǝɹ ʇ,uop 'ʇı ǝʞıן ʇ,uop noʎ ɟI


In that case, don't complain when the opinions you've written are 
dismissed.


uoıʇɐnʇɔund ou puɐ ʎןuo ǝsɐɔ-ɹǝʍoן uı uo ʍou ɯoɹɟ ƃuıɥʇʎɹǝʌǝ 
ƃuıʇıɹʍ ʇɹɐʇs ʇɥƃıɯ ı


The second coming of e e cummings!


Re: Blog post on hidden treasure in the D standard library.

2014-08-30 Thread safety0ff via Digitalmars-d-announce

Just a correction:

On Saturday, 30 August 2014 at 10:44:20 UTC, safety0ff wrote:


Since you've already been labelled as a pedant, perhaps you 
should learn the difference between pedantry and Nazism.


I meant:
Since you've already labelled *me*

Anyways.


Re: D daemon GC?

2014-08-30 Thread safety0ff via Digitalmars-d-learn

On Saturday, 30 August 2014 at 17:09:41 UTC, JD wrote:

Hi all,

I tried to write a Linux daemon in D 2.065 (by translating one  
in C we use at work). My basic skeleton works well. But as soon 
as I start allocating memory it crashed with several 
'core.exception.InvalidMemoryOperationError's.


It works for me with 2.066, I do not have 2.065 installed at the 
moment to see if it fails on 2.065.


Re: Blog post on hidden treasure in the D standard library.

2014-08-28 Thread safety0ff via Digitalmars-d-announce
On Thursday, 28 August 2014 at 16:06:11 UTC, Gary Willoughby 
wrote:

Direct link:
http://nomad.so/2014/08/hidden-treasure-in-the-d-standard-library/


What do you have against capitalizing 'I' ?
It's annoying / distracting to read text filled with 
uncapitalised 'I's.


Re: D 2.066 new behavior

2014-08-21 Thread safety0ff via Digitalmars-d-announce

On Friday, 22 August 2014 at 01:54:55 UTC, Paul D Anderson wrote:


Is this expected behavior that has never been enforced before, 
or is it something new?


And is anyone else having the same problem?

Paul


Looks like a regression, I've filed it here: 
https://issues.dlang.org/show_bug.cgi?id=13351


Re: D 2.066 is out. Enjoy!

2014-08-19 Thread safety0ff via Digitalmars-d-announce
On Monday, 18 August 2014 at 23:18:46 UTC, Vladimir Panteleev 
wrote:

On Monday, 18 August 2014 at 23:14:45 UTC, Dicebot wrote:
I also propose to start 2.067 beta branch right now and 
declare it yet another bug-fixing release.


Isn't this what point-releases are for, though?


I agree, I think 2.066.next should be the focus considering the 
known issues of 2.066.


On Monday, 18 August 2014 at 20:43:44 UTC, Vladimir Panteleev 
wrote:


How is it decided when it's time to cut off a new release? Do 
we have two RCs and that's it?


I find it hard to believe that it is just a coincidence that a 
surprise release occurred on the same day as Java 9 and C++14 
announcements.


Re: D 2.066 is out. Enjoy!

2014-08-19 Thread safety0ff via Digitalmars-d-announce
On Wednesday, 20 August 2014 at 00:14:59 UTC, Andrew Edwards 
wrote:

On 8/20/14, 8:38 AM, safety0ff wrote:


I agree, I think 2.066.next should be the focus considering 
the known

issues of 2.066.


Fear not, point releases will address known deficiencies.



Btw, thank you for the good work you've done as release manager!


Re: Won a programming contest using D - Thank you for the tool!

2014-08-19 Thread safety0ff via Digitalmars-d

On Tuesday, 19 August 2014 at 17:39:13 UTC, Ivan Kazmenko wrote:


It looks like I want to have most of my data under something 
like GC.BlkAttr.NO_SCAN.  But I don't yet see a clean way to 
introduce something like that in the code.


GC.BlkAttr.NO_INTERIOR can also be useful for eliminating false 
pointers pointing into large structures. It only works when you 
know that there is always a pointer to the base of the object 
while it is alive.


N.B. That attribute is ignored for small allocations.


Re: How to declare a parameterized recursive data structure?

2014-08-16 Thread safety0ff via Digitalmars-d

On Saturday, 16 August 2014 at 17:55:28 UTC, Gary Willoughby
wrote:


Funnily enough i've been toying with linked lists using the 
same kind of nodes here:

https://github.com/nomad-software/etcetera/blob/master/source/etcetera/collection/linkedlist.d

Might be of use to you?


Why did you put the data between the two pointers?

I would put the pointers side-by-side in memory:

- The two pointers will be in the same cache line no matter which
type T is used.
- It should reduce the amount of padding in the Node struct.
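
In other words, something along these lines (a sketch, the type 
names are mine):

struct Node(T)
{
    Node* prev; // links adjacent: same cache line no matter what T is
    Node* next;
    T payload;
}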


Re: Appender is ... slow

2014-08-14 Thread safety0ff via Digitalmars-d-learn
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


Re: What hashing algorithm is used for the D implementation of associative arrays?

2014-08-14 Thread safety0ff via Digitalmars-d-learn

On Thursday, 14 August 2014 at 13:10:58 UTC, bearophile wrote:


D AAs used to be not vulnerable to collision attacks because 
they resolved collisions building a red-black tree for each 
bucket. Later buckets became linked lists for speed,


Slight corrections:
It was effectively a randomized BST; it used the hash value + 
comparison function to place the elements in the tree.

E.g. The AA's node comparison function might be:

   if (hash == other.hash)
      return value.opCmp(other.value);
   else if (hash < other.hash)
      return -1;
   return 1;

The hash function has a significant influence on how balanced the 
BST will be.
Insertion and removal order also have performance influence since 
rebalancing was only done when growing the AA.

It had no performance guarantees.

I believe it was removed to reduce memory consumption, see the 
Mar 19 2010 cluster of commits by Walter Bright to aaA.d.
Since GC rounds up allocations to powers of two for small 
objects, the additional pointer doubles the allocation size per 
node.


A template library based AA implementation should be able to 
handily outperform built-in AAs and provide guarantees.
Furthermore, improved memory management could be a significant 
win.


Fun fact:
The AA implementation within DMD still uses the randomized BST 
though the hash functions are very rudimentary.


Re: DMD v2.066.0-rc2

2014-08-10 Thread safety0ff via Digitalmars-d-announce

On Friday, 8 August 2014 at 12:01:43 UTC, Andrew Edwards wrote:

DMD v2.066.0-rc2 binaries are available for testing:

http://wiki.dlang.org/Beta_Testing


Probably too late but 
https://github.com/D-Programming-Language/dmd/pull/3826 is an ICE 
& wrong-code fix which requires review (green auto-tester status.)


Re: proposal: allow 'with(Foo):' in addition to 'with(Foo){..}'

2014-08-10 Thread safety0ff via Digitalmars-d

On Sunday, 10 August 2014 at 08:34:40 UTC, Era Scarecrow wrote:


Depends on how many pesky extra braces you want to avoid...

enum Flags {a,b,c,readonly,write,etc}

void func(Flags f){
  switch(f) {
with(Flags):   //or put this outside the switch...


For that specific case, put it outside the switch and drop the 
colon:

http://dpaste.dzfl.pl/f3e78f3265a7


Re: What have I missed?

2014-08-08 Thread safety0ff via Digitalmars-d

On Friday, 8 August 2014 at 22:28:56 UTC, Era Scarecrow wrote:
On Friday, 8 August 2014 at 21:35:59 UTC, Dmitry Olshansky 
wrote:

FYI
https://github.com/D-Programming-Language/phobos/pull/2248
https://github.com/D-Programming-Language/phobos/pull/2249

These were mostly bugfixes not trying to fix any design flaws.


 Yeah doesn't look like any of my code. Unfortunately due to 
the huge re-write i did most of those fixes go in the garbage 
(my rewrite probably covered most of them anyways).




Yea, one of those was my code and the other was somebody else's 
PR that I revived since they weren't responding.


I was moving forward with the philosophy that we should make the 
existing implementation as correct as possible and leave new 
features to new designs.


I think it will be difficult to make a one size fits all 
BitArray that satisfies everybody's wishes.

E.g.:
Bit level slice operations versus performance.
Value semantics versus D slice semantics.
Having compatibility with other parts of phobos versus having a 
maximum of 2^35-1 bits on a 32 bit system.


This is not as bad making a one size fits all fixed point 
integer, but it's not pleasant either.


Re: A little of coordination for Rosettacode

2014-08-07 Thread safety0ff via Digitalmars-d-learn

On Tuesday, 12 February 2013 at 01:07:35 UTC, bearophile wrote:


In practice at the moment I am maintaining all the D entries of 
Rosettacode.




Here's a candidate for 
http://rosettacode.org/wiki/Extensible_prime_generator#D in case 
it is preferred to the existing entry:

http://dpaste.dzfl.pl/43735da3f1d1


Re: Haskell calling D code through the FFI

2014-08-05 Thread safety0ff via Digitalmars-d-learn

On Tuesday, 5 August 2014 at 23:23:43 UTC, Jon wrote:
So that does indeed solve some of the problems. However, using 
this method, when linking I get two errors: undefined reference 
to rt_init() and rt_term(). I had just put these methods in the 
header file. If I put wrappers around these functions and export 
them, I get errors that rt_init and rt_term are private.




It works for me, here are the main parts of my Makefile:

DC = ~/bin/dmd

main: Main.hs FunctionsInD.a
	ghc -o main Main.hs FunctionsInD.a ~/lib/libphobos2.a -lpthread

FunctionsInD.a: FunctionsInD.d
	$(DC) -c -lib FunctionsInD.d

I passed in the phobos object directly because I don't know how 
to specify the ~/lib directory on the ghc command line.


Re: Haskell calling D code through the FFI

2014-08-04 Thread safety0ff via Digitalmars-d-learn
Don't forget to call rt_init: 
http://dlang.org/phobos/core_runtime.html#.rt_init


Re: Haskell calling D code through the FFI

2014-08-04 Thread safety0ff via Digitalmars-d-learn

On Monday, 4 August 2014 at 21:14:17 UTC, Jon wrote:

On Monday, 4 August 2014 at 21:10:46 UTC, safety0ff wrote:
Don't forget to call rt_init: 
http://dlang.org/phobos/core_runtime.html#.rt_init


Where/when should I call this?


Before calling any D functions, but usually it's simplest to call 
it early in main.


It initializes the GC and notifies the D runtime of its existence.
For simple D functions you might get away without calling it.


Re: Haskell calling D code through the FFI

2014-08-04 Thread safety0ff via Digitalmars-d-learn

On Monday, 4 August 2014 at 21:35:21 UTC, Jon wrote:
I get Error: core.runtime.rt_init is private.  And Error: 
core.runtime.init is not accessible.




I would add them to the header and Haskell wrapper 
(FunctionsInD.h and ToD.hs.)

The signatures are:
int rt_init();
int rt_term();

When it is linked it will find the symbols in druntime.


Re: Threadpools, difference between DMD and LDC

2014-08-03 Thread safety0ff via Digitalmars-d-learn

On Sunday, 3 August 2014 at 19:52:42 UTC, Philippe Sigaud wrote:


Can someone confirm the results and tell me what I'm doing 
wrong?


LDC is likely optimizing the summation:

int sum = 0;
foreach(i; 0..task.goal)
    sum += i;

To something like:

int sum = cast(int)(cast(ulong)(task.goal-1)*task.goal/2);


Re: unittest affects next unittest

2014-08-01 Thread safety0ff via Digitalmars-d-learn

On Friday, 1 August 2014 at 23:09:39 UTC, sigod wrote:

Code: http://dpaste.dzfl.pl/51bd62138854
(It was reduced by DustMite.)

Have I missed something about structs? Or this simply a bug?


Isn't this the same mistake as: 
http://forum.dlang.org/thread/muqgqidlrpoxedhyu...@forum.dlang.org#post-mpcwwjuaxpvwiumlyqls:40forum.dlang.org


In other words:
private Node * _root = new Node();
looks wrong.


Re: Getting the hash of any value easily?

2014-07-31 Thread safety0ff via Digitalmars-d

On Thursday, 31 July 2014 at 22:45:44 UTC, Timon Gehr wrote:


For example, because it might actually be random on the first 
invocation and cached later. :-)


This is interesting.
Some hash functions in druntime might break in an environment 
with a precise moving GC.

