Re: Any usable SIMD implementation?

2016-04-12 Thread Marco Leise via Digitalmars-d
The system seems to call CPUID at startup and for every
multiversioned function, patch an offset in its dispatcher
function. The dispatcher function is then nothing more than a
jump relative to RIP, e.g.:

  jmp    QWORD PTR [rip+0x200bf2]

This is as efficient as it gets short of using whole-program
optimization.

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-12 Thread Marco Leise via Digitalmars-d
Am Tue, 12 Apr 2016 10:55:18 +
schrieb xenon325 :

> Have you seen how GCC's function multiversioning [1] works?
> 
> This whole thread is far too low-level for me and I'm not sure if 
> GCC's dispatcher overhead is OK, but the syntax looks really nice 
> and it seems to address all of your concerns.
> 
>   __attribute__ ((target ("default")))
>   int foo ()
>   {
> // The default version of foo.
> return 0;
>   }
> 
>   __attribute__ ((target ("sse4.2")))
>   int foo ()
>   {
> // foo version for SSE4.2
> return 1;
>   }
> 
> 
> [1] https://gcc.gnu.org/wiki/FunctionMultiVersioning
> 
> -Alexander

Awesome! I just tried it and it ties runtime and compile-time
selection of code paths together in an unprecedented way!

As you said, there is the runtime dispatcher overhead if you
just compile normally. But if you specifically compile with
"gcc -msse4.2 <…>", GCC calls the correct function directly:

00400512 :
  400512:   e8 f5 ff ff ff  callq  40050c <_Z3foov.sse4.2>
  400517:   f3 c3   repz retq 
  400519:   0f 1f 80 00 00 00 00nopl   0x0(%rax)

For demonstration purposes I disabled the inliner here.

The best thing about it is that for users of libraries
employing this technique, it happens behind the scenes and user
code stays clean of instrumentation. No ugly versioning and
hand written switch-case blocks! (It currently only works with
C++ on x86, but I like the general direction.)

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-12 Thread Marco Leise via Digitalmars-d
Am Mon, 11 Apr 2016 14:29:11 -0700
schrieb Walter Bright :

> On 4/11/2016 7:24 AM, Marco Leise wrote:
> > Am Mon, 4 Apr 2016 11:43:58 -0700
> > schrieb Walter Bright :
> >  
> >> On 4/4/2016 9:21 AM, Marco Leise wrote:  
> >>> To put this to good use, we need a reliable way - basically
> >>> a global variable - to check for SSE4 (or POPCNT, etc.). What
> >>> we have now does not work across all compilers.  
> >>
> >> http://dlang.org/phobos/core_cpuid.html  
> >
> > That's what I implied in "what we have now":
> >
> > import core.cpuid;
> >
> > writeln( mmx );  // prints 'false' with GDC
> > version(InlineAsm_X86_Any)
> > writeln("DMD and LDC support the Dlang inline assembler");
> > else
> > writeln("GDC has the GCC extended inline assembler");  
> 
> There's no reason core.cpuid, which has a platform-independent API, cannot be 
> made to work with GDC and LDC. Adding more global variables to do the same 
> thing 
> would add no value and would not be easier to implement.

LDC implements InlineAsm_X86_Any (DMD style asm), so
core.cpuid works. GDC is the only compiler that does not
implement it. We agree that core.cpuid should provide this
information, but what we have now - core.cpuid combined with
GDC's lack of DMD-style asm - will not work in practice for
years to come.

> > Both LLVM and GCC have moved to "extended inline assemblers"
> > that require you to provide information about input, output
> > and scratch registers as well as memory locations, so the
> > compiler can see through the asm-block for register allocation
> > and inlining purposes. It's more difficult to get right, but
> > also more rewarding, as it enables you to write no-overhead
> > "one-liners" and "intrinsics" while having calling conventions
> > still handled by the compiler.  
> 
> I know, but "more difficult" is a bit of an understatement. For example, 
> core.cpuid has not been implemented using those assemblers.

Yep, and that makes it unavailable in GDC. All feature tests
return false, even MMX or SSE2 on amd64.

> BTW, dmd's inline assembler does know about which instructions read/write 
> which 
> registers, and makes use of that when inserting the code so it will work with 
> the rest of the code generator's register usage tracking.

That is a pleasant surprise. :)

> I find needing to tell gcc which registers are read/written by a particular 
> instruction to be a step BACKWARDS in usability. This is what computers are 
> supposed to be good for :-)

Still, DMD does not inline asm and always adds a function
prolog and epilog around asm blocks in an otherwise
empty function (correct me if I'm wrong). "naked" means you
have to duplicate code for the different calling conventions,
in particular Win32.

Your view of GCC (and LLVM) may be a bit biased. First of all
you don't need to tell it exactly which registers to use. A
rough classification is enough and gives the compiler a good
idea of where calculations should be stored upon arrival at
the asm statement. You can be specific down to the register
name or let the backend choose freely with "rm" (= any register
or memory).
An example: We have a variable x that is computed inside a
function followed by an asm block that multiplies it with
something else. Typically you would "MOV EAX, [x]" to load x
into the register that the MUL instruction expects. With
extended assemblers you can be declarative about that and just
state that x is needed in EAX as an input. You drop the MOV
from the asm block and let the compiler figure out in its
codegen, how x will end up in EAX. That's a step FORWARD in
usability.

> DMD doesn't inline functions with asm in them, but that is not the fault of 
> the 
> inline assembler.
> 
> The only real weakness in the DMD inline assembler is it doesn't support "let 
> the compiler select the register". DMD's strong support for compiler 
> builtins, 
> however, mitigate this to an acceptable level.

Yes, I've witnessed that in multiply with overflow check.
DMD generates very efficient code for 'mulu'. It's just that
the compiler cannot have builtins for everything. (I
personally was looking for 64-bit multiply with 128-bit
result and SSE4 string scanning.)
The extended assemblers in GCC and LLVM allow me to write
intrinsics, often as a single(!) instruction, that seamlessly
inlines into the surrounding code, just as DMD's builtins
would do.
And it seems to me we could have less backend complexity if we
were able to implement intrinsics as library code with the
same efficiency. ;) But most of the time when I want to access
a specialized CPU instruction for speed with asm in DMD, the
generic pure D code is faster. I would advise using it only
if the concept is not expressible in pure D at the moment.
You might add that we shouldn't write asm in the first place,
because compilers have become smart enough, but it's not
like I was writing large 

Re: Release D 2.071.0

2016-04-11 Thread Marco Leise via Digitalmars-d-announce
Am Thu, 07 Apr 2016 10:13:35 +
schrieb Dicebot :

> On Tuesday, 5 April 2016 at 22:43:05 UTC, Martin Nowak wrote:
> > Glad to announce D 2.071.0.
> >
> > http://dlang.org/download.html
> >
> > This release fixes many long-standing issues with imports and 
> > the module
> > system.
> > See the changelog for more details.
> >
> > http://dlang.org/changelog/2.071.0.html
> >
> > -Martin  
> 
> It is second time git tag for DMD release refers to commit with 
> VERSION file content which doesn't match the tag. May indicate 
> something is wrong in release procedure.

Or maybe something is wrong with source-based Linux
distributions in this day and age. I don't know. But I'm glad
that you fire-proof the source bundles first, before I move my
lazy ass to update DMD packages for Gentoo. I hope to start
from a good, clean 2.071.0/2.071.1 tarball. :D

Nice work on the import bugs. There is so much less in the
attic now.

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-11 Thread Marco Leise via Digitalmars-d
Am Wed, 6 Apr 2016 20:29:21 -0700
schrieb Walter Bright :

> On 4/6/2016 7:25 PM, Manu via Digitalmars-d wrote:
> > TL;DR, defining architectures with an intel-centric naming convention
> > is a very bad idea.  
> 
> You're not making a good case for a standard language defined set of 
> definitions 
> for all these (they'll always be obsolete, inadequate and probably wrong, as 
> you 
> point out).

We can either define the language in terms of CPU models or
features and Manu gave two good reasons to go with features:
1) Typically we end up with "version(SSE4)" and similar in our
   code, not "version(Haswell)".
2) On ARM chips it turns out difficult to translate models to
   features to begin with.

It wasn't a case for or against the feature in general.
That said, in the long run Dlang should grow such a language
feature.
Aside from scientific servers there are also a few Linux
distributions that compile and install most packages from
sources and telling the compiler to target the host CPU comes
naturally there. In practice there is likely some config file
that sets an environment variable like CFLAGS to
"-march=native" on such systems.
I understand that DMD doesn't concern itself with all that,
but the D language itself of which DMD is one implementation
should not artificially be limited compared to popular C/C++
compilers. I died a bit on the inside when I saw Phobos add
both popcnt and _popcnt of which the latter is the version that
uses the POPCNT instruction found in newer x86 CPUs.
In GCC or LLVM when we use such an intrinsic, the compiler will
take a look at the compilation target and pick the optimal
code at compile-time. In one micro-benchmark [1], POPCNT was
roughly 50 times faster than bit-twiddling. If I wanted an SSE4
version in otherwise generic amd64 code, I would add
@attribute("target", "+sse4") before the function using popcnt.

So in my eyes a system like GCC offers, where you can specify
target features on the command line and also override them for
specific functions is a viable solution that simplifies user
code (just picking the popcnt, clz, bsr, ... intrinsic will
always be optimal) and Phobos code by making _popcnt et al.
superfluous. In addition, the compiler could later error out on
mnemonics in our inline assembly that don't exist on the
target. This avoids unexpected "Illegal Instruction" crashes.

[1]
http://kent-vandervelden.blogspot.de/2009/10/counting-bits-population-count-and.html

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-11 Thread Marco Leise via Digitalmars-d
Am Mon, 4 Apr 2016 13:29:11 -0700
schrieb Walter Bright :

> On 4/4/2016 7:02 AM, 9il wrote:
> >> What kind of information?  
> >
> > Target cpu configuration:
> > - CPU architecture (done)  
> 
> Done.
> 
> > - Count of FP/Integer registers  
> 
> ??
> 
> > - Allowed sets of instructions: for example, AVX2, FMA4  
> 
> Done. D_SIMD

I wonder if answers like this are meant to be filled into a
template like this: "We have [$2] in place for that. If that
doesn't get the job $1, please report whatever is missing to
bugzilla. Thanks!"
Since otherwise it should be clear that the distinction
between AVX2 and FMA4 asks for something more specialized
than D_SIMD, which is basically the same as checking the
front-end __VERSION__.

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-11 Thread Marco Leise via Digitalmars-d
Am Mon, 4 Apr 2016 11:43:58 -0700
schrieb Walter Bright :

> On 4/4/2016 9:21 AM, Marco Leise wrote:
> >To put this to good use, we need a reliable way - basically
> >a global variable - to check for SSE4 (or POPCNT, etc.). What
> >we have now does not work across all compilers.  
> 
> http://dlang.org/phobos/core_cpuid.html

That's what I implied in "what we have now":

import core.cpuid;

writeln( mmx );  // prints 'false' with GDC
version(InlineAsm_X86_Any)
writeln("DMD and LDC support the Dlang inline assembler");
else
writeln("GDC has the GCC extended inline assembler");

Both LLVM and GCC have moved to "extended inline assemblers"
that require you to provide information about input, output
and scratch registers as well as memory locations, so the
compiler can see through the asm-block for register allocation
and inlining purposes. It's more difficult to get right, but
also more rewarding, as it enables you to write no-overhead
"one-liners" and "intrinsics" while having calling conventions
still handled by the compiler. An example for GDC:

struct DblWord { ulong lo, hi; }

/// Multiplies two machine words and returns a double
/// machine word.
DblWord bigMul(ulong x, ulong y)
{
DblWord tmp = void;
// '=a' and '=d' are outputs to RAX and RDX
// respectively that are bound to the two
// fields of 'tmp'.
// '"a" x' means that we want 'x' as input in
// RAX and '"rm" y' places 'y' wherever it
// suits the compiler (any general purpose
// register or memory location).
// 'mulq %3' multiplies with the ulong
// represented by the argument at index 3 (y). 
asm {
"mulq %3"
 : "=a" tmp.lo, "=d" tmp.hi
 : "a" x, "rm" y;
}
return tmp;
}

In the above example the compiler has enough information to
inline the function or directly return the result in RAX:RDX
without writing to memory first. The same thing in DMD would
likely have turned out slower than emulating this using
several uint->ulong multiplies.

Although less powerful, the LDC team implemented Dlang inline
assembly according to the specs and so core.cpuid works for
them. GDC on the other hand is out of the picture until either
1) GDC adds Dlang inline assembly
2) core.cpuid duplicates most of its assembly code to support
   the GCC extended inline assembler

I would prefer a common extended inline assembler though,
because when you use it for performance reasons you typically
cannot go with non-inlinable Dlang asm, so you end up with pure
D for DMD, GCC asm for GDC and LDC asm - three code paths.

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-11 Thread Marco Leise via Digitalmars-d
Am Mon, 04 Apr 2016 18:35:26 +
schrieb 9il :

> @attribute("target", "+sse4")) would not work well for BLAS. BLAS 
> needs compile time constants. This is very important because BLAS 
> can be 95% portable, so I just need to write a code that would be 
> optimized very well by compiler. --Ilya

It's just for the case where you want a generic executable
with a generic and a specialized code path. I didn't mean this
to be exclusively used without compile-time information about
target features. 

-- 
Marco



Re: Any reason as to why this isn't allowed?

2016-04-04 Thread Marco Leise via Digitalmars-d
Am Sat, 02 Apr 2016 13:02:18 +
schrieb Lass Safin :

> class C {
>  ~this() {}
>  immutable ~this() {}
> }
> 
> This gives a conflict error between the two destructors.

That is https://issues.dlang.org/show_bug.cgi?id=13628

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-04 Thread Marco Leise via Digitalmars-d
Am Mon, 04 Apr 2016 14:02:03 +
schrieb 9il :

> Target cpu configuration:
> - CPU architecture (done)
> - Count of FP/Integer registers
> - Allowed sets of instructions: for example, AVX2, FMA4
> - Compiler optimization options (for math)
> 
> Ilya

- On amd64, whether floating-point math is handled by the FPU
  or SSE. When emulating floating-point, e.g. for
  float-to-string and string-to-float code, it is useful to
  know where to get the active rounding mode from, since they
  may differ and at least GCC has a switch to choose between
  both.
- For compile time enabling of SSE4 code, a version define is
  sufficient. Sometimes we want to select a code path at
  runtime. For this to work, GDC and LDC use a conservative
  feature set at compile time (e.g. amd64 with SSE2) and tag
  each SSE4 function with an attribute to temporarily elevate
  the instruction set. (e.g. @attribute("target", "+sse4"))
  If you didn't tag the function like that the compiler would
  error out, because the SSE4 instructions are not supported
  by a minimal amd64 CPU.
  To put this to good use, we need a reliable way - basically
  a global variable - to check for SSE4 (or POPCNT, etc.). What
  we have now does not work across all compilers.

-- 
Marco



Re: Fixed-Length Array Sorting

2016-04-04 Thread Marco Leise via Digitalmars-d
Am Mon, 04 Apr 2016 12:11:20 +
schrieb Nordlöw :

> On Monday, 4 April 2016 at 10:53:51 UTC, Brian Schott wrote:
> > That's too readable.  
> 
> ?
 
When you talk about optimizing there is always a "how far will
you go". Your code is still plain D and barely digestible.
Hackerpilot presents a different solution that is outright
cryptic and platform dependent, but supposedly even faster.

The humor lies in the continued obfuscation of the higher
level sort operation to the point of becoming an obfuscated
code challenge and the controversy it starts in our minds when
we demand both fast and maintainable library code.

-- 
Marco



Re: Any usable SIMD implementation?

2016-04-04 Thread Marco Leise via Digitalmars-d
Am Sun, 03 Apr 2016 06:14:23 +
schrieb 9il :

> Hello Martin,
> 
> Is it possible to introduce compile time information about target 
> platform? I am working on BLAS from scratch implementation. And 
> it is no hope to create something useable without CT information 
> about target.
> 
> Best regards,
> Ilya

+1000!

I've hardcoded SSE4 in fast.json, but would much prefer to
type version(sse4) and have it compile on older CPUs as well.

-- 
Marco



Re: debugger blues

2016-03-29 Thread Marco Leise via Digitalmars-d
Am Mon, 28 Mar 2016 19:29:38 +
schrieb cy :

> On Sunday, 27 March 2016 at 15:40:47 UTC, Marco Leise wrote:
> > Is it just me? I've never heard of a programming environment, 
> > let alone a system programming language providing that 
> > information.
> 
> Well, not by default certainly. It is a bit pie-in-the-sky […].

At first you framed it like it was something to be expected in
a PL. As a pie-in-the-sky for DMD on the other hand I agree
with you. Maybe it is enough that GDC provides this feature.
Or maybe you could turn the idea into a DIP (with GCC as the
reference implementation) so eventually DMD and LDC can provide
this as well.

> > "DOESN'T LET YOU PUT SPACES BETWEEN THE ARGUMENTS" is just 
> > silly and factually wrong.
> 
> No, it's factually true. You can provide arguments that have 
> spaces in them, but there is no way to put spaces between the 
> arguments to your logging function. I would expect that the 
> logger would be called with the arguments provided, but instead 
> the logging code uses a private function to stringify and 
> concatenate all the arguments, and provides it to the logger as 
> one single opaque string. This is certainly something that could 
> be improved.

It is straightforward to put spaces between arguments:
warningf("%s %s %s", "puts", "spaces", "inbetween");

If you refer to the common headers that go before each
message, those are indeed not configurable for existing Logger
implementations. Swapping them out is not complicated though:

import std.concurrency : Tid;
import std.datetime : SysTime;
import std.experimental.logger;
import std.format : formattedWrite;
import std.stdio;

void main()
{
    stdThreadLocalLog = new class FileLogger
    {
        this() { super(stderr, LogLevel.all); }

        override protected void beginLogMsg(
            string file, int line, string funcName,
            string prettyFuncName, string moduleName,
            LogLevel logLevel, Tid threadId,
            SysTime timestamp, Logger logger)
        {
            import std.string : lastIndexOf;
            auto idx1 = file.lastIndexOf('/');
            auto idx2 = funcName.lastIndexOf('.');
            formattedWrite(this.file.lockingTextWriter(),
                "%s %s %s %d ", timestamp,
                file[idx1 + 1 .. $],
                funcName[idx2 + 1 .. $], line);
        }
    };

    warningf("%s", "<- the header now has spaces instead of ':' in it");
}

You could also make 'beginLogMsg' call a user provided
delegate for formatting to freely change the looks at runtime.
 
> What I did was write my own logging framework, that joins 
> arguments with spaces in between, and provides that as a single 
> argument to std.experimental.logging. But that requires me to 
> duplicate all the template boilerplate (func = __func__, file = 
> __file__, line = __line__ etc) so all the code in 
> std.experimental.logging to do that can't be used, reducing 
> std.experimental.logging to basically nothing other than writeln.

WTF? :p
If you wonder why some feature is not provided in the default
implementation, the answer is likely that Logger was meant to
be as light-weight as possible with the option to completely
carve it out and re-implement it. For example, if the header
could be formatted with a format string, "%s" would have been
the only option for the time stamp.

> > If you ask for a *guarantee* of no copy on struct return, then 
> > you are right.
> 
> I wish I was right. There are still some gotchas there, that I 
> don't fully understand. Things like:
> > BigStruct ret = returnsBigStructToo();
> 
> look the same as things like:
> > BigStruct ret = returnedbigstruct;
> 
> but the latter is actually a copy. D making paretheses optional 
> for properties makes it even more confusing.
> 
> > A a = b.myA // copy or emplacement?

I believe the idea is that to copy or not to copy a struct is
just an optimization step and in cases where this strictly
must not happen @disable this(this); catches all code that
would need to perform a copy to work. Maybe you can return
your structs by reference or pointer? It is hard to tell
without seeing the code, how it could be refactored with
Dlang's design choices in mind.

> But even taking that into account, I've had mysterious situations 
> where the pointer to a returned structure changed, from within 
> the function to without, without any apparant copying going on.

Be assured that if that happens without this(this) being
called, it is worth a bug report.



Re: char array weirdness

2016-03-29 Thread Marco Leise via Digitalmars-d-learn
Am Mon, 28 Mar 2016 16:29:50 -0700
schrieb "H. S. Teoh via Digitalmars-d-learn"
:

> […] your diacritics may get randomly reattached to
> stuff they weren't originally attached to, or you may end up with wrong
> sequences of Unicode code points (e.g. diacritics not attached to any
> grapheme). Using filter() on Korean text, even with autodecoding, will
> pretty much produce garbage. And so on.

I'm on the same page here. If it ain't ASCII parsable, you
*have* to make a conscious decision about whether you need
code units or graphemes. I've yet to find out about the use
cases for auto-decoded code-points though.

> So in short, we're paying a performance cost for something that's only
> arguably better but still not quite there, and this cost is attached to
> almost *everything* you do with strings, regardless of whether you need
> to (e.g., when you know you're dealing with pure ASCII data).

An unconscious decision made by the library that yields the
least likely intended and expected result? Let me think ...
mhhh ... that's worse than iterating by char. No talking
back :p.

-- 
Marco



Re: How to be more careful about null pointers?

2016-03-29 Thread Marco Leise via Digitalmars-d-learn
Am Tue, 29 Mar 2016 06:00:32 +
schrieb cy :

> struct Database {
>string derp;
>Statement prepare(string s) {
>   return Statement(1234);
>}
> }
> 
> struct Statement {
>int member;
>void bind(int column, int value) {
>   import std.stdio;
>   writeln("derp",member);
>}
> 
> }
> 
> 
> class Wrapper {
>Database something;
>Statement prep;
>this() {
>  something = Database("...");
>  prep = something.prepare("...");
>}
> }
> 
> Wrapper oops;
> void initialize() {
>oops = new Wrapper();
> }
> 
> class Entry {
>Wrapper parent;
>this(Wrapper parent) {
>  //this.parent = parent;
>  //oops
>  parent = parent;
>}
>void usefulmethod() {
>  parent.prep.bind(1,42);
>  //parent.prep.execute();
>  //parent.prep.reset();
>}
> }
> 
> void main() {
>initialize();
>auto entry = new Entry(oops);
>entry.usefulmethod();
> }
> 
> That program causes a segmentation fault on my machine. Somehow 
> despite never initializing Entry.parent, a class object (whose 
> default init is a null pointer), I can still call methods on it, 
> access members on it, and call methods on those members. No 
> warnings or errors. The segfault doesn't happen until the bind() 
> method.

Dlang aborts programs that run into invalid states. Common
invalid states are failing contract assertions, out of memory
conditions or in this case a null-pointer dereference.

When designing Dlang, Walter decided that null pointers don't
require checks as the operating system will eventually abort
the program and debuggers are designed to assist you in this
situation.

Some people with a different programming background feel that
a modern language should wrap every pointer access in a check
and possibly throw a recoverable Exception here or make
null pointers opt-in to begin with. The topic is still open for
discussion:
http://wiki.dlang.org/Language_issues#Null_References

-- 
Marco



Re: debugger blues

2016-03-27 Thread Marco Leise via Digitalmars-d
Am Fri, 25 Mar 2016 14:15:57 +
schrieb Craig Dillabaugh :

> On Friday, 25 March 2016 at 08:14:15 UTC, Iain Buclaw wrote:
> clip
> >
> >> * a pony
> >
> > Of all points above, this is one that can actually be arranged.
> >  No joke!
> 
> I've got a Border Collie I could throw in to the mix too if that 
> would be helpful.

My sister always wanted a pony. When we got a dog, she made
him run in circles, jump hurdles and pull the sled. O.o :p

-- 
Marco



Re: debugger blues

2016-03-27 Thread Marco Leise via Digitalmars-d
Am Fri, 25 Mar 2016 09:00:06 +
schrieb cy :

> No, the stack trace is the hierarchy of functions that are 
> currently calling each other. I meant the functions that had been 
> called previously, even the ones that have returned.

Is it just me? I've never heard of a programming environment,
let alone a system programming language providing that
information. While I understand, why the solidity of DWARF
debug information is important, this seems to me like a pie in
the sky idea similar to the pony. I guess the front-end would
have to instrument code for that and the debugger is out of
the picture, but you should detail that in a DIP. It does
sound like an interesting idea to supplement existing
instrumentation for profiling in DMD.

> > I'm afraid to say that only you can improve your own printf 
> > messages.
> 
> Or, write a print function that actually puts spaces between the 
> arguments, or write a function that adds the file and line so you 
> know where the statement was written and don't have to go hunting 
> for it. Or std.experimental.logger or whatever. I'd use 
> std.experimental.logger except it DOESN'T LET YOU PUT SPACES 
> BETWEEN THE ARGUMENTS >:(
> 
> (can you tell I programmed a lot in python?)

Yes, alright, but you can probably tell that std.logger was
modeled after the existing formatting functionality of
std.stdio (e.g. writefln) and that was modeled after printf, so
to put spaces between the arguments you just add them to the
formatting string. Just use format strings, case solved.
Surely we can look at Python style spaces between arguments,
but "DOESN'T LET YOU PUT SPACES BETWEEN THE ARGUMENTS" is just
silly and factually wrong.

> > Use @disable this(this); in your structs.
> 
> Well, obviously. But I meant a way to specify it in the function 
> I'm writing. Like @nocopy or @move or something.

If you ask for a *guarantee* of no copy on struct return, then
you are right. In practice, the memory for the return value is
allocated on the caller's stack and the callee writes directly
into it. No copy or copy constructor call is performed. In
addition most (all?) ABIs allow structs of 1 or 2 machine words
to be returned in registers (e.g. EAX:EDX on x86).

Proof:

BigStruct returnsBigStruct()
{
BigStruct ret = returnsBigStructToo();
ret.data[0] = 42;
return ret;
}


BigStruct returnsBigStructToo()
{
import std.range;
BigStruct ret = BigStruct( iota(20).array[0 .. 20] );
return ret;
}

struct BigStruct
{
int[20] data;

this(this)
{
import std.stdio;
writeln("I'm typically not called at all.");
}
}

void main()
{
import std.stdio;
immutable bigStruct = returnsBigStruct();
writeln( bigStruct.data );
}

-- 
Marco



Re: I need some help benchmarking SoA vs AoS

2016-03-26 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 26 Mar 2016 17:43:48 +
schrieb maik klein :

> On Saturday, 26 March 2016 at 17:06:39 UTC, ag0aep6g wrote:
> > On 26.03.2016 18:04, ag0aep6g wrote:
> >> https://gist.github.com/aG0aep6G/a1b87df1ac5930870ffe/revisions
> >
> > PS: Those enforces are for a size of 100_000 not 1_000_000, 
> > because I'm impatient.
> 
> Thanks, okay that gives me more more reliable results.
> for 1_000_000
> 
> benchmarking complete access
> AoS: 1 sec, 87 ms, 266 μs, and 4 hnsecs
> SoA: 1 sec, 491 ms, 186 μs, and 6 hnsecs
> benchmarking partial access
> AoS: 7 secs, 167 ms, 635 μs, and 8 hnsecs
> SoA: 1 sec, 20 ms, 573 μs, and 1 hnsec
> 
> This is sort of what I expected. I will do a few more benchmarks 
> now. I probably also randomize the inputs.

That looks more like it. :) There are a few things to keep in
mind. When you use constant data and don't use the result,
compilers can:

- Const-fold computations away.
- Specialize functions on compile-time known arguments. That
  works mostly as if the argument was a template argument. A
  new instance of the function is created for each invocation
  with a compile-time known value. (Disabling inlining won't
  prevent this.)
- Call pure functions with the same argument only once in a
  loop of 1_000_000.
- Replace 1_000_000 additions of the number X in a loop with
  the expression 1_000_000*X.

In addition to these real-world optimizations, when you don't
accumulate the result of the function call and print it or
store it in some global variable, the whole computation may be
removed as "no side-effect", as others have pointed out. When
inlining is used the compiler may also see through attempts to
only use a part of the result and remove instructions that
lead to the rest of it. For example when you return a struct
with two fields - a and b - and store the sum of a, but ignore
b, then the compiler may remove computations that are only
needed for b!

Try to generate input from random number generators or
external files. Disable inlining for the benchmarked function
via attribute or pragma(inline, false) or otherwise make sure
that the compiler cannot guess what any of the arguments are
and perform const-folding after inlining. When the result is
returned, make sure you use so much of it, that the
compiler cannot elide instructions after inlining. It is
often enough to just store it in a global variable.

-- 
Marco



Re: D Profile Viewer

2016-03-24 Thread Marco Leise via Digitalmars-d-announce
Am Thu, 24 Mar 2016 20:34:07 +
schrieb Andrew :

> Hi
> 
> I wrote a program to turn the non-human-readable trace.log into 
> an interactive HTML file that can be used to help profile a D 
> program.
> 
> Its here: https://bitbucket.org/andrewtrotman/d-profile-viewer
> 
> There's also a readme that (hopefully) explains how to use it.
> 
> Please let me know if you find any bugs.
> 
> Andrew.

Sexy pie charts! Although I'm using OProfile since it works
without instrumenting the code.

-- 
Marco



Re: Females in the community.

2016-03-23 Thread Marco Leise via Digitalmars-d
Am Wed, 23 Mar 2016 11:33:55 +
schrieb QAston :

> https://marketplace.visualstudio.com/items?itemName=shinnn.alex

"The novelist from my motherland excites a lot of sci-fi
addicts by his crazy storytelling."

… (from the screen-shot) turns into …

"The novelist from my native land excites a lot of sci-fi
people with a drug addiction by their disgusting storytelling."

I feel safer now, having avoided these lingual traps.

-- 
Marco



Re: How can I report what I think a compiler's frontend bug

2016-03-20 Thread Marco Leise via Digitalmars-d
Am Sun, 20 Mar 2016 22:37:37 +
schrieb Vincent R :

> Now I need to understand what the original author wanted
> to do by declaring these 2 constructors.
 
It reminds me of what other wrappers do. The second
constructor pair is the proper one and takes the regular
arguments to construct the object. Sometimes though you
already got an object returned from C code and you merely want
to wrap it. That's what the first constructor pair is for. You
can see how the second pair of constructors always forwards to
the first pair with a newly created wxTreeItemId expressed as
an IntPtr.

Maybe IntPtr should be a pointer to an opaque struct instead:
  struct WxObject;
  alias WxObject* IntPtr;
That could resolve the current issue that 2 of the
constructors take the same types of arguments, (void*).
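A minimal sketch of that opaque-handle idea (names are
hypothetical, following the WxObject suggestion above): each
C-side handle becomes a distinct D pointer type, so no two
constructors end up taking the same (void*) argument.

```d
// Opaque handles: no struct body, so the pointee can never be
// dereferenced or constructed on the D side, and distinct handle
// types cannot be mixed up.
struct WxObject;
alias IntPtr = WxObject*;

struct WxOtherObject;
alias OtherPtr = WxOtherObject*;

void wrapTreeItem(IntPtr id) {} // hypothetical wrapper-ctor stand-in

void main()
{
    IntPtr p = null;
    wrapTreeItem(p);        // fine
    OtherPtr q = null;
    // wrapTreeItem(q);     // error: OtherPtr is a different type
    static assert(!is(OtherPtr : IntPtr));
}
```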

If this is the only place with this error, you can probably
remove the 3rd and 4th ctor. From the manual: "A wxTreeItemId
is not meant to be constructed explicitly by the user; only those
returned by the wxTreeCtrl functions should be used."
And taking a wild guess here without looking at the code, I
assume TreeCtrl calls the 2nd ctor.

-- 
Marco



Re: How can I report what I think a compiler's frontend bug

2016-03-20 Thread Marco Leise via Digitalmars-d
Am Sun, 20 Mar 2016 11:28:19 +
schrieb Vincent R :

> Hi,
> 
> I would like to start a new project (a bonjour/zeroconf wrapper 
> and a gui browser using it).
> For the gui part I would like to use my existing skills using 
> wxWidgets wrapper (wxD).
> So I have started to report a problem a few months ago:
> https://forum.dlang.org/post/rtarlodeojnmedgsn...@forum.dlang.org
> 
> But so far(DMD32 D Compiler v2.070.2) it's still not fixed.
> Do you think it will be fixed one day ?
> 
> Thanks

Yes, I think it will be fixed one day, since - as you know
by reading the very thread you linked - it is already reported
and the GDC developers chimed in and considered it critical.
There are also 118 open critical/blocker bugs that were
reported before yours. Most of them by people here on the
forums and you would need to explain to us why your bug
deserves higher attention than the others (154 in total).

Dlang is free open-source software and there is only a handful
of people who fix bugs in the compiler just for the
sake of improving it. Most people contribute occasionally when
they are interested in a solution to a particular problem.

If you really need this fixed now ... you know the drill. I
suggest you analyze the problem and start a discussion about
it. Honestly asking why the compiler emits duplicate symbols
in a reduced test case might have yielded you some good
responses from the people who wrote the code.

-- 
Marco



Re: std.allocator issues

2016-03-11 Thread Marco Leise via Digitalmars-d
Am Tue, 8 Mar 2016 16:35:45 -0500
schrieb Andrei Alexandrescu :

> On 3/7/16 11:53 PM, Marco Leise wrote:
> > By the way: jemalloc has `mallocx()` to allocate at least N
> > bytes and `sallocx()` to ask for the actual size of an
> > allocation.
> 
> I know. Jason added them at my behest. -- Andrei

No further questions, your honor. :)

-- 
Marco



Re: std.allocator issues

2016-03-07 Thread Marco Leise via Digitalmars-d
Am Sat, 20 Feb 2016 08:47:47 -0500
schrieb Andrei Alexandrescu :

> On 02/20/2016 12:39 AM, Steven Schveighoffer wrote:
> > Given that there is "goodAllocSize", this seems reasonable. But for
> > ease-of-use (and code efficiency), it may be more straightforward to
> > allow returning more data than requested (perhaps through another
> > interface function? allocateAtLeast?) and then wrap that with the other
> > allocation functions that simply slice the result down to size.
> 
> Actually I confess that was the exact design for a while, and I, too, 
> found it ingenious. You ask me for 100 bytes but I'll allocate 256 
> anyway, why not I just return you the whole 256? But then that puts 
> burden everywhere. Does the user need to remember 256 so they can pass 
> me the whole buffer for freeing? That's bad for the user. Does the 
> allocator accept 100, 256, or any size in between? That complicates the 
> specification.

I found it ingenious, too. The question of how much room was
actually freed is a common one in programming. Aside from memory
allocations it can also appear in single reader/writer
circular buffers, where the reader consumes a chunk of memory
and the waiting writer uses `needAtLeast()` to wait until at
least X bytes are available. The writer then caches the actual
number of free bytes and can potentially write several more
entries without querying the free size again, avoiding
synchronization if reader and writer are threads.
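The writer-side caching described above can be sketched like
this (a toy model, not a real SPSC buffer: `needAtLeast` here
simply reports the whole capacity instead of blocking on the
reader):

```d
// The point is the caching: many writes, one synchronization query.
struct RingWriter
{
    size_t capacity = 4096;
    size_t cachedFree = 0;
    size_t queries = 0; // counts synchronization round-trips

    size_t needAtLeast(size_t n)
    {
        ++queries;
        // Real code: wait until the reader has freed >= n bytes.
        return cachedFree = capacity;
    }

    void put(const(ubyte)[] data)
    {
        if (cachedFree < data.length)
            cachedFree = needAtLeast(data.length);
        // Real code: copy `data` into the ring here.
        cachedFree -= data.length;
    }
}

void main()
{
    auto w = RingWriter();
    auto chunk = new ubyte[100];
    foreach (i; 0 .. 10)
        w.put(chunk);       // ten 100-byte writes...
    assert(w.queries == 1); // ...but only one query for free space
}
```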

Most raw memory allocators overallocate, be it due to fixed
pools or alignment and the extra bytes can be used at a higher
level (in typed allocators or containers) to grow a data
structure or for potential optimizations. I think for
simplicity's sake they should return the overallocated buffer
and expect that length when returning it in `free()`. So in
the above case, 256 would have to be remembered. This is not a
conceptual burden, as we are already used to the 3 properties:

ptr, length = 100, capacity = 256 

By the way: jemalloc has `mallocx()` to allocate at least N
bytes and `sallocx()` to ask for the actual size of an
allocation.

-- 
Marco



Re: Speed kills

2016-03-07 Thread Marco Leise via Digitalmars-d
Am Wed, 17 Feb 2016 19:55:08 +
schrieb Basile B. :

> Also, forgot to say, but an uniform API is needed to set the 
> rounding mode, whether SSE is used or the FPU...

At least GCC has a codegen switch for that. A solution would
have to either set both rounding modes at once or the
compilers would need to expose version MathFPU/MathSSE.

-- 
Marco



Re: If stdout is __gshared, why does this throw / crash?

2016-03-05 Thread Marco Leise via Digitalmars-d-learn
Got it now: https://issues.dlang.org/show_bug.cgi?id=15768

writeln() creates a copy of the stdout struct in a
non-thread-safe way. If stdout has been assigned a File struct
created from a file name, this copy includes a "racy"
increment/decrement of a reference count to the underlying
C-library FILE*. When the reference count erroneously reaches
0, the file is closed prematurely, and when glibc then tries
to access internal data the result is the observed SIGSEGV.

-- 
Marco



Re: If stdout is __gshared, why does this throw / crash?

2016-03-05 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 05 Mar 2016 14:18:31 +
schrieb Atila Neves :

> void main() {
>  stdout = File("/dev/null", "w");
>  foreach(t; 1000.iota.parallel) {
>  writeln("Oops");
>  }
> }

First thing I tried:

void main() {
 stdout = File("/dev/null", "w");
 foreach(t; 1000.iota.parallel) {
 stdout.writeln("Oops");
 }
}

That does NOT segfault ... hmm.

-- 
Marco



Re: Backslash escaping weirdness

2016-03-05 Thread Marco Leise via Digitalmars-d-learn
Am Tue, 01 Mar 2016 05:14:13 +
schrieb Nicholas Wilson :

> On Tuesday, 1 March 2016 at 04:48:01 UTC, Adam D. Ruppe wrote:
> > On Tuesday, 1 March 2016 at 04:18:11 UTC, Nicholas Wilson wrote:
> >> What is causing these errors? I'm using \t and \n in string 
> >> all over the place and they work.
> >
> > I don't think there's enough context to know for sure... but my 
> > guess is you forgot to close one of the quotes a couple lines 
> > above.
> >
> > So look up for an unpaired "
> 
> It was. Thanks.

And then God created syntax highlighting and He saw that it
was good. ;)

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-08 Thread Marco Leise via Digitalmars-d-learn
Am Tue, 09 Feb 2016 00:38:10 +
schrieb tsbockman :

> On Sunday, 7 February 2016 at 02:11:15 UTC, Marco Leise wrote:
> > What I like most about your proposal is that it doesn't break 
> > any existing code that wasn't broken before. That can't be 
> > emphasized enough.
> 
> Although I wish more than 3 people had voted in my poll, two of 
> them did claim to need the `ref`-ness of `Tuple.slice`, so I 
> don't think we can just ditch it. (I did not vote.)
> 
> If you guys want to add a return-by-value version, it should be 
> treated as an enhancement, not a bug fix in my opinion.

As mentioned, I never used the feature myself and won't vote
for one or the other. Three people with no source code to
exemplify current use of .slice! is indeed not much to base
decisions on and both fixes yield unexpected results in
different contexts that warrant bug reports.

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-06 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 06 Feb 2016 11:02:37 +
schrieb tsbockman :

> On Saturday, 6 February 2016 at 08:47:01 UTC, Saurabh Das wrote:
> > I think we should add a static assert to slice to ensure that 
> > the current implementation is not used in a case where the 
> > alignment doesn't match. This is better than failing without 
> > any warning.
> 
> If we pursue the deprecation route, I agree that this is a 
> necessary step.

That would hurt the least, yes. It's more like a .dup with
start and end parameters, then.

> > We could add new (differently named) functions for slicing 
> > non-aligned tuples.
> >
> > I agree that my approach of removing the ref may break existing 
> > code, so if introduced, it should be named differently.
> >
> > I am not comfortable with tuple(42, true, "abc").slice(1, 3) 
> > being different in type from tuple(true, " abc").
> 
> Why? What practical problem does this cause?

For me it is mostly a gut feeling. Historically types
mimicking other types have often evoked trouble for example in
generic functions or concretely - if you look at my other post
- in D's AA implementation.

-- 
Marco



Re: Things that keep D from evolving?

2016-02-06 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 06 Feb 2016 11:47:02 +
schrieb Ola Fosheim Grøstad
:

> Of course, Swift does not aim for very high performance, but for 
> convenient application/gui development. And frankly JavaScript is 
> fast enough for that kind of programming.

My code would not see much ref counting in performance critical
loops. There is no point in ref counting every single point in
a complex 3D scene.
I could imagine it used on bigger items. Textures for example
since they may be used by several objects. Or - a prime
example - any outside resource that is potentially scarce and
benefits from deterministic release: file handles, audio
buffers, widgets, ...

-- 
Marco



Re: Things that keep D from evolving?

2016-02-06 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 06 Feb 2016 23:18:59 +
schrieb Ola Fosheim Grøstad
:

> Things that could speed up collection:
> - drop destructors so you don't track dead objects

Interesting, that would also finally force external resources
off the GC heap and into deterministic release. That needs a
solution to inheritance though. Think widget kits.

-- 
Marco



Re: Custom hash table key is const, how to call dtors?

2016-02-06 Thread Marco Leise via Digitalmars-d-learn
Am Sun, 07 Feb 2016 01:05:28 +
schrieb cy :

> On Saturday, 6 February 2016 at 03:57:16 UTC, Marco Leise wrote:
> 
> > No, but they could have dtors because they contain malloc'd 
> > data. E.g. string literals that don't live on the GC heap.
> 
> Character arrays allocated with glibc malloc are immutable? News 
> to me...

err... well... you got a point there, but then new string(100)
is probably allocated with malloc, too, deep down in druntime.

immutable really means that there is no mutable reference to
the data. At any point in your code you can cast something to
immutable when you know that no mutable references will exist
thereafter. We do this all the time during construction of
immutable stuff, because when something is newly created,
there is only one unique reference that is turned immutable
after construction and you are set.
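Phobos wraps exactly this construction pattern in
std.exception.assumeUnique, which turns a unique mutable slice
into an immutable one and clears the source reference:

```d
import std.exception : assumeUnique;

immutable(int)[] makeTable()
{
    auto buf = new int[4];        // the only (unique) mutable reference
    foreach (i, ref v; buf)
        v = cast(int)(i * i);
    return assumeUnique(buf);     // clears buf; no mutable ref survives
}

void main()
{
    static assert(is(typeof(makeTable()) == immutable(int)[]));
    assert(makeTable() == [0, 1, 4, 9]);
}
```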

You can go the same route with other MM schemes such as
malloc, just that without a GC you are responsible for not
freeing the immutable data as long as there are references to
it. For example (and this applies to D's GC'd AA, too) you must
not delete entries while you iterate over the keys. There is
no way to say "Hey I just borrowed the list of keys, please
disallow any writes to it."

For now, issues related to dtor-constness need to be fixed.
Then working with immutable data structures is a lot less of a
mine field.

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-06 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 06 Feb 2016 07:57:08 +
schrieb tsbockman :

> On Saturday, 6 February 2016 at 06:34:05 UTC, Marco Leise wrote:
> > I don't want to sound dismissive, but when that thought came
> > to my mind I considered it unacceptable that the type of
> > Tuple!(int, bool, string).slice(1, 3) would be something
> > different than Tuple!(bool, string). In your case
> > Tuple!(TuplePad!4LU, bool, string). That's just a matter of
> > least surprise when comparing the types.
> >
> > I'll let others decide, since I never used tuple slices.
> 
> I'm not sure which approach is ultimately better, but aside from 
> the performance implications, your "needed change" could break a 
> lot of valid code in the wild - or it might break none; it really 
> just depends on whether anyone actually *uses* the `ref`-ness of 
> the `Tuple.slice` return type.

True.
 
> (It appears that Phobos, at least, does not. But there is no 
> guarantee that the rest of the world is using `Tuple` only in the 
> ways that Phobos does.)
> Leaving aside bizarre meta-programming stuff (because then 
> *anything* is a breaking change), my PR does not break any code, 
> except that which was already broken: the type of the slice is 
> only different in those cases where it *has* to be, for alignment 
> reasons; otherwise it remains the same as it was before.

I understand that. We just have a different perspective on the
problem. Your priorities:

- don't break what's not broken
- .slice! lends on opSlice and should return by ref

My priorities:

- type of .slice! should be as if constructing with same
  values from scratch
- keep code additions in Phobos to a minimum

Why do I insist on the return type? Because surprisingly
simple code breaks if it doesn't match. Not everything can be
covered by runtime conversions in D. It still took me a while
to come up with something obvious:

uint[Tuple!(uint, ulong)] hash;

auto tup = tuple(1u, 2u, 3UL);

hash[tup.slice!(1, 3)] = tup[0];

                     compiles?  works?
original Tuple     :   yes        no
Saurabh Das changes:   yes        yes
your changes       :   no         no

What I like most about your proposal is that it doesn't break
any existing code that wasn't broken before. That can't be
emphasized enough.

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-05 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 06 Feb 2016 04:28:17 +
schrieb tsbockman :

> On Friday, 5 February 2016 at 19:16:11 UTC, Marco Leise wrote:
> >> > 1. Removing 'ref' from the return type
> >
> > Must happen. 'ref' only worked because of the reinterpreting 
> > cast which doesn't work in general. This will change the 
> > semantics. Now the caller of 'slice' will deal with a whole new 
> > copy of everything in the returned slice instead of a narrower 
> > view into the original data. But that's a needed change to fix 
> > the bug.
> 
> Actually, it's not: 
> https://github.com/D-Programming-Language/phobos/pull/3973
> 
> All that is required is to include a little padding at the 
> beginning of the slice struct to make the alignments match.

I don't want to sound dismissive, but when that thought came
to my mind I considered it unacceptable that the type of
Tuple!(int, bool, string).slice(1, 3) would be something
different than Tuple!(bool, string). In your case
Tuple!(TuplePad!4LU, bool, string). That's just a matter of
least surprise when comparing the types.

I'll let others decide, since I never used tuple slices.

-- 
Marco



Re: Template to create a type and instantiate it

2016-02-05 Thread Marco Leise via Digitalmars-d-learn
Mixin templates are the way to go if you want something new on
every use of the template. Otherwise using the template
multiple times with the same arguments will always give you
the first instance.
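A small illustration of the difference: a mixin template stamps
out fresh declarations at every mixin site, whereas an ordinary
template instantiated with equal arguments is cached and reused.

```d
// A fresh `next` counter is generated for every `mixin Id;` site.
// An ordinary `template Id(T)` instantiated twice with the same T
// would instead share one cached instance.
mixin template Id()
{
    static int next;
    int make() { return next++; }
}

struct A { mixin Id; }
struct B { mixin Id; }

void main()
{
    A a;
    B b;
    assert(a.make() == 0);
    assert(a.make() == 1); // A's counter advanced...
    assert(b.make() == 0); // ...but B got its own copy of `next`
}
```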

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-05 Thread Marco Leise via Digitalmars-d-learn
Am Fri, 05 Feb 2016 05:31:15 +
schrieb Saurabh Das :

> On Friday, 5 February 2016 at 05:18:01 UTC, Saurabh Das wrote:
> [...]
> 
> Apologies for spamming. This is an improved implementation:
> 
>  @property
>  Tuple!(sliceSpecs!(from, to)) slice(size_t from, size_t 
> to)() @safe const
>  if (from <= to && to <= Types.length)
>  {
>  return typeof(return)(field[from .. to]);
>  }
> 
>  ///
>  unittest
>  {
>  Tuple!(int, string, float, double) a;
>  a[1] = "abc";
>  a[2] = 4.5;
>  auto s = a.slice!(1, 3);
>  static assert(is(typeof(s) == Tuple!(string, float)));
>  assert(s[0] == "abc" && s[1] == 4.5);
> 
>  Tuple!(int, int, long) b;
>  b[1] = 42;
>  b[2] = 101;
>  auto t = b.slice!(1, 3);
>  static assert(is(typeof(t) == Tuple!(int, long)));
>  assert(t[0] == 42 && t[1] == 101);
>  }

That's quite concise. I like this. Though 'field' is now
called 'expand':
// backwards compatibility
alias field = expand;

> These questions still remain:
> > 1. Removing 'ref' from the return type

Must happen. 'ref' only worked because of the reinterpreting
cast which doesn't work in general. This will change the
semantics. Now the caller of 'slice' will deal with a whole
new copy of everything in the returned slice instead of a
narrower view into the original data. But that's a needed
change to fix the bug.

> > 2. Adding 'const' to the function signature

Hmm. Since const is transitive this may add const to stuff
that wasn't const, like in a Tuple!(Object). When you call
const slice on that, you would get a Tuple!(const(Object)).
I would use inout, making it so that the tuple's original
constness is propagated to the result.

I.e.: @property inout(Tuple!(sliceSpecs!(from, to)))
slice(size_t from, size_t to)() @safe inout

> > 3. Is the new implementation less efficient for correctly 
> > aligned tuples?

Yes, the previous one just added a compile-time known offset
to the "this"-pointer. That's _one_ assembly instruction after
inlining and optimization.
The new one makes a copy of every field. On struct fields this
can call the copy constructor "this(this)" which is used for
example in reference counting to perform an increment for the
copy.
On "plain old data" it would simply copy the bit patterns. But
that's obviously still less efficient than adding an offset to
the pointer.
You need two methods if you want to offer the best of both
worlds. As soon as your function does not return a pointer or
has 'ref' on it, the compiler will provide memory on the stack
to hold the result and a copy will occur.
That said, a sufficiently smart compiler can analyze what's
happening and conclude that if the original is no longer used
after the copy, the two can be merged.
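The field-copy cost is observable through the postblit. In this
sketch (hypothetical types, standing in for Tuple's fields), a
by-value return runs this(this) once per field, while a ref
return copies nothing:

```d
struct RC
{
    static int copies;
    this(this) { ++copies; } // postblit, e.g. a refcount increment
}

struct Pair { RC a; RC b; }

Pair byValue(ref Pair p) { return p; }          // copies every field
ref Pair byRef(return ref Pair p) { return p; } // just passes an address

void main()
{
    Pair p;
    auto q = byValue(p);
    assert(RC.copies == 2);  // this(this) ran once per field

    auto view = &byRef(p);
    assert(RC.copies == 2);  // no copies for the ref version
}
```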

> 4. @trusted -> @safe?

Sounds good, but be aware of the mentioned implications with
"this(this)". Copy constructors often need to do unsafe
things, so @safe here would disallow them in Tuples.
On the other hand recent versions of the front-end should
infer attributes for templates, so you can generally omit them
and "let the Tuple decide". This mechanism also already adds
@nogc, pure, nothrow as possible in both the original and your
implementation.
(The original code only had @trusted on it because the
compiler would always infer the safety as @system due to the
pointer casts and @system is close to intolerable for a
"high-level" functional programming feature such as tuples.
The other attributes should have been inferred correctly.)

-- 
Marco



Custom hash table key is const, how to call dtors?

2016-02-05 Thread Marco Leise via Digitalmars-d-learn
Usually I want the keys to be declared "immutable" to signal
that their content must not change in order to provide stable
hashes. But when you remove items from the table you need to
call a const/immutable dtor that needs to be written for
everything that can be a hash table key.

What do you put into such a const/immutable dtor?
How do others deal with this?

-- 
Marco



Re: Custom hash table key is const, how to call dtors?

2016-02-05 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 06 Feb 2016 03:38:54 +
schrieb cy :

> On Friday, 5 February 2016 at 22:18:50 UTC, Marco Leise wrote:
> > But when you remove items from the table you need to call a 
> > const/immutable dtor that needs to be written for everything 
> > that can be a hash table key.
> 
> You need to write destructors for hash keys? How would you use 
> string literals as keys then? Could you provide an example 
> maybe...?

No, but they could have dtors because they contain malloc'd
data. E.g. string literals that don't live on the GC heap.

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-04 Thread Marco Leise via Digitalmars-d-learn
https://issues.dlang.org/show_bug.cgi?id=15645


Re: Octree implementation?

2016-02-04 Thread Marco Leise via Digitalmars-d-learn
Am Mon, 01 Feb 2016 02:56:06 +
schrieb Tofu Ninja :

> Just out of curiosity, does anyone have an octree implementation 
> for D laying around? Just looking to save some time.

I have one written in Delphi that you could prune till it fits.

-- 
Marco



Re: Is this a bug in std.typecons.Tuple.slice?

2016-02-04 Thread Marco Leise via Digitalmars-d-learn
Am Thu, 04 Feb 2016 15:17:54 +
schrieb Saurabh Das :

> On Thursday, 4 February 2016 at 12:28:39 UTC, Saurabh Das wrote:
> > This code:
> > [...]
> 
> Update: Simplified, this also doesn't work:
> 
> void main()
> {
>  import std.typecons;
>  auto tp = tuple(10, false, "hello");
> 
>  auto u0 = tp.slice!(0, tp.length);
>  auto u1 = tp.slice!(1, tp.length);
>  auto u2 = tp.slice!(2, tp.length);
> 
>  static assert(is(typeof(u0) == Tuple!(int, bool, string)));
>  static assert(is(typeof(u1) == Tuple!(bool, string)));
>  static assert(is(typeof(u2) == Tuple!(string)));
> 
>  assert(u2[0] == "hello");
>  assert(u0[2] == "hello");
>  assert(u1[1] == "hello");// This fails
> }
> 
> rdmd erasetype.d
> core.exception.AssertError@erasetype.d(16): Assertion failure
> 
> Any ideas?

Yes, this is a clear bug. I'll file a report and post the
issue number later.

-- 
Marco



Re: Distributed Memory implementation

2016-01-20 Thread Marco Leise via Digitalmars-d
Am Mon, 18 Jan 2016 05:59:15 +
schrieb tcak <1ltkrs+3wyh1ow7kz...@sharklasers.com>:

> I, due to a need, will start implementation of distributed memory 
> system.
> 
> Idea is that:
> 
> Let's say you have allocated 1 GiB space in memory. This memory 
> is blocked into 4 KiB.
> 
> After some reservation, and free operations, now only the blocks 
> 0, 12, and 13 are free to be allocated.
> 
> Problem is that those memory blocks are not consecutive.

Well ... there is a way that is a bit arcane. If you don't
need a lot of such remappings and you use page sized blocks
anyways, you could remap the scattered virtual memory to a new
consecutive range and be done.
Basically you reserve a new uncommitted (e.g. not backed by
physical memory) virtual memory range, remap your scattered
blocks into that range in the desired order and then release
the original blocks. I used this technique in a circular
buffer implementation, mirroring the start of the buffer to the
end, thus avoiding the wrapping problems that normally occur
when accessing data close to the end in a traditional
circular buffer.
Caveats: You'd need to allocate/free virtual memory directly
in multiples of the allocation granularity (which is the same
as the page size on POSIX) and you put some stress on the
virtual memory system. A process may have at most 2^16
memory mappings at a time on my Linux system.
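Here is a minimal POSIX sketch of that mirroring trick as used
in the circular buffer mentioned above (Linux-flavored, error
handling reduced to asserts): reserve two pages of address
space, then map the same file-backed page into both halves.

```d
import core.sys.posix.sys.mman;
import core.sys.posix.stdlib : mkstemp;
import core.sys.posix.unistd : close, ftruncate, sysconf, unlink, _SC_PAGESIZE;

void main()
{
    immutable page = cast(size_t) sysconf(_SC_PAGESIZE);

    // One page of backing storage in a temp file (Linux also offers
    // memfd_create/shm_open for this).
    char[] path = "/tmp/mirrorXXXXXX\0".dup;
    immutable fd = mkstemp(path.ptr);
    assert(fd != -1);
    unlink(path.ptr); // keep it anonymous
    ftruncate(fd, page);

    // Reserve two pages of contiguous, uncommitted address space...
    void* base = mmap(null, 2 * page, PROT_NONE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
    assert(base != MAP_FAILED);

    // ...then map the same file page into both halves (MAP_FIXED
    // replaces the reservation in place).
    auto lo = cast(ubyte*) mmap(base, page, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_FIXED, fd, 0);
    auto hi = cast(ubyte*) mmap(cast(ubyte*) base + page, page,
                                PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_FIXED, fd, 0);
    assert(lo is cast(ubyte*) base && hi is lo + page);

    lo[0] = 42;          // a write near the "start"...
    assert(hi[0] == 42); // ...is visible one page later: no wrapping logic

    munmap(base, 2 * page);
    close(fd);
}
```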

-- 
Marco


Re: Three people out of four dislike SDL

2015-12-02 Thread Marco Leise via Digitalmars-d
Am Tue, 01 Dec 2015 03:49:07 +
schrieb Adam D. Ruppe :

> simpledisplay.d can do what SDL does :P
> 
> oh wait this is about the other SDL
> 
> well I want to talk about simpledisplay. I've been doing a 
> documentation book in the ddoc with lots of examples:
> 
> http://arsdnet.net/arsd/simpledisplay.html
> 
> (similar docs are on the way for cgi, dom, minigui, and 
> simpleaudio btw)
> 
> I'm so close to being able to replace SDL with just a couple 
> little D files - no more annoying DLLs!

This is actually good marketing. Everyone into cross-platform
graphics knows what to expect from SDL. Most people here
heard of simpledisplay, but it remained unclear what the scope
is. It can create windows, ok. But by saying the feature set
is the same as SDL we know that it will work on OS X, Windows,
Linux, handle input from various sources, at least mouse,
keyboard and joystick, create OpenGL contexts, handle
fullscreen mode etc. E.g. if you are accustomed to the feature
set SDL offers and don't want to miss anything you take for
granted there, simpledisplay is the way to go.

(I assume there is still a bit of functionality missing, but
then feature parity with SDL is still the target and you are
more likely to add requested features if they are already
offered by SDL.)

-- 
Marco



Re: Graillon 1.0, VST effect fully made with D

2015-11-27 Thread Marco Leise via Digitalmars-d-announce
We can probably agree that we don't know about the impact on a
large multimedia application written in D. What you can
communicate is: to write real-time VSTs, create a @nogc thread
routine and don't register it with the GC.

Guillaume did a good job, taking the GC out of the real-time
thread. It's D, it is a bit of a hack, it's the correct way to
do it and works. But I don't see it debunking any myths about
GC and real time...
A) It doesn't mix them to begin with.
B) "Realtime GCs" are a thing. D's GC is not optimized for
   such a use case.
C) With a small heap it doesn't matter. (We need more complex
   multimedia projects.)

What I've seen is a program, a non-linear video editor, called
PowerDirector that pauses for seconds every now and then.
These pauses reminded me a lot of GC pauses, but I can't be
sure. Although memory use is less after the pause, it could
also be a cleaning of caches. In any case quite a few of these
applications try to make "good use" of available RAM, causing
constant memory pressure.

Now there has been so much talk about the GC that I don't even
know what the filter does!

-- 
Marco



Re: Something about Chinese Disorder Code

2015-11-24 Thread Marco Leise via Digitalmars-d-learn
Am Tue, 24 Nov 2015 17:08:33 +
schrieb BLM768 :

> On Tuesday, 24 November 2015 at 09:48:45 UTC, magicdmer wrote:
> > I display chinese string like:
> >
> > auto str = "你好,世界"
> > writeln(str)
> >
> > and The display is garbled。
> >
> > some windows api like MessageBoxA ,if string is chinese, it 
> > displays disorder code too
> >
> > i think i must use WideCharToMultiByte to convert it , is there 
> > any other answer to solve this question simplely
> 
> You can also try using a wide string literal, i.e. "你好,世界"w. The 
> suffix forces the string to use 16-bit characters.
> 
> The garbled display from writeln might be related to your console 
> settings. If you aren't using the UTF-8 codepage in cmd.exe, that 
> would explain why the text appears garbled. Unfortunately, 
> Windows has some significant bugs with UTF-8 in the console.

This is really our problem. We pretend the output terminal is
UTF-8 without providing any means to configure the terminal or
converting the output to the terminal's encoding. All OSs
provide conversion API that works for the most part (assuming
normalization form D for example), but we just don't use them.
Most contributors were on Linux or English Windows systems
where the issue is not obvious. But even in Europe łáö will
come out garbled.

Now this is mostly not an issue with modern Linux and OS X as
the default is UTF-8, but some people refuse to change to
variable length encodings and Windows has always been
preferring fixed length encodings.

The proper way to solve this for the OP in a cross-platform
way is to replace writeln with something that detects whether
the output is a terminal and then converts the string to
something that will display correctly, which can be a simple
wchar[] on Windows or the use of iconv on Linux. Ketmar is
currently using iconv to print D strings on his Linux setup
for Cyrillic KOI8-U and I think I used it in some code as
well. It is not perfect, but will make most printable strings
readable. ICU I think is the only library that gets it 100%
correct with regards to normalization and offering you
different error concealment methods.

-- 
Marco



Re: [Poll] On Linux, what should we commonly use as sub-directory name for D?

2015-11-16 Thread Marco Leise via Digitalmars-d
Am Mon, 16 Nov 2015 08:49:57 +0100
schrieb Iain Buclaw via Digitalmars-d
:

> There should be ways to catch ABI changes in the build or test process.
> Maybe I'm misremembering something though. :-)
> 
> There should be a degree of ABI compatibility between releases for plain
> static functions - assuming that we add no more properties to the
> language.  That leaves what breaks the most in moving to a new version are
> template instantiations, no?

Frankly I have no idea what level of testing is in place. :D
To be practical for package maintainers, we would need some
higher level common D ABI versioning scheme that includes
Phobos. Then I could start grouping libraries by architecture
and this ABI version instead of by compiler/FE version. But
let's take baby steps here and first get DMD to use DWARF EH.
Once we can dynamically link DMD<->GDC, DMD<->LDC, we can
think about a stable Dlang ABI in terms of EH, function
mangling, Phobos etc.
I can easily see Phobos being split out of the DMD release
cycle by then, with both loosely depending on each other.

-- 
Marco



Re: Our template emission strategy is broken

2015-11-16 Thread Marco Leise via Digitalmars-d
Am Wed, 11 Nov 2015 14:23:15 +
schrieb Martin Nowak :

> Think of that for a moment, no package manager allows you to
> have cycles in your dependencies.

There are package managers that allow packages to mutually
depend on each other. Order is established by qualifying the
dependency similar to how you declare an owning and a weak
reference in a parent/child pointer relation.
E.g. You must install DMD before druntime/Phobos, but DMD is
only usable after druntime/Phobos has also been installed.

-- 
Marco



Re: Tonight: Introduction to D at Codeaholics (HK)

2015-11-16 Thread Marco Leise via Digitalmars-d-announce
Am Thu, 12 Nov 2015 01:30:06 +0800
schrieb Lionello Lunesu :

> * Why doesn't D explicitly specify the exceptions that can be thrown? 
> (To which I answered that I never saw the point in Java and only found 
> it tedious. This did not convince the person.)

Maybe that's your point of view or maybe you were just
undecided. When you write a library it is sometimes better to
be explicit about your interface and that includes any
exceptions. This not only enables users of the library to
selectively catch exceptions they can handle at layer X, but
facilitates static checks:

* Are any exceptions missing from DDoc/@throws that are
  thrown in the code? (Potential for auto-generating the DDoc.)
* A function is nothrow, but the user only catches, e.g.
  UtfException explicitly. Is that the only exception type
  that could occur?

There were some more nice points that I don't remember from
when I failed at implementing this many months ago. The
questioner has my sympathy in any case, but it's certainly not
a priority.

The way I wanted to implement it was by making attribute-less
functions map to @throws(Exception), which implicitly makes
the feature opt-in: It is always correct to state a super set
of the actual thrown exceptions in an API to have room for
extensions. Thrown exceptions would be collected much like
nothrow is deduced right now, but as a list with respect to the
hierarchical nature of Exceptions.

-- 
Marco



Re: [Poll] On Linux, what should we commonly use as sub-directory name for D?

2015-11-15 Thread Marco Leise via Digitalmars-d
Am Wed, 11 Nov 2015 17:24:18 +
schrieb Chris Piker :

> On Tuesday, 12 November 2013 at 19:50:32 UTC, Marco Leise wrote:
> > I've seen people use both 'd' and 'dlang' now, so I created a 
> > poll. Everyone assembling Linux packages is then free use the 
> > results to create a similar experience on all distributions.
> >
> > http://www.easypolls.net/poll.html?p=52828149e4b06cfb69b97527
> 
> Has this issued been settled?  We are using the dmd compiler on 
> CentOS 6.  I have a custom plplot.d file that I want to put 
> somewhere under our shared /usr/local for our programmers to use. 
>   If I want to follow some sort of precedent then it looks like my 
> choices are:
> 
>/usr/local/include/dmd   (similar to dmd rpm)
>/usr/local/include
>/usr/local/include/d
>/usr/local/include/dlang
>/usr/local/src
>/usr/local/src/d
>/usr/local/src/dlang
> 
> I personally prefer:
> 
>/usr/local/src/d
> 
> but would like to go with some sort of convention if one is 
> starting to gel.

By secret ballot in this poll, options containing "d" lost
1:2 to "dlang". As far as I know this was the only real effort
to unify directory names across Linux distributions and at
least two package maintainers (Dicebot for Arch Linux and me
on Gentoo) have used the input from the following discussion
to decide on an import path for the currently active system DMD
compiler:

  /usr/include/dlang/dmd

This was after the official DMD package build scripts
for .rpm/.deb had been created and "D" was renamed to
"Dlang", so the discussion had no influence on their layout.

As far as /usr/local/src goes, I've never seen anything but OS
sources (i.e. Linux kernel) being installed to /usr/src
and /usr/local is conventionally used like /usr, but for
additional local files that are not covered by the generic OS
installation:
https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-filesystem-usr-local.html

As for installing additional Dlang library imports, I advise
using one subdirectory in /usr/include/dlang for each package
and "parallel version". What I mean by that is that many real
packages change their API over the years, sometimes so
radically that some packages stick with previous versions. One
such example is Gtk, with GIMP still using Gtk 2 and Gnome
being linked against Gtk 3. In such cases you want to be able
to include either gtk2 or gtk3. Now if you look at GtkD you see
this reflected in different import paths, namely gtkd-2 and
gtkd-3. Both contain common module paths like gtkc/cairo.d
that would overwrite each other if not put under different
parent directories. And once you have

  /usr/include/dlang/dmd
  /usr/include/dlang/ldc
  /usr/include/dlang/gtkd-2
  /usr/include/dlang/gtkd-3

it makes sense to handle other libraries the same way.

In your specific case you may be able to get away with a
single flat import path for everything by allowing only one
version of dmd, GtkD etc., but other distributions will not be
able to follow this scheme. I hope this helps you in your
decision making.

-- 
Marco



Re: [Poll] On Linux, what should we commonly use as sub-directory name for D?

2015-11-15 Thread Marco Leise via Digitalmars-d
Am Thu, 12 Nov 2015 10:26:54 +
schrieb Marc Schütz :

> I'm interested in this topic, too. Has there been a conclusion as 
> to distributions should install includes and libraries of 
> different compilers (and versions), which sonames to use, etc?

The shared library topic was too hot back then, and DMD, I
think, is still the only compiler that does shared linking out
of the box in a default installation. If you are asking only
about Phobos, I would just use the soname provided by upstream:
  libphobos2.so.0.69.1
  libphobos2.so.0.69   -> libphobos2.so.0.69.1
  libphobos2.so-> libphobos2.so.0.69.1
(The convention for the first symlink may differ per
distribution). These files should be installed into your
system's shared library lookup path, so that dynamically linked
executables can be run on other systems. For example you could
compile a program on your machine and ssh copy it to a remote
server running in a low memory VM where compilation would be
impossible. If that use case works, you are doing it right. ;)

As for additional libraries, you are in trouble. Not only is
it common to have 32-bit+64-bit libraries, but we also have
ABI differences between three popular D compilers and
potentially each new release of Dlang. So if you ask where to
put libraries, I'd say somewhere where they are separated by
architecture, compiler, and compiler version. While this scheme
works for Gentoo, I had to bend the rules quite liberally to
make it happen and these packages have no chance of getting
into the main tree.

The alternative is to always use one compiler (e.g. DMD) and
update all libraries in sync with a DMD update. Then you can
install these libraries to the standard path as well. This is
practically what is done in the C/C++ world. ABI breakages are
few and far between there, but they do happen, like just now
with the change to use a namespace prefix for the mangling of
"string".

(For includes see previous post and discussion.)

-- 
Marco



Re: Benchmark memchar (with GCC builtins)

2015-10-30 Thread Marco Leise via Digitalmars-d
Am Fri, 30 Oct 2015 22:31:54 +
schrieb Iakh :

> On Friday, 30 October 2015 at 21:33:25 UTC, Andrei Alexandrescu 
> wrote:
> > Could you please take a look at GCC's generated code and 
> > implementation of memchr? -- Andrei
> 
> glibc uses something like pseudo-SIMD with ordinal x86 
> instructions (XOR magic, etc).
> Deap comarison I left for next time :)

Instead of providing a library implementation, these functions
are often best left to compiler intrinsics, which can produce
one of several instruction sequences based on the arguments
and the target CPU. In particular, compilers often know a
nominally runtime length argument at compile time through
constant folding and can choose the best matching SIMD
instruction set when compiling for a specific CPU. In regular
D or C code you don't have access to this information.

glibc might use that particular implementation in its source,
but GCC will override these generic functions with builtin
intrinsics unless you disable them via a command-line switch
(-fno-builtin). The same goes for e.g. ctlz (count leading
zeroes): it will be emulated somehow when compiled for older
CPUs and use fast native instructions on more recent ones.
Yes, sometimes the intrinsics are lacking, but in general I
trust the compiler devs more than myself, especially since
they likely tested it on several target architectures while I
mostly only do one.

Just ask yourself how you would select the optimal memchr
function matching the -march= (gdc) or -mcpu (ldc) flags at
compile-time.
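To illustrate the point about compile-time known lengths: a template can branch on a length that is a compile-time constant, which is the same kind of decision a compiler intrinsic makes internally for memchr. This is a made-up sketch (`indexOfByte` is not a real intrinsic), not how GCC actually implements it.

```d
// When the buffer length n is a compile-time constant, `static if`
// can select a specialized code path per size class.
ptrdiff_t indexOfByte(size_t n)(ref const(ubyte)[n] buf, ubyte needle)
{
    static if (n <= 16)
    {
        // Tiny buffers: a simple loop the optimizer can fully unroll.
        foreach (i, b; buf)
            if (b == needle)
                return i;
        return -1;
    }
    else
    {
        // A real implementation would branch to a SIMD routine here;
        // the scalar loop stands in for it in this sketch.
        foreach (i, b; buf)
            if (b == needle)
                return i;
        return -1;
    }
}

void main()
{
    const(ubyte)[4] small = [1, 2, 3, 4];
    assert(indexOfByte(small, 3) == 2);
    assert(indexOfByte(small, 9) == -1);
}
```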

-- 
Marco



Re: Playing SIMD

2015-10-28 Thread Marco Leise via Digitalmars-d
Am Mon, 26 Oct 2015 14:04:18 +0100
schrieb Iain Buclaw via Digitalmars-d
:

> > Yeah but PMOVMSKB not implemented in core.simd.
> >
> 
> Don't use core.simd, push for getting std.simd in, then leverage the
> generics exposed through that module.
 
Yeah, but PMOVMSKB is not implemented in std.simd.

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-28 Thread Marco Leise via Digitalmars-d-announce
Am Tue, 27 Oct 2015 14:00:06 +
schrieb Martin Nowak :

> On Tuesday, 27 October 2015 at 13:14:36 UTC, wobbles wrote:
> >> How can `coordinates` member be known at compile-time when the 
> >> input argument is a run-time string?
> >
> > I suspect through the opDispatch operator overload.
> >
> > http://dlang.org/operatoroverloading.html#dispatch
> 
> Yikes, this is such an anti-pattern.
> https://github.com/rejectedsoftware/vibe.d/issues/634

In my defense I can say that the JSON parser is not a range
and thus less likely to be used in UFCS chains. The opDispatch
call can be replaced with .singleKey!"coordinates"().
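For readers unfamiliar with the pattern under discussion, here is a minimal self-contained sketch (unrelated to fast.json's internals) of how opDispatch turns a member access like `p.coordinates` into a key lookup, next to the explicit alternative:

```d
struct Parser
{
    // Explicit form: the key is spelled out as a template argument.
    string singleKey(string key)() { return "value of " ~ key; }

    // opDispatch forwards any unknown member access to singleKey.
    // Convenient (p.coordinates), but typos compile silently into
    // lookups and the names can collide in UFCS chains -- the
    // "anti-pattern" objection raised above.
    string opDispatch(string key)() { return singleKey!key(); }
}

void main()
{
    Parser p;
    assert(p.singleKey!"coordinates"() == "value of coordinates");
    assert(p.coordinates == "value of coordinates"); // via opDispatch
}
```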

-- 
Marco



Re: DMD is slow for matrix maths?

2015-10-28 Thread Marco Leise via Digitalmars-d
Am Mon, 26 Oct 2015 11:37:16 +
schrieb Etienne Cimon :

> On Monday, 26 October 2015 at 04:48:09 UTC, H. S. Teoh wrote:
> > On Mon, Oct 26, 2015 at 02:37:16AM +, Etienne Cimon via 
> > Digitalmars-d wrote:
> >
> > If you must use DMD, I recommend filing an enhancement request 
> > and bothering Walter about it.
> >
> >
> > T
> 
> I'd really like the performance benefits to be available to DMD 
> users as well. I think I'll have to write it all with inline 
> assembler just to be sure...

Remember though, that classic inline asm is not transparent to
the compiler and doesn't inline.

-- 
Marco



Re: dmd.conf no longer working?

2015-10-22 Thread Marco Leise via Digitalmars-d
Am Thu, 22 Oct 2015 21:50:33 +0200
schrieb Jacob Carlborg :

> On 2015-10-22 19:43, Andrei Alexandrescu wrote:
> > Hi folks, I'm having trouble setting up dmd on a fresh system. The short
> > of it is nothing in dmd.conf seems to be taken into account.
> > Furthermore, sections such as [Environment] are rejected with the error
> > message:
> >
> > Error: Use 'NAME=value' syntax, not '[ENVIRONMENT]'
> >
> > What happened?
> 
> A couple of things:
> 
> * I recommend compiling with -v, it will output the path to dmd.conf
> 
> * It should be Environment32 and/or Environment64 for Linux. For OS X 
> it's Environment. It should _not_ be ENVIRONMENT
> 
> * You might want to tell us on which platform and how you installed the 
> compiler

You can also combine common options under [Environment] like
this:

[Environment]
DFLAGS=-I/opt/dmd-2.069/import -L--export-dynamic -defaultlib=phobos2 -verrors=0
[Environment32]
DFLAGS=%DFLAGS% -L-L/opt/dmd-2.069/lib32 -L-rpath -L/opt/dmd-2.069/lib32
[Environment64]
DFLAGS=%DFLAGS% -L-L/opt/dmd-2.069/lib64 -L-rpath -L/opt/dmd-2.069/lib64

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-22 Thread Marco Leise via Digitalmars-d-announce
Am Thu, 22 Oct 2015 06:10:56 -0700
schrieb Walter Bright :

> On 10/21/2015 3:40 PM, Laeeth Isharc wrote:
> > Have you thought about writing up your experience with writing fast json?  
> > A bit
> > like Walter's Dr Dobbs's article on wielding a profiler to speed up dmd.
> 
> Yes, Marco, please. This would make an awesome article, and we need articles 
> like that!
> 
> You've already got this:
> 
>  https://github.com/kostya/benchmarks/pull/46#issuecomment-147932489
> 
> so most of it is already written.

There is at least one hurdle. I don't have a place to publish
articles, no personal blog or site I contribute articles to,
and I don't feel like creating a one-shot one right now. :)

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-21 Thread Marco Leise via Digitalmars-d-announce
Am Wed, 21 Oct 2015 04:17:16 +
schrieb Laeeth Isharc :

> Very impressive.
> 
> Is this not quite interesting ?  Such a basic web back end 
> operation, and yet it's a very different picture from those who 
> say that one is I/O or network bound.  I already have JSON files 
> of a couple of gig, and they're only going to be bigger over 
> time, and this is a more generally interesting question.
> 
> Seems like you now get 2.1 gigbytes/sec sequential read from a 
> cheap consumer SSD today...

You have this huge amount of Reddit API JSON, right?
I wonder if your processing could benefit from the fast
skipping routines or even reading it as "trusted JSON".

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-21 Thread Marco Leise via Digitalmars-d-announce
Am Wed, 21 Oct 2015 17:00:39 +
schrieb Suliman :

> >> > Nice! I see you are using bitmasking trickery in multiple 
> >> > places. stdx.data.json is mostly just the plain lexing 
> >> > algorithm, with the exception of whitespace skipping. It was 
> >> > already very encouraging to get those benchmark numbers that 
> >> > way. Good to see that it pays off to go further.
> >>
> >> Is there any chance that new json parser can be include in 
> >> next versions of vibed? And what need to including its to 
> >> Phobos?
> >
> > It's already available on code.dlang.org:
> > http://code.dlang.org/packages/std_data_json
> 
> 
> Jonatan, I mean https://github.com/mleise/fast :)

That's nice, but it has a different license, and I don't think
Phobos devs would be happy to see all the inline assembly I
used, the duplicated functionality like number parsing and
UTF-8 validation, or the missing range support.

-- 
Marco



Re: Implicit conversion rules

2015-10-21 Thread Marco Leise via Digitalmars-d-learn
Am Wed, 21 Oct 2015 12:49:35 -0700
schrieb Ali Çehreli :

> On 10/21/2015 12:37 PM, Sigg wrote:
> 
>  > cause at least few more "fun" side effects.
> 
> One of those side effects would be function calls binding silently to 
> another overload:
> 
> void foo(bool){/* ... */}
> void foo(int) {/* ... */}
> 
>auto a = 0;  // If the type were deduced by the value,
>foo(a);  // then this would be a call to foo(bool)...
> // until someone changed the value to 2. :)
> 
> Ali

God forbid anyone implement such nonsense in D!
The last thing we need is overload resolution we can no
longer rely on. It would be as if making 'a' const changed
the overload resolution even though none of the overloads
deal with constness...

import std.format;
import std.stdio;

string foo(bool b) { return format("That's a boolean %s!", b); }
string foo(uint u) { return format("Thats an integral %s!", u); }

void main()
{
    int a = 2497420, b = 2497419;
    const int c = 2497420, d = 2497419;
    writeln(foo(a - b));
    writeln(foo(c - d));
    writeln("WAT?!");
}

-- 
Marco



core.attribute - Remove friction for compiler attributes

2015-10-20 Thread Marco Leise via Digitalmars-d
For a while now GDC and LDC have supported a variety of their
backends' attributes, like forced inlining or compiling a
specific function with SSE4 in an otherwise generic x64 build.

I think we should unify those into a common core.attribute,
either aliasing or replacing the vendor specific symbols.
They don't need to be functional immediately. There are two
things that I see need to be discussed.

1. Syntax

  All attributes are currently set via @attribute(…). I wonder
  if this is just easier to recognize for the compiler than
  multiple attribute names or if there are other benefits.
  Personally for the most part I'd prefer shorter versions,
  e.g. @forceinline instead of @attribute("forceinline").
  We can also achieve this by keeping the vendor specific
  modules/symbols and creating aliases, so it is mostly about
  what we prefer.

2. Semantics

  Unfortunately the compiler internals are all quite
  different. When it comes to always inlining a function for
  example, DMD won't even try unless "-inline" is given on the
  command-line. GCC will fail compilation if it cannot inline
  and I think LLVM only warns you in such cases.
  My somewhat painful idea is to split these attributes up
  into something like @forceinline (produces warning or error)
  and @inline (silent or warning at most) and map them to the
  closest the respective compilers can offer - or else make
  them a noop.


https://github.com/mleise/fast/blob/master/source/fast/helpers.d#L99
shows how we alias existing attributes or make them noops:

  version (DigitalMars)
  {
enum noinline;
enum forceinline;
enum sse4;
  }
  else version (GNU)
  {
import gcc.attribute;
enum noinline= gcc.attribute.attribute("noinline");
enum forceinline = gcc.attribute.attribute("forceinline");
enum sse4= gcc.attribute.attribute("target", "sse4");
  }
  else version (LDC)
  {
import ldc.attribute;
enum noinline= ldc.attribute.attribute("noinline");
enum forceinline = ldc.attribute.attribute("alwaysinline");
enum sse4;
  }

(Note that GCC's target attribute can actually take a list of
features.) If you target i586 and start using SSE, the
compiler will tell you that these instructions do not exist
on that architecture. So you write a generic function and an
additional SSE function with @attribute("target", "sse") and
won't run into "illegal instruction" errors at runtime on older
CPUs.
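User code consuming such unified attributes could look like the sketch below. With a compiler that has no backing support, the attribute degrades to a no-op enum UDA, so the code still compiles everywhere; `forceinline` here is the placeholder from the snippet above, not an accepted language feature.

```d
// No-op placeholder as it would be defined for DMD; GDC/LDC would
// alias this to their respective backend attributes instead.
enum forceinline;

// The call site and declaration stay identical across compilers.
@forceinline int add(int a, int b) { return a + b; }

void main()
{
    assert(add(2, 3) == 5);
}
```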

-- 
Marco



Re: KeepTerminator

2015-10-20 Thread Marco Leise via Digitalmars-d
Am Tue, 20 Oct 2015 13:08:07 +0530
schrieb Shriramana Sharma :

> Writing stdin.byLine(KeepTerminator.yes) is quite awkward. Why this long 
> name and not something shorter like KeepEOL?

Because enums work that way in D, and probably because
someone found that we had too many too-short names in Phobos
already.

> BTW on Python the default is to *not* strip the newlines. Is there a reason 
> the opposite is true here?

I assume, that's because typically you are not interested in
the control characters between the lines. They mostly just get
in the way and the last line may or may not have a line-break.
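The same KeepTerminator flag also appears in std.string.lineSplitter, which makes the two behaviors easy to compare in isolation:

```d
import std.array : array;
import std.string : KeepTerminator, lineSplitter;

void main()
{
    auto text = "one\ntwo\n";

    // Default: the line terminators are stripped.
    assert(text.lineSplitter.array == ["one", "two"]);

    // Opt-in: keep the "\n" at the end of each line.
    assert(text.lineSplitter!(KeepTerminator.yes).array
           == ["one\n", "two\n"]);
}
```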

-- 
Marco



Re: Beta D 2.069.0-b2

2015-10-20 Thread Marco Leise via Digitalmars-d-announce
Am Tue, 20 Oct 2015 19:26:13 +0200
schrieb Martin Nowak :

> On 10/17/2015 09:05 PM, Marco Leise wrote:
> > Oh wait, false alert. That was a relic from older days. My
> > build script placed a dummy dmd.conf there.
> > 
> > I do seem to get problems with ldc2-0.16.0:
> 
> Are you using something befor 0.16.0-beta2, b/c I thought the problem
> was resolved.
> https://github.com/D-Programming-Language/dmd/pull/5025#issuecomment-142143727

Indeed I should have checked that. I'm using 0.16.0_alpha4.
Alright then. Everything works as designed now. :)

-- 
Marco



Re: Phobos still being statically linked in?

2015-10-18 Thread Marco Leise via Digitalmars-d
For the Gentoo Linux DMD package I made dynamic linking the
default. It's not just Phobos but other libraries as well,
like GtkD and what else you link into your executable.

A simple GUI converting text in the clipboard on button press
is at around 553 KiB now. With static linking it is 6 MiB.

-- 
Marco



Re: [sdc] linker problems when building programs with sdc

2015-10-18 Thread Marco Leise via Digitalmars-d-learn
Am Sun, 18 Oct 2015 11:35:16 +0200
schrieb Joseph Rushton Wakeling via Digitalmars-d-learn
:

> Hello all,
> 
> I recently decided to have another play with sdc to see how it's doing.  
> Since 
> my dmd is installed in /opt/dmd/ I had to do a couple of tricks to get sdc 
> itself to build:
> 
> (i) put a dlang.conf in /etc/ld.so.conf.d/ containing the /opt/dmd/lib64 path;
> 
> (ii) call 'make LD_PATH=/opt/dmd/lib64' when building sdc
> 
> sdc itself then builds successfully, and I wind up with a bin/ directory 
> containing sdc and sdc.conf (which contains includePath and libPath options) 
> and 
> a lib/ directory containing libd.a, libd-llvm.a, libphobos.a and libsdrt.a.
> 
> However, when I try to build any program, even a simple hello-world, I get a 
> linker error:
> 
> $ ./sdc hello.d
> hello.o: In function `_D5hello4mainFMZv':
> hello.d:(.text+0x1c): undefined reference to `_D3std5stdio7writelnFMAyaZv'
> collect2: error: ld returned 1 exit status
> 
> To solve this, I tried adding in a library-path flag, but this simply 
> resulted 
> in an exception being thrown by sdc's options parsing:
> 
> $ ./sdc -L$MYHOMEDIR/code/D/sdc/lib hello.d
> std.getopt.GetOptException@/opt/dmd/bin/../import/std/getopt.d(604): 
> Unrecognized option -L$MYHOMEDIR/code/D/sdc/lib
> 
> [cut great big backtrace]
> 
> Can anyone advise what's missing in my setup?  I did also try adding 
> $MYHOMEDIR/code/D/sdc/lib to the /etc/ld.so.conf.d/dlang.conf file, and 
> re-running ldconfig, but that didn't seem to make any difference.
> 
> Thanks & best wishes,
> 
>  -- Joe

Maybe you should have started with `return 42;`? :D
writeln is not lightweight in terms of exercised compiler
features. I didn't even know that it compiles yet; last time
I heard, it was not usable.

-- 
Marco



Re: Beta D 2.069.0-b2

2015-10-17 Thread Marco Leise via Digitalmars-d-announce
Oh wait, false alert. That was a relic from older days. My
build script placed a dummy dmd.conf there.

I do seem to get problems with ldc2-0.16.0:

  make -C druntime -f posix.mak MODEL=32 
  ../dmd/src/dmd -conf= -c -o- -Isrc -Iimport -Hfimport/core/sync/barrier.di src/core/sync/barrier.d
  core.exception.AssertError@expression.d(4369): Assertion failure

That is this line of code:
https://github.com/D-Programming-Language/dmd/blob/v2.069.0-b2/src/expression.d#L4369

Stack trace (with file + line numbers now, hey!):
#0  StringExp::compare(RootObject*) (this=0xb66e30, obj=0xb65c80) at expression.d:4341
#1  0x004fb6ed in StringExp::equals(RootObject*) (this=0xb66e30, o=0xb65c80) at expression.d:4175
#2  0x004c4fe9 in match(RootObject*, RootObject*) (o1=0xb66e30, o2=0xb65c80) at dtemplate.d:246
#3  0x004c51c6 in arrayObjectMatch(Array*, Array*) (oa1=0x764aca98, oa2=0x764ac898) at dtemplate.d:290
#4  0x004cddd7 in TemplateInstance::compare(RootObject*) (this=0x764aca00, o=0x764ac800) at dtemplate.d:6232
#5  0x004cdaf8 in TemplateDeclaration::findExistingInstance(TemplateInstance*, Array*) (this=0x764ac600, tithis=0x764aca00, fargs=0x0) at dtemplate.d:2039
#6  0x004d2f24 in TemplateInstance::semantic(Scope*, Array*) (this=0x764aca00, sc=0x765dfc00, fargs=0x0) at dtemplate.d:5583
#7  0x00406877 in TemplateInstance::semantic(Scope*) (this=0x764aca00, sc=0x765dfc00) at dtemplate.d:5967
#8  0x0057a03a in TypeInstance::resolve(Loc, Scope*, Expression**, Type**, Dsymbol**, bool) (this=0x764ae100, loc=..., sc=0x765dfc00, pe=0x7fffcec0, pt=0x7fffcec8, ps=0x7fffceb8, intypeid=false) at mtype.d:7412
#9  0x0057a327 in TypeInstance::toDsymbol(Scope*) (this=0x764ae100, sc=0x765dfc00) at mtype.d:7459
#10 0x0043c5d6 in AliasDeclaration::semantic(Scope*) (this=0x764ae200, sc=0x765dfc00) at .:598
#11 0x004897f9 in Module::semantic() (this=0x76378400) at dmodule.d:976
#12 0x00488e0f in Import::semantic(Scope*) (this=0x76a82800, sc=0x76aa9c00) at dimport.d:258
#13 0x0042759a in AttribDeclaration::semantic(Scope*) (this=0x76a82900, sc=0x76aa9600) at attrib.d:168
#14 0x004897f9 in Module::semantic() (this=0x76a7fe00) at dmodule.d:976
#15 0x00488e0f in Import::semantic(Scope*) (this=0x77eca100, sc=0x77ed2200) at dimport.d:258
#16 0x0042759a in AttribDeclaration::semantic(Scope*) (this=0x77eca200, sc=0x77ecee00) at attrib.d:168
#17 0x004897f9 in Module::semantic() (this=0x77ec9000) at dmodule.d:976
#18 0x00567de5 in tryMain(unsigned long, char const**) (argc=8, argv=0x7fffe978) at mars.d:1485
#19 0x0056a567 in D main () at mars.d:1695

`sz` is 0, `string` and `len` seem to be ok.

-- 
Marco



Re: Beta D 2.069.0-b2

2015-10-17 Thread Marco Leise via Digitalmars-d-announce
Am Wed, 14 Oct 2015 15:52:57 +0200
schrieb Martin Nowak :

> Second beta for the 2.069.0 release.
> 
> http://dlang.org/download.html#dmd_beta
> http://dlang.org/changelog/2.069.0.html
> 
> Please report any bugs at https://issues.dlang.org
> 
> -Martin

When I use a specific host compiler, it still picks up the
dmd.conf provided in the package and doesn't find object.d.
Should I manually delete dmd.conf before building?

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-17 Thread Marco Leise via Digitalmars-d-announce
Am Sat, 17 Oct 2015 16:27:06 +
schrieb Sean Kelly :

> On Saturday, 17 October 2015 at 16:14:01 UTC, Andrei Alexandrescu 
> wrote:
> > On 10/17/15 6:43 PM, Sean Kelly wrote:
> >> If this is the benchmark I'm remembering, the bulk of the time 
> >> is spent
> >> parsing the floating point numbers. So it isn't a test of JSON 
> >> parsing
> >> in general so much as the speed of scanf.
> >
> > In many cases the use of scanf can be replaced with drastically 
> > faster methods, as I discuss in my talks on optimization 
> > (including Brasov recently). I hope they'll release the videos 
> > soon. -- Andrei
> 
> Oh absolutely. My issue with the benchmark is just that it claims 
> to be a JSON parser benchmark but the bulk of CPU time is 
> actually spent parsing floats. I'm on my phone though so perhaps 
> this is a different benchmark--I can't easily check. The one I 
> recall came up a year or so ago and was discussed on D.general.

1/4 to 1/3 of the time is spent parsing numbers in highly
optimized code. In a profiler the number parsing shows up on
top, but the benchmark also exercises the structural parsing
a lot. It is not a very broad benchmark
though, lacking serialization, UTF-8 decoding, validation of
results etc. I believe the author didn't realize how over time
it became the go-to performance test. The author of RapidJSON
has a very in-depth benchmark suite, but it would be a bit of
work to get something non-C++ integrated:
https://github.com/miloyip/nativejson-benchmark
It includes conformance tests as well.

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-16 Thread Marco Leise via Digitalmars-d-announce
Am Thu, 15 Oct 2015 18:46:12 +0200
schrieb Sönke Ludwig :

> Am 14.10.2015 um 09:01 schrieb Marco Leise:
> > […]
> > stdx.data.json:  2.76s,  207.1Mb  (LDC)
> >
> > Yep, that's right. stdx.data.json's pull parser finally beats
> > the dynamic languages with native efficiency. (I used the
> > default options here that provide you with an Exception and
> > line number on errors.)
> 
>  From when are the numbers for stdx.data.json? The latest results for 
> the pull parser that I know of were faster than RapidJson:
> http://forum.dlang.org/post/wlczkjcawyteowjbb...@forum.dlang.org

You know, I'm not surprised at the "D new lazy Ldc" result,
which is in the ballpark of what I measured without
exceptions & line-numbers, but the Rapid C++ result seems way
off compared to kostya's listing. Or maybe that Core i7 doesn't
work well with RapidJSON.

I used your fork of the benchmark, made some modifications
like adding taggedalgebraic and what else was needed to make
it compile with vanilla ldc2 0.16.0. Then I removed the flags
that disable exceptions and line numbers. Compilation options
are the same as for the existing gdc and ldc2 entries. I did
not add " -partial-inliner -boundscheck=off -singleobj ".

> Judging by the relative numbers, your "fast" result is still a bit 
> faster, but if that doesn't validate, it's hard to make an objective 
> comparison.

Every value that is read (as opposed to skipped) is validated
according to RFC 7159. That includes UTF-8 validation. Full
validation (i.e. readJSONFile!validateAll(…);) may add up to
14% overhead here.

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-16 Thread Marco Leise via Digitalmars-d-announce
Am Thu, 15 Oct 2015 18:17:07 +0200
schrieb Sönke Ludwig :

> Am 15.10.2015 um 13:06 schrieb Rory McGuire via Digitalmars-d-announce:
> > In browser JSON.serialize is the usual way to serialize JSON values.
> > The problem is that on D side if one does deserialization of an object
> > or struct. If the types inside the JSON don't match exactly then vibe
> > freaks out.
> 
> For float and double fields, the serialization code should actually 
> accept both, floating point and integer numbers:
> 
> https://github.com/rejectedsoftware/vibe.d/blob/2fffd94d8516cd6f81c75d45a54c655626d36c6b/source/vibe/data/json.d#L1603
> https://github.com/rejectedsoftware/vibe.d/blob/2fffd94d8516cd6f81c75d45a54c655626d36c6b/source/vibe/data/json.d#L1804
> 
> Do you have a test case for your error?
 
Well, it is not an error. Rory originally wrote about
conversions between "1" and 1 happening on the browser side.
That would mean adding a quirks mode to any well-behaved JSON
parser, in this case: "read numbers as strings". Hence I was
asking whether the data on the client could be fixed, e.g. the
JSON number turned into a string before serialization.

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-16 Thread Marco Leise via Digitalmars-d-announce
Am Fri, 16 Oct 2015 11:09:37 +
schrieb Per Nordlöw :

> On Wednesday, 14 October 2015 at 07:01:49 UTC, Marco Leise wrote:
> > https://github.com/kostya/benchmarks#json
> 
> Does fast.json use any non-standard memory allocation patterns or 
> plain simple GC-usage?

Plain GC.  I found no way in D to express something as
"borrowed", but the GC removes the ownership question from the
picture. That said, the only thing I allocate in this
benchmark is the dynamic array of coordinates (1_000_000 * 3 *
double.sizeof), which can be replaced by lazily iterating over
the array to remove all allocations.

Using these lazy techniques for objects too and
calling .borrow() instead of .read!string() (which .idups)
should allow GC free usage. (Well, except for the one in
parseJSON, where I append 16 zero bytes to the JSON string to
make it SSE safe.)

-- 
Marco



Fastest JSON parser in the world is a D project

2015-10-14 Thread Marco Leise via Digitalmars-d-announce
fast.json usage:

UTF-8 and JSON validation of used portions by default:

auto json = parseJSONFile("data.json");

Known good file input:

auto json = parseTrustedJSONFile("data.json");
auto json = parseTrustedJSON(`{"x":123}`);

Work with a single key from an object:

json.singleKey!"someKey"
json.someKey

Iteration:

foreach (key; json.byKey)  // object by key
foreach (idx; json)// array by index

Remap member names:

@JsonRemap(["clazz", "class"])
struct S { string clazz; }

@JsonRemap(["clazz", "class"])
enum E { clazz }

Example:

double x = 0, y = 0, z = 0;
auto json = parseTrustedJSON(`{ "coordinates": [ { "x": 1, "y": 2, "z": 3 }, … ] }`);

foreach (idx; json.coordinates)
{
// Provide one function for each key you are interested in
json.keySwitch!("x", "y", "z")(
{ x += json.read!double; },
{ y += json.read!double; },
{ z += json.read!double; }
);
}

Features:
  - Loads double values in compliance with IEEE round-to-nearest
(no precision loss in serialization->deserialization round trips)
  - UTF-8 validation of non-string input (file, ubyte[])
  - Currently fastest JSON parser money can buy
  - Reads strings, enums, integral types, double, bool, POD
structs consisting of those and pointers to such structs

Shortcomings:
  - Rejects numbers with exponents of huge magnitude (>=10^28)
  - Only works on Posix x86/amd64 systems
  - No write capabilities
  - Data size limited by available contiguous virtual memory

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-14 Thread Marco Leise via Digitalmars-d-announce
Am Wed, 14 Oct 2015 07:55:18 +
schrieb Idan Arye :

> On Wednesday, 14 October 2015 at 07:35:49 UTC, Marco Leise wrote:
> > auto json = parseTrustedJSON(`{ "coordinates": [ { "x": 1, 
> > "y": 2, "z": 3 }, … ] }`);
> 
> I assume parseTrustedJSON is not validating? Did you use it in 
> the benchmark? And were the competitors non-validating as well?

That is correct. For the benchmark parseJSONFile was used
though, which validates UTF-8 and JSON in the used portions.
That probably renders your third question superfluous. I
wouldn't know anyway, but am inclined to think they all
validate the entire JSON, while some may skip UTF-8
validation, which is a low-cost operation on this ASCII file
anyway.

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-14 Thread Marco Leise via Digitalmars-d-announce
Am Wed, 14 Oct 2015 10:22:37 +0200
schrieb Rory McGuire via Digitalmars-d-announce
:

> Does this version handle real world JSON?
> 
> I've keep getting problems with vibe and JSON because web browsers will
> automatically make a "1" into a 1 which then causes exceptions in vibe.
> 
> Does yours do lossless conversions automatically? 

No, I don't read numbers as strings. Could the client
JavaScript be fixed? I fail to see why the conversion would
happen automatically when the code could explicitly check for
strings before doing math with the value "1". What am I
missing?

-- 
Marco



Re: Fastest JSON parser in the world is a D project

2015-10-14 Thread Marco Leise via Digitalmars-d-announce
Am Wed, 14 Oct 2015 08:19:52 +
schrieb Per Nordlöw :

> On Wednesday, 14 October 2015 at 07:01:49 UTC, Marco Leise wrote:
> > https://github.com/kostya/benchmarks#json
> 
> I can't find fast.json here. Where is it?

»»» D Gdc Fast   0.34   226.7 «««
C++ Rapid    0.79   687.1

Granted, if he had written "D fast.json" it would have been
easier to identify.

-- 
Marco



Re: Synchronized classes have no public members

2015-10-13 Thread Marco Leise via Digitalmars-d
Am Tue, 13 Oct 2015 09:36:22 +
schrieb ponce :

> On Tuesday, 13 October 2015 at 09:07:54 UTC, Chris wrote:
> > On Tuesday, 13 October 2015 at 08:55:26 UTC, Benjamin Thaut 
> > wrote:
> >>
> >> […] The entire synchronized methods give the user the feeling
> >> that he simply slaps synchronized on his class / method and
> >> then its thread safe and he doesn't have to care about threads
> >> anymore. In the real world this is far from true however. So
> >> synchronized methods and classes just give a false sense of
> >> thread safety and should rather be removed.
> >
> > Actually, I once fell foul of this wrong impression of thread 
> > safety via 'synchronized'. I found a different solution and 
> > dropped synchronized.
> 
> I also dropped synchronized and use @nogc mutexes instead. I also 
> think synchronized methods should be removed. It's also difficult 
> to explain: what is a "monitor"? when you write a synchronized { 
> } block, which monitor is taken?

Yep, I prefer to think of it as sets of variables that need
mutex protection. And these are generally not the set of
member fields of a class. When other mutexes need the same
variables, their sets must be a strict superset or subset of
each other, with the mutex with the smaller scope always being
locked first.

That's all folks. 100% safety. :)
(The catch is you need to get a fix on the variables.)
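Following the ordering rule above (the mutex guarding the smaller variable set is locked first, and the sets strictly nest), a minimal sketch with invented mutex and variable names:

```d
import core.sync.mutex;

__gshared Mutex narrowMutex; // guards {b}, the smaller set
__gshared Mutex wideMutex;   // guards {a, b}, a strict superset

void update(ref int a, ref int b)
{
    synchronized (narrowMutex)   // smaller scope locked first, as above
    {
        synchronized (wideMutex) // superset locked second
        {
            a += b;              // both variables are protected here
        }
    }
}
```

As long as every thread acquires the two mutexes in this same order, the nesting can never deadlock.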

-- 
Marco



Re: Synchronized classes have no public members

2015-10-13 Thread Marco Leise via Digitalmars-d
Am Tue, 13 Oct 2015 12:52:55 +
schrieb Dicebot :

> On Tuesday, 13 October 2015 at 12:51:14 UTC, Benjamin Thaut wrote:
> > On Tuesday, 13 October 2015 at 12:20:17 UTC, Minas Mina wrote:
> >>
> >> I agree that synchronized classes / functions that not that 
> >> useful.
> >>
> >> But synchronized statements, to me, make the intention of 
> >> locking explicit.
> >
> > Synchronized statements are fine and serve a good purpose, no 
> > need to delete them in my opinion.
> >
> >>
> >> Maybe the internal monitor could be removed (with synchronized 
> >> classes / functions as well), and allow synchronized() {} to 
> >> be called on Lock objects, that essentially locks them at the 
> >> beginning and unlocks them at the end.
> >
> > Yes, I would love that.
> 
> Isn't dedicated language feature a bit too much for a glorified 
> mutex scope guard?

Guys, sorry to break into your wishful thinking, but

   synchronized(mutex) {}

has already been working the way you want for as long as I can
remember. Yes, it takes a parameter, and yes, it calls
lock/unlock on the mutex. :)
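A minimal sketch of what already works today:

```d
import core.sync.mutex;

void main()
{
    auto mutex = new Mutex;

    synchronized (mutex) // calls mutex.lock() on entry ...
    {
        // ... critical section ...
    }                    // ... and mutex.unlock() on exit, even on exceptions
}
```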

-- 
Marco



Re: DIP74 - where is at?

2015-10-13 Thread Marco Leise via Digitalmars-d
Am Tue, 13 Oct 2015 17:59:26 +
schrieb deadalnix :

> > It he not really just saying "I have no clue if X is true, but 
> > since I don't know, I'll just assume it's false and assume you 
> > are wrong.".
> >
> > That's not very logical. Why wouldn't he just as well assume X 
> > is true?
> >
> 
> Because people with half a brain know that's not how it works.
> 
> Proves me that unicorn do not exists. I'm waiting. And remember, 
> having no evidence that they do exists doesn't mean they do not !

Ok, so here we have arrived at d.religion. Today: "Agnostic vs.
atheist: who is right?" And: "Testimony: I tried to change the
world, but God didn't give me the source code."

-- 
Marco



Re: DIP74 - where is at?

2015-10-12 Thread Marco Leise via Digitalmars-d
Am Mon, 12 Oct 2015 10:28:55 +0300
schrieb Andrei Alexandrescu :

> On 10/12/15 7:19 AM, Jonathan M Davis wrote:
> > On Monday, 12 October 2015 at 03:59:04 UTC, Marco Leise wrote:
> >> Am Sun, 11 Oct 2015 07:32:26 +
> >> schrieb deadalnix :
> >>
> >>> In C++, you need to assume things are shared, and, as such, use
> >>> thread safe inc/dec . That means compiler won't be able to optimize
> >>> them. D can do better as sharedness is part of the type system.
> >>
> >> With the little nag that `shared` itself is not fleshed out.
> >
> > Well, it really should be better fleshed out, but the reality of the
> > matter is that it actually works pretty well as it is. The problem is
> > primarily that it's a pain to use - and to a certain extent, that's
> > actually a good thing, but it does make it harder to use correctly.
> 
> Yah, I'd like to make "finalizing the language" a priority going 
> forward, and finalizing shared is a big topic. It's hard to present to 
> the world a language with fuzzy corners. -- Andrei

Wouldn't it be great if everyone took notes of the currently
perceived shortcomings of shared, so that there is a pile of
use and corner cases to look at for a redesign?
Then you could filter by valid use case and undesired usage,
and practically have everyone's input under consideration by
the time it hits the discussion forums.
My own experience is what Jonathan describes. It is annoying
in a good way. It makes you wait and think twice, and more
often than not you really did attempt an illegal access on
shared data. This went well up to the point where I had a
component that held references to shared things. There is no
such thing as a shared struct with shared fields in the type
system. Once you share the outer struct, the inner fields'
shared status is merged with the outer one's and you can't
cast it back to what it was before. So it works for plain
integral types, but aggregates, and what others mentioned
about thread-local GC heaps, are my big question marks.
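A small sketch of the aggregate problem described above, assuming current DMD behavior (struct and field names invented):

```d
struct Node
{
    int* payload;
}

void main()
{
    shared Node n;
    // The outer qualifier is merged onto the field: n.payload is typed
    // shared(int*), and the type system keeps no record of whether the
    // field was declared shared in the first place, so the original
    // declaration cannot be recovered by a cast.
    static assert(is(typeof(n.payload) == shared(int*)));
}
```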

-- 
Marco



Re: DIP74 - where is at?

2015-10-11 Thread Marco Leise via Digitalmars-d
Am Sun, 11 Oct 2015 07:32:26 +
schrieb deadalnix :

> In C++, you need to assume things are shared, and, as such, use 
> thread safe inc/dec . That means compiler won't be able to 
> optimize them. D can do better as sharedness is part of the type 
> system.
 
With the little nag that `shared` itself is not fleshed out.

-- 
Marco



Re: Go, D, and the GC

2015-10-08 Thread Marco Leise via Digitalmars-d
Am Mon, 05 Oct 2015 13:42:50 +
schrieb Adam D. Ruppe :

> On Monday, 5 October 2015 at 07:40:35 UTC, deadalnix wrote:
> > Not on the heap. There are many cases where the destructor 
> > won't run and it is allowed by spec. We should do better.
> 
> To be fair, if you new a struct in C++ and never delete it, the 
> destructor will never run there either. D's in the same boat in 
> that regard.

But the D boat reads "Garbage Collector". And besides, D now
runs dtors on new'd structs. The "many cases" may be different
ones than you imagine.

-- 
Marco



Re: Go, D, and the GC

2015-10-08 Thread Marco Leise via Digitalmars-d
Am Mon, 5 Oct 2015 12:22:59 +0300
schrieb Shachar Shemesh :

> On 05/10/15 10:01, Dmitry Olshansky wrote:
> 
> >> When D structs has a destructor that is guaranteed to run for any
> >> instance that finished construction, no matter what is the use case,
> >> then we can have that discussion.
> >>
> >
> > Supposed to be the case for structs except for any bugs.
> >
> 
> Check this one out (no instances on heap):
> import std.stdio;
> 
> struct destructible {
>  int id;
> 
>  @disable this();
>  this( int id ) {
>  writeln("Id ", id, " constructed");
>  this.id = id;
>  }
> 
>  ~this() {
>  writeln("Id ", id, " destructed");
>  }
> }
> 
> void main() {
>  struct container {
>  destructible d;
> 
>  @disable this();
>  this( int id )
>  {
>  this.d = destructible(id);
>  throw new Exception("some random exception");
>  }
>  }
> 
>  try {
>  container(2);
>  } catch( Exception ex ) {
>  writeln("Caught ", ex.msg);
>  }
> }
> 
> As of dmd 2.068.2, the output is:
> Id 2 constructed
> Caught Some random exception
> 
> Of course, if I do not disable this(), things are even worse.

Damn, this is sneaky! Now the two statements

"When D structs have a destructor, it is guaranteed to run for
any instance that finished construction."

"When construction fails, the dtor is not run on that
half-constructed object. Instead it is the ctor's
responsibility to roll back its actions."

collide for a struct that finished construction embedded in a
struct that failed construction. The two guarantees are in
direct conflict.

-- 
Marco



Re: Go, D, and the GC

2015-10-08 Thread Marco Leise via Digitalmars-d
Am Sun, 04 Oct 2015 23:28:47 +
schrieb Jonathan M Davis :

> On Sunday, 4 October 2015 at 21:41:00 UTC, rsw0x wrote:
> > If D has no intentions of aiding the GC, then the GC should 
> > just be dropped because it's basically just slapping Boehm on 
> > C++ right now.
> 
> I don't understand this attitude at all (and you're not the only 
> one to voice it lately). D has a ton to offer and so little of it 
> has anything to do with the GC. The delegate/lambda/closure 
> situation is generally saner thanks to the GC (at least as far as 
> safety goes), and arrays have some fantastic features thanks to 
> the GC, but D has _way_ more to offer than that, and most of it 
> has nothing to do with the GC. D's templates alone blow C++ 
> totally out of the water. C++ is a great language, and I love it. 
> But at this point, I only use it when I have to. D is just _so_ 
> much more pleasant to program in that I have no interest in 
> programming in C++ anymore. It's been years since I've done any 
> kind of pet project in C++.
> 
> - Jonathan M Davis

It was probably bad wording. I understood it as: D's GC now
works on a similar basis as Boehm's, i.e. conservative,
stop-the-world mark & sweep. The reason in both cases is the
nature of the host language. In fact, the German Wikipedia
says the Boehm GC was ported to druntime with minimal changes.

-- 
Marco



Re: D 2015/2016 Vision?

2015-10-08 Thread Marco Leise via Digitalmars-d
Am Tue, 6 Oct 2015 18:27:28 -0700
schrieb Walter Bright :

> On 10/4/2015 11:02 AM, bitwise wrote:
> > For example, streams.
> 
> No streams. InputRanges.

... what bitwise said ...
We had this discussion at least once and it did not change my
mind back then: Ranges and streams have overlapping use cases,
but they are not interchangeable. What they have in common is
that they are both lazy and process data from start to end.

=== Composability ===
Streams work on a series of bits coming from the sender. Class
polymorphism allows them to be wrapped into each other as
needed at any point during the transmission. This includes
applying or reversing compression, encryption and endianness
changes. New filter streams can be loaded through DLLs and
extend the hierarchy.

void main()
{
  // `is` is a D keyword, so the stream variable is named `input` here
  InputStream input;

  // Simple stream
  input = new FileStream("./test.txt");

  // File from the web, unzipped with a dynamically loaded
  // filter plugin
  input = new HttpStream("http://a.xy/test.gz");
  auto fs = cast(FilterStream) Object.factory("UnGzipStream");
  fs.wrapped = input;
  input = cast(InputStream) fs;

  // One common interface works with all combinations. No need
  // to know the stream types at compile time, no
  // combinatorial template explosion
  processTestFile(input);
}

void processTestFile(InputStream input);

=== Primitives ===
While ranges assume a fixed structure over all elements,
streamed data embeds information about upcoming data types and
sizes. To handle these, instead of getting the next "item"
with .front, streams provide a different set of primitives:
bulk reads, wrappers for basic data types (this is where
endianness filters apply) and often bit-wise reads to handle
compressed data.
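A hedged sketch of what that primitive set could look like; all method names are invented for illustration:

```d
// Minimal sketch of the stream primitives described above.
interface InputStream
{
    // Bulk read: fill as much of dst as possible, return bytes read.
    size_t readSome(ubyte[] dst);

    // Bit-wise read for compressed data (0 < count <= 64).
    ulong readBits(uint count);
}

// Typed read for basic data types; an endianness filter would hook in here.
T get(T)(InputStream s) if (__traits(isArithmetic, T))
{
    ubyte[T.sizeof] raw;
    s.readSome(raw[]);
    return *cast(T*) raw.ptr;
}
```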

=== Buffering ===
Since input ranges/streams can not generally be wound back,
one natural component of streams is a buffer, similar to
what .byLine does for text, but more flexible. A circular
buffer that can grow is a good candidate. Like .byLine it
offers slicing, or generally referencing its contents at the
current read position without copying. Several primitives are
provided to ask for a number of bytes to be buffered or
dropped. This not only obviates the need to check for "end
of file" or "end of buffer" everywhere but also enables code
reuse in cases like reading a BigInt off a stream, which takes
a char[] as ctor argument. With a buffered stream you increase
the look-ahead until the entire number string is buffered and
can be passed to BigInt() by reference. BigInt could be
changed to be constructible from a range, but the raw
look-ahead enables use of SIMD to scan for end-of-line or
end-of-number that traditional input range primitives don't.
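With hypothetical buffering primitives lookAhead(n) (buffer and slice n bytes without consuming) and skip(n), both names invented, the BigInt case could look like this sketch:

```d
import std.ascii : isDigit;
import std.bigint : BigInt;

BigInt readNumber(Stream)(ref Stream stream)
{
    // Grow the look-ahead until the digit string ends.
    size_t len = 0;
    while (isDigit(stream.lookAhead(len + 1)[len]))
        ++len;

    // Pass the raw buffer slice to BigInt's ctor without copying.
    auto n = BigInt(cast(const(char)[]) stream.lookAhead(len));
    stream.skip(len);
    return n;
}
```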

-- 
Marco



Re: How to check if JSONValue of type object has a key?

2015-10-06 Thread Marco Leise via Digitalmars-d-learn
Am Tue, 06 Oct 2015 21:39:28 +
schrieb Fusxfaranto :

> Additionally, just like associative arrays, if you need to access 
> the value, you can get a pointer to it with the in operator (and 
> if the key doesn't exist, it will return a null pointer).
> 
> const(JSONValue)* p = "key" in root;
> if (p)
> {
>  // key exists, do something with p or *p
> }
> else
> {
>  // key does not exist
> }

And you could go further and write

if (auto p = "key" in root)
{
 // key exists, do something with p or *p
}
else
{
 // key does not exist
}

-- 
Marco



Re: Idioms you use

2015-10-05 Thread Marco Leise via Digitalmars-d
Am Sun, 04 Oct 2015 00:08:39 +0200
schrieb Artur Skawina via Digitalmars-d
:

>static ctfe = ctfeArr!( iota(256).map!isDigit() );
> 
>immutable typeof(R.front)[R.array().length] ctfeArr(alias R) = R.array();

I like that. Also that 1) In D everything is possible. And 2)
If not, there is a workaround, goto 1).

-- 
Marco



Re: std.benchmarking and getting rid of TickDuration

2015-10-05 Thread Marco Leise via Digitalmars-d
Should the examples have `pragma(inline, false)` on the
benchmarked functions? I'm not so worried about inlining as I
am about constant-folding the benchmarked expressions away.

Concerning the module review I tend to agree with you. Put it
this way: in std.datetime the functions were not so prominent
and flew under the radar, but as their own module they have to
add enough value to warrant that. But I am ok with that, too.
std.benchmark is not something you'd use in a production
environment on a daily basis like you would std.json,
std.logger or std.allocator, and you didn't add new API.

If there _was_ a review for a new module, I would swamp your
progress with feature requests like "validation of the return
value of benchmarked functions" or "benchmark for x seconds
with y iterations of the inner loop, for comparing benchmarks
in cases where one function is orders of magnitude faster
than the other". And I'm sure others have lots of ideas, too,
and it would turn into an endless bike-shedding discussion
and take another 6 months to get this merged, and no one wants
that, right?

-- 
Marco



Re: Idioms you use

2015-10-03 Thread Marco Leise via Digitalmars-d
Am Mon, 28 Sep 2015 21:40:43 +
schrieb Freddy :

> Are any D idioms you use that you like to share?
> Heres one of mine
> ---
> enum ctfe =
> {
>  return 0xdead & 0xbad;
> }();
> ---

Yep, using that often, although I try to get my head around
using functional style one-line expressions where possible to
avoid the boilerplate.

static immutable ctfe = {
bool[256] result;
foreach (i; 0 .. 256)
result[i] = isDigit(i);
return result;
}();

==>

static immutable bool[256] ctfe = iota(256).map!isDigit.array;

==>

static ctfe = ctfeArr!( iota(256).map!isDigit );

enum ctfeArr(alias r)()
{
// r.length doesn't work as static array size
enum length = r.length;
// immutable doesn't work on this (cannot modify const)
ElementType!(typeof(r))[length] result = r.array;
// Cross fingers here ...
return cast(immutable) result;
}

-- 
Marco



Re: __simd_sto confusion

2015-10-03 Thread Marco Leise via Digitalmars-d-learn
This is a bug in overload resolution when __vector(void[16])
is involved. You can work around it by changing float4 to
void16, only to run into an internal compiler error:
  backend/gother.c 988
So file a bug for both @ issues.dlang.org.
Also, it looks like DMD wants you to use the return value of
the intrinsic; is that expected?

-- 
Marco



Re: Walter and I talk about D in Romania

2015-10-03 Thread Marco Leise via Digitalmars-d-announce
Am Fri, 2 Oct 2015 07:25:44 -0400
schrieb Andrei Alexandrescu :

> Walter and I will travel to Brasov, Romania to hold an evening-long 
> event on the D language. There's been strong interest in the event with 
> over 300 registrants so far.
> 
> http://curiousminds.ro
> 
> Scott Meyers will guest star in a panel following the talks. We're all 
> looking forward to it!
> 
> 
> Andrei

That's a lot of people. You must be some kind of national
programming hero in Romania. Good luck and watch out for those
C++ moroi in the audience!

-- 
Marco



Re: http://wiki.dlang.org/Building_DMD improvements

2015-10-03 Thread Marco Leise via Digitalmars-d
Am Sat, 03 Oct 2015 10:38:51 +
schrieb Atila Neves :

> On Friday, 2 October 2015 at 11:06:52 UTC, Andrei Alexandrescu 
> wrote:
> > The Wiki page http://wiki.dlang.org/Building_DMD could be 
> > easily reorganized to make better.
> >
> > Currently it uses sections such as "Getting the sources", 
> > "Building the sources", etc. Within each seion there's a Posix 
> > section and a Windows section.
> >
> > However, folks using e.g. Windows have no interest in Posix and 
> > shouldn't have to skip through portions of the document.
> >
> > So the better organization would be to fork the page into two, 
> > and have http://wiki.dlang.org/Building_DMD link to them:
> >
> > * Building on Windows
> > * Building on Posix
> >
> > Could someone please look into this? I see the main author is 
> > BBaz, is (s)he active on this forum?
> >
> >
> > Thanks,
> >
> > Andrei
> 
> Better yet would be to have a process so simple that it doesn't 
> require a wiki.
> 
> Atila

You mean Posix: make && make install ?
Sure, but what about dependencies, an explanation of the
self-hosting process and optional variables? Even when it is
simple, I tend to look at such a wiki first to know whether
I'm missing out on some flag-enabled feature, or where the
files will be installed (/opt, /usr/local, ...).

-- 
Marco



Re: __simd_sto confusion

2015-10-03 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 03 Oct 2015 23:42:22 +
schrieb Nachtraaf :

> I changed the type of result to void16 like this:
> 
> float dot_simd1(float4  a, float4 b)
> {
>  void16 result = __simd(XMM.DPPS, a, b, 0xFF);
>  float value;
>  __simd_sto(XMM.STOSS, value, result);
>  return value;
> }
> 
> and for me this code compiles and runs without any errors now.
> I'm using DMD64 D Compiler v2.068 on Linux. If you got an 
> internal compiler error that means that it's a compiler bug 
> though I have no clue what. Did you try the same thing I did or 
> casting the variable?
> I guess I should file a bugreport for overload resolution if it's 
> not a duplicate for now?

Yes. At some point the intrinsics will need a more thorough
rework. Currently none of those that return void or int, or
that set flags, work as they should.

-- 
Marco



Re: std.data.json formal review

2015-10-02 Thread Marco Leise via Digitalmars-d
Am Tue, 28 Jul 2015 14:07:18 +
schrieb "Atila Neves" :

> Start of the two week process, folks.
> 
> Code: https://github.com/s-ludwig/std_data_json
> Docs: http://s-ludwig.github.io/std_data_json/
> 
> Atila

There is one thing I noticed today that I personally feel
strongly about: Serialized double values are not restored
accurately. That is, when I send a double value via JSON and
use enough digits to represent it accurately, it may not be
decoded to the same value. `std.json` does not have this
problem with the random values from [0..1) I tested with.
I also tried `LexOptions.useBigInt/.useLong` to no avail.

Looking at the unittests it seems the decision was deliberate,
as `approxEqual` is used in parsing tests. JSON specs don't
enforce any specific accuracy, but they say that you can
arrange for a lossless transmission of the widely supported
IEEE double values, by using up to 17 significant digits.
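The claim is easy to check: printing with 17 significant digits makes the text round-trip back to the identical double. A minimal sketch:

```d
import std.conv : to;
import std.format : format;

void main()
{
    double x = 1.0 / 3.0;
    // %.17g keeps 17 significant digits, enough for any IEEE double.
    string json = format("%.17g", x);
    // Parsing the text yields the bit-identical value back.
    assert(json.to!double == x);
}
```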

-- 
Marco



Re: Mac IDE with Intellisense

2015-10-01 Thread Marco Leise via Digitalmars-d-learn
Am Sat, 26 Sep 2015 10:38:25 +
schrieb Gary Willoughby :

> Auto-complete in D is tricky because of this feature and no-one 
> has invested any time to figure out a nice way to provide 
> auto-complete for this.

Mono-D does have UFCS auto-complete. The plugin is going to
bit-rot though, since its only developer is done studying.

-- 
Marco



Re: WTF does "Enforcement failed" actually mean?

2015-10-01 Thread Marco Leise via Digitalmars-d-learn
Am Thu, 01 Oct 2015 08:52:43 +
schrieb John Colvin :

> On Thursday, 1 October 2015 at 07:08:00 UTC, Russel Winder wrote:
> > On Wed, 2015-09-30 at 23:35 -0700, Ali Çehreli via 
> > Digitalmars-d-learn wrote:
> >> On 09/30/2015 10:46 PM, Russel Winder via Digitalmars-d-learn 
> >> wrote:
> >> > [...]
> >> 
> >> It's coming from the following no-message enforce():
> >> 
> >>  enforce(!r.empty);
> >> 
> >> 
> >> https://github.com/D-Programming-Language/phobos/blob/master/std/algo
> >> rithm/iteration.d#L2481
> >> 
> >> You are using the no-seed version of reduce(), which uses the 
> >> first element as seed, which means that the range cannot be 
> >> empty.
> >
> > Well that explanation (*) makes it abundantly clear that the 
> > error reporting from this part of Phobos is distinctly 
> > substandard, let alone below par.
> >
> >
> > (*) Which is clear and informative!
> 
> Bug report? Then it'll get fixed.

The problem is that in our minds addition has an implicit seed
value of 0 and multiplication has 1, so a potentially empty
range doesn't immediately raise a red flag.

The correct thing to use, following this train of thought, is
http://dlang.org/phobos/std_algorithm_iteration.html#.sum
(Additionally, it provides better accuracy when summing
floating-point values.)
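A quick sketch of the difference:

```d
import std.algorithm.iteration : reduce, sum;

void main()
{
    int[] empty;

    // Without a seed the first element serves as the seed, so an
    // empty range throws "Enforcement failed" at runtime:
    // auto bad = empty.reduce!((x, y) => x + y);

    // With an explicit seed an empty range is fine:
    auto r = reduce!((x, y) => x + y)(0, empty);
    assert(r == 0);

    // sum has the implicit seed 0 (and better floating-point accuracy):
    assert(empty.sum == 0);
}
```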

-- 
Marco



Re: Interval Arithmetic

2015-10-01 Thread Marco Leise via Digitalmars-d-learn
Am Tue, 29 Sep 2015 21:04:00 +
schrieb Wulfrick :

> Is there an interval arithmetic library in D? I couldn’t find one.
> 
> In case I had to write my own, I understand that the IEEE 
> standard floating point arithmetic provides operations for 
> rounding up or down certain operations like summing, subtracting, 
> etc. (thus overriding the default behavior of rounding to nearest 
> representable).
> 
> How do I access this functionality in D? At first I thought that 
> std.math.nextDown and nextUp is what I needed, but not so. 
> Apparently these functions return the previous or next 
> representable *after* the calculation has been done.
> 
> For example, I would like the value of x+y rounded in the 
> arithmetic towards -\infty, which may or may not be nextDown(x+y).
> 
> Any luck?
> Thanks for reading!

Yes, Phobos provides you with this:
http://dlang.org/phobos/std_math.html#.FloatingPointControl
Read the help carefully. "End of the scope" generally means "}".

You can also use the C standard library from D and use:
http://www.cplusplus.com/reference/cfenv/fesetround/

  import core.stdc.fenv;
  fesetround( FE_DOWNWARD );
  auto z = x + y;

And if all that still isn't enough you can write it in inline
assembler using the `fldcw` mnemonic.

Note that the FP control word is per thread and any external
code you call or even buggy interrupt handlers could change or
reset it to defaults. Known cases include a faulty printer
driver and Delphi's runtime, which enables FP exceptions to
throw exceptions on division by 0. Just saying this so if it
ever happens you have it in the back of your mind. Against
interrupt handlers you probably cannot protect, but when
calling other people's code it would be best not to depend on
what the FP control word is set to on return.
`FloatingPointControl` is nice here, because you can
temporarily set the rounding mode directly for a block of FP
instructions where no external libraries are involved.
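A minimal sketch of the scoped variant:

```d
import std.math : FloatingPointControl;

void main()
{
    double x = 1.0, y = 3.0;
    {
        FloatingPointControl fpctrl;
        fpctrl.rounding = FloatingPointControl.roundDown; // toward -infinity
        auto z = x / y; // rounded down
    } // leaving the scope ("}") restores the previous rounding mode
}
```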

-- 
Marco



Re: Interval Arithmetic

2015-10-01 Thread Marco Leise via Digitalmars-d-learn
Am Thu, 01 Oct 2015 12:03:10 +
schrieb ponce :

> I have a RAII struct to save/restore the FP control word.
> It also handle the SSE control word which unfortunately exist.
> 
> https://github.com/p0nce/dplug/blob/master/plugin/dplug/plugin/fpcontrol.d

Nice to have in Phobos. I assume you have to set the correct
control word depending on whether you perform math on the FPU
or via SSE (as is standard for x86_64)? And I assume further
that DMD always uses FPU math and other compilers provide
flags to switch between FPU and SSE?

-- 
Marco


Re: std.data.json formal review

2015-09-30 Thread Marco Leise via Digitalmars-d
Am Tue, 29 Sep 2015 11:06:01 +
schrieb Marc Schütz :

> No, the JSON type should just store the raw unparsed token and 
> implement:
> 
>  struct JSON {
>  T to(T) if(isNumeric!T && is(typeof(T("" {
>  return T(this.raw);
>  }
>  }
> 
> The end user can then call:
> 
>  auto value = json.to!BigInt;

Ah, the duck-typing approach of accepting any numeric type
constructible from a string.

Still: you need to parse the number first to know how long
the digit string is that you pass to T's ctor. And then you
have two sets of syntaxes for numbers: JSON's and T's ctor's.
T could potentially parse numbers with the system locale's
setting for the decimal point, which may be ',' while JSON
uses '.', or support hexadecimal numbers, which are also
invalid JSON. On the other hand, a ctor for some integral
type may not support the exponential notation "2e10", which
could legitimately be used by JSON writers (Ruby's uses the
shortest notation) to save on bandwidth.

-- 
Marco



Re: std.data.json formal review

2015-09-28 Thread Marco Leise via Digitalmars-d
Am Tue, 18 Aug 2015 09:05:32 +
schrieb "Marc Schütz" :

> Or, as above, leave it to the end user and provide a `to(T)` 
> method that can support built-in types and `BigInt` alike.

You mean the user should write a JSON number parsing routine
on their own? Then which part is responsible for validating
the JSON constraints? If it is the to!(T) function, then it is
code duplication with a chance of getting something wrong;
if it is the JSON parser, then the number is parsed twice.
Besides, there is a lot of code to be shared for every T.

-- 
Marco



Re: pragma(inline, true) not very useful in its current state?

2015-09-27 Thread Marco Leise via Digitalmars-d
Am Sat, 26 Sep 2015 19:58:14 +0200
schrieb Artur Skawina via Digitalmars-d
:

> `allow` is the default state and always safe; for the cases
> where it's /undesirable/, there is noinline.
> 
> artur

No, what I meant was: when the compiler sees inline assembly
or anything else it deems unsafe to inline, you can convince
it anyway. It is more like @trusted on otherwise @system
functions. I just mentioned it so it is under consideration,
as I have seen it in LLVM.

-- 
Marco


