Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 21:13, Steven Schveighoffer wrote:
> Well, you don't need to use appender for that (and doing so is copying a
> lot of the data an extra time). All you need is to extend the pipe until
> there isn't any more new data, and it will all be in the buffer.
> 
> // almost the same line from your current version
> auto mypipe = openDev("../out/nist/2011.json.gz")
>   .bufd.unzip(CompressionFormat.gzip);
> 
> // This line here will work with the current release (0.0.2):
> while(mypipe.extend(0) != 0) {}
Thanks for this input, I updated the program to make use of this method
and compare it to the appender thing as well.

>> I will give the direct gunzip calls a try ...
I added direct gunzip calls as well... Those are really good, as long as
I do not try to get the data into RAM :) then it is "bad" again.
I wonder what the real difference is between the low-level solution with
its own appender and the C version. To me they look almost the same (ugly;
only the performance seems to be nice).

Funny thing is, if I add the clang address sanitizer to the C program,
I get almost the same numbers as for Java :)


> Yeah, with jsoniopipe being very raw, I wouldn't be sure it was usable
> in your case. The end goal is to have something fast, but very easy to
> construct. I wasn't planning on focusing on the speed (yet) like other
> libraries do, but ease of writing code to use it.
> 
> -Steve

--
Christian Köstlin


Re: Is it bad form to put code in package.d other than imports?

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Wednesday, January 03, 2018 06:10:10 Soulsbane via Digitalmars-d-learn 
wrote:
> I've only understood that imports should go in package.d. I'm
> seeing more and more packages on code.dlang.org using it for the
> packages primary code. Is this alright? As far as I can tell it's
> just bad form. It would be nice to have one of the maintainers
> higher up the food chain comment on this!
>
> Thanks!

The entire reason that the package.d feature was added was so that it would
be possible to split a module into a package without breaking code. Anything
beyond that was outside the scope of the feature, albeit not necessarily in
conflict with its original purpose.

For better or worse, a number of folks like being able to just import an
entire package at once using it, so it's not uncommon for folks to set it up
to do that, though personally, I'm not a fan of that.

All Phobos has done with package.d thus far is split modules into packages,
not add it for importing packages that already exist. As for leaving code in
package.d, the only package in Phobos that I'm aware of doing that on a
permanent basis is std.range, and I have no idea why that was done. When
std.algorithm and std.container were split, package.d was used for
documentation purposes (which makes a lot of sense), but there's no real
code in there. std.datetime's package.d has some now-deprecated
functionality that was left in there rather than being put in a module when
std.datetime was split, because there was no point in putting it in a
module. In the long run though, all of that will be gone from std.datetime's
package.d, and it too will only be used for documentation. I have no idea
what folks have been doing on code.dlang.org.

In terms of functionality, there really isn't much special about package.d.
If there were, we probably wouldn't have been able to talk Walter into it.
We were able to precisely because public imports already worked in a way
that allowed package.d to work. We just needed the feature to be added to
make it possible to import a package. So, package.d allows that, but beyond
that, it's just a normal module. It typically contains public imports for
the rest of the package, but it doesn't have to, and it can contain whatever
code you want to put in there. You can do whatever you want with it, though
really, using it for much of anything at all beyond splitting up a package
in place is beyond the scope of why it was introduced. But ultimately,
there's nothing really special about package.d, and different folks have
different ideas about it and how it should be used.
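
For illustration, the conventional setup might look roughly like this for a
made-up package (all of the names below are hypothetical):

// mylib/package.d - nothing but public imports, so `import mylib;`
// pulls in the whole package
module mylib;

public import mylib.parsing;
public import mylib.output;

// mylib/parsing.d - the actual code lives in the ordinary modules
module mylib.parsing;

string stripComments(string source) { return source; } // placeholder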

Personally, I don't think that it's very clean to be putting stuff other
than documentation in package.d, and I don't really like the idea of setting
it up to import the entire package under normal circumstances; I see it
simply as a way to split up a module in place without breaking code, but
that's just how I feel about it. I don't know that there is much in the way
of objective arguments about package.d and how it should be used.

However, at this point, the trend in best practices is towards using scoped,
selective imports as much as possible (since that reduces compilation times
and more closely associates the imports with what's using them), and that
pretty much flies in the face of the idea of importing an entire package at
once. You _can_ use selective imports and package.d together, but it also
makes the compiler do more work than if you just selectively imported from
the module that the symbol is actually in. And if you're putting code
directly in package.d, then it's not possible to import just that module
unless it doesn't have any public imports in it. So, from a flexibility
standpoint, it makes more sense to avoid putting symbols directly in
package.d.
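
In concrete terms (same hypothetical mylib as above), the choice is between:

// imports every module of the package through package.d's public imports
import mylib;

// versus naming the module the symbol actually lives in, which imports
// only that module and documents where the symbol comes from
import mylib.parsing : stripComments;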

However, AFAIK, there really isn't much in the way of best practices with
regards to package.d. All Phobos has done with it has been to split modules
into packages like it was originally intended, and beyond that, folks have
kind of done whatever they want. You may find a bunch of people agreeing
with you that folks shouldn't be putting normal code in package.d, but
AFAIK, there's no real general agreement on the matter one way or the other.
There isn't really even agreement about whether package.d should be used
when it isn't needed (and it's pretty much only needed when splitting up a
module in-place).

- Jonathan M Davis



Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 21:48, Steven Schveighoffer wrote:
> On 1/2/18 3:13 PM, Steven Schveighoffer wrote:
>> // almost the same line from your current version
>> auto mypipe = openDev("../out/nist/2011.json.gz")
>>    .bufd.unzip(CompressionFormat.gzip);
> 
> Would you mind telling me the source of the data? When I do get around
> to it, I want to have a good dataset to test things against, and would
> be good to use what others reach for.
> 
> -Steve
Hi Steve,

thanks for looking into this.
I use data from nist.gov, the Makefile includes these download instructions:
curl -s https://static.nvd.nist.gov/feeds/json/cve/1.0/nvdcve-1.0-2011.json.gz > out/nist/2011.json.gz

--
Christian Köstlin



Re: Efficient way to pass struct as parameter

2018-01-02 Thread Tim Hsu via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 22:49:20 UTC, Adam D. Ruppe wrote:

On Tuesday, 2 January 2018 at 22:17:14 UTC, Johan Engelen wrote:

Pass the Vector3f by value.


This is very frequently the correct answer to these questions! 
Never assume ref is faster if speed matters - it may not be.


However, speed really matters for me. I am writing a path tracing 
program. Rays will be constructed millions of times during computation, 
and passed to intersection-testing functions billions of times. After 
reading the comments here, it seems the ray will be passed by value to 
the intersection-testing function. I am not sure if a ray is small 
enough to be passed by value. It needs some experimenting.


Re: how to localize console and GUI apps in Windows

2018-01-02 Thread Andrei via Digitalmars-d-learn

On Friday, 29 December 2017 at 11:14:39 UTC, zabruk70 wrote:

On Friday, 29 December 2017 at 10:35:53 UTC, Andrei wrote:
Though it is not suitable for a GUI type of Windows 
application.


AFAIK, Windows GUIs have no ANSI/OEM problem.
You can use Unicode.


Partly, yes. Just as a test I tried to "russify" the example 
Windows GUI program that comes with the D installation pack 
(samples\d\winsamp.d). Window captions, button captions, and message 
box texts written in UTF-8 all show fine. But the direct text output 
functions CreateFont()/TextOut() render all Cyrillic from UTF-8 
strings into garbage.



For the Windows ANSI/OEM problem you can also use
https://dlang.org/phobos/std_windows_charset.html


Thank you very much; toMBSz() performs the requisite translation for the 
TextOut() function, with some workarounds.
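
For reference, a minimal sketch of that workaround (assuming the HDC comes
from the usual WM_PAINT handling and the selected font uses the matching
character set):

version (Windows)
{
    import core.stdc.string : strlen;
    import core.sys.windows.windows;      // HDC, TextOutA, ...
    import std.windows.charset : toMBSz;

    // Converts a UTF-8 D string to the current ANSI code page and draws
    // it with the non-Unicode TextOutA.
    void drawAnsiText(HDC hdc, int x, int y, string s)
    {
        auto p = toMBSz(s);               // zero-terminated ANSI copy
        TextOutA(hdc, x, y, p, cast(int) strlen(p));
    }
}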






Is it bad form to put code in package.d other than imports?

2018-01-02 Thread Soulsbane via Digitalmars-d-learn
I've only understood that imports should go in package.d. I'm 
seeing more and more packages on code.dlang.org using it for the 
package's primary code. Is this alright? As far as I can tell it's 
just bad form. It would be nice to have one of the maintainers 
higher up the food chain comment on this!


Thanks!


Re: Efficient way to pass struct as parameter

2018-01-02 Thread H. S. Teoh via Digitalmars-d-learn
On Tue, Jan 02, 2018 at 10:17:14PM +0000, Johan Engelen via Digitalmars-d-learn 
wrote:
[...]
> Passing by pointer (ref is the same) has large downsides and is
> certainly not always fastest. For small structs and if copying is not
> semantically wrong, just pass by value.

+1.


> More important: measure what bottlenecks your program has and optimize
> there.
[...]

It cannot be said often enough: premature optimization is the root of
all evil.  It makes your code less readable, less maintainable, more
bug-prone, and makes you spend far too much time and energy fiddling
with details that ultimately may not even matter, and worst of all, it
may not even be a performance win in the end, e.g., if you end up with
CPU cache misses / excessive RAM roundtrips because of too much
indirection, where you could have passed the entire struct in registers.

When it comes to optimization, there are 3 rules: profile, profile,
profile.  I used to heavily hand-"optimize" my code a lot (I come from a
strong C/C++ background -- premature optimization seems to be a common
malady among us in that crowd).  Then I started using a profiler, and I
suddenly had that sinking realization that all those countless hours of
tweaking my code to be "optimal" were wasted, because the *real*
bottleneck was somewhere else completely.  From many such experiences,
I've learned that (1) the real bottleneck is rarely where you predict it
to be, and (2) most real bottlenecks can be fixed with very simple
changes (sometimes even a 1-line change) with very big speed gains,
whereas (3) fixing supposed "inefficiencies" that aren't founded on real
evidence (i.e., using a profiler) usually cost many hours of time, add
tons of complexity, and rarely give you more than 1-2% speedups (and
sometimes can actually make your code perform *worse*: your code can
become so complicated the compiler's optimizer is unable to generate
optimal code for it).


T

-- 
MSDOS = MicroSoft's Denial Of Service


Re: Efficient way to pass struct as parameter

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, January 02, 2018 22:49:20 Adam D. Ruppe via Digitalmars-d-learn 
wrote:
> On Tuesday, 2 January 2018 at 22:17:14 UTC, Johan Engelen wrote:
> > Pass the Vector3f by value.
>
> This is very frequently the correct answer to these questions!
> Never assume ref is faster if speed matters - it may not be.

It also makes for much cleaner code if you pretty much always pass by value
and then only start dealing with ref or auto ref when you know that you need
it - especially if you're going to need to manually overload the function on
refness.

But for better or worse, a lot of this sort of thing ultimately depends on
what the optimizer does to a particular piece of code, and that's far from
easy to predict given everything that an optimizer can do these days -
especially if you're using ldc rather than dmd.

- Jonathan M Davis



Re: Efficient way to pass struct as parameter

2018-01-02 Thread Johan Engelen via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 18:21:13 UTC, Tim Hsu wrote:
I am creating a Vector3 structure. I use a struct to avoid the GC. 
However, the struct will be copied when passed as a parameter to a 
function



struct Ray {
Vector3f origin;
Vector3f dir;

@nogc @system
this(Vector3f *origin, Vector3f *dir) {
this.origin = *origin;
this.dir = *dir;
}
}

How can I pass the struct more efficiently?


Pass the Vector3f by value.

There is not one best solution here: it depends on what you are 
doing with the struct, and how large the struct is. It depends on 
whether the function will be inlined. It depends on the CPU. And 
probably 10 other things.
Vector3f is a small struct (I'm guessing it's 3 floats?); pass it 
by value and it will be passed in registers. This "copy" costs 
nothing on x86: the CPU has to load the floats from memory into 
registers anyway before it can write them to the target Vector3f, 
regardless of how you pass it.


You can play with some code here: https://godbolt.org/g/w56jmA

Passing by pointer (ref is the same) has large downsides and is 
certainly not always fastest. For small structs and if copying is 
not semantically wrong, just pass by value.
More important: measure what bottlenecks your program has and 
optimize there.
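
To make the pass-by-value advice concrete, a minimal sketch (assuming
Vector3f really is three floats; the dot product stands in for a real
intersection test):

struct Vector3f { float x, y, z; }

struct Ray
{
    Vector3f origin;
    Vector3f dir;
}

// Both parameters by value: on the usual x86-64 Linux/macOS ABI a struct
// this small travels in registers, so no indirection is involved.
@nogc pure nothrow
float dot(Vector3f a, Vector3f b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Simplified stand-in for an intersection test: true when the ray is not
// (almost) parallel to a plane with the given normal.
@nogc pure nothrow
bool canHitPlane(Ray r, Vector3f normal)
{
    auto d = dot(r.dir, normal);
    return d < -1e-6f || d > 1e-6f;
}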


- Johan




Filling up a FormatSpec

2018-01-02 Thread rumbu via Digitalmars-d-learn

Is there any way to parse a format string into a FormatSpec?

FormatSpec has a private function "fillUp" which is not 
accessible.


I need to provide formatting capabilities for a custom data type; 
I've already written the standard function:


void toString(C)(scope void delegate(const(C)[]) sink, 
FormatSpec!C fs) const


but I want to provide another function:

string toString(C)(const(C)[] fmt) and forward it to the standard 
one.


The problem is that initializing FormatSpec with fmt does not 
parse it:


auto fs = FormatSpec!C(fmt); //this will just put fmt in 
fs.trailing


I know that I can use format() for this, but I don't want to 
import 6000 lines of code; I would prefer to interpret the parsed 
FormatSpec myself.
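
fillUp is private, but FormatSpec's public writeUpToNextSpec seems to do the
actual parsing: it writes any literal text to an output range and stops after
filling in the fields for the next %-spec. A rough sketch (not checked
against every corner case):

import std.array : appender;
import std.format : FormatSpec;

void main()
{
    auto fs = FormatSpec!char("value: %10.4f");
    auto literal = appender!string();   // receives the text before the spec

    fs.writeUpToNextSpec(literal);      // parses up to and including %10.4f

    assert(literal.data == "value: ");
    assert(fs.spec == 'f');
    assert(fs.width == 10);
    assert(fs.precision == 4);
}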


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/2/18 3:13 PM, Steven Schveighoffer wrote:

// almost the same line from your current version
auto mypipe = openDev("../out/nist/2011.json.gz")
   .bufd.unzip(CompressionFormat.gzip);


Would you mind telling me the source of the data? When I do get around 
to it, I want to have a good dataset to test things against, and would 
be good to use what others reach for.


-Steve


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/2/18 1:01 PM, Christian Köstlin wrote:

On 02.01.18 15:09, Steven Schveighoffer wrote:

On 1/2/18 8:57 AM, Adam D. Ruppe wrote:

On Tuesday, 2 January 2018 at 11:22:06 UTC, Stefan Koch wrote:

You can make it much faster by using a sliced static array as buffer.


Only if you want data corruption! It keeps a copy of your pointer
internally: https://github.com/dlang/phobos/blob/master/std/zlib.d#L605

It also will always overallocate new buffers on each call


There is no efficient way to use it. The implementation is substandard
because the API limits the design.


iopipe handles this quite well. And deals with the buffers properly
(yes, it is very tricky. You have to ref-count the zstream structure,
because it keeps internal pointers to *itself* as well!). And no, iopipe
doesn't use std.zlib, I use the etc.zlib functions (but I poached some
ideas from std.zlib when writing it).

https://github.com/schveiguy/iopipe/blob/master/source/iopipe/zip.d

I even wrote a json parser for iopipe. But it's far from complete. And
probably needs updating since I changed some of the iopipe API.

https://github.com/schveiguy/jsoniopipe

Depending on the use case, it might be enough, and should be very fast.


Thanks Steve for this proposal (actually I already had an iopipe version 
on my harddisk that I applied to this problem). It's more or less your 
unzip example plus putting the data into an appender (I hope this is how it 
should be done to get the data into RAM).


Well, you don't need to use appender for that (and doing so is copying a 
lot of the data an extra time). All you need is to extend the pipe until 
there isn't any more new data, and it will all be in the buffer.


// almost the same line from your current version
auto mypipe = openDev("../out/nist/2011.json.gz")
  .bufd.unzip(CompressionFormat.gzip);

// This line here will work with the current release (0.0.2):
while(mypipe.extend(0) != 0) {}

//But I have a fix for a bug that hasn't been released yet, this would 
work if you use iopipe-master:

mypipe.ensureElems();

// getting the data is as simple as looking at the buffer.
auto data = mypipe.window; // ubyte[] of the data


iopipe is already better than the normal dlang version, almost like
java, but still far from the solution. I updated
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip

I will give the direct gunzip calls a try ...

In terms of json parsing, I had really nice results with the fast.json 
pull parser, but it's comparing apples with oranges a little, because 
I did not pull out all the data there.


Yeah, with jsoniopipe being very raw, I wouldn't be sure it was usable 
in your case. The end goal is to have something fast, but very easy to 
construct. I wasn't planning on focusing on the speed (yet) like other 
libraries do, but ease of writing code to use it.


-Steve


Re: Efficient way to pass struct as parameter

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, January 02, 2018 19:27:50 Igor Shirkalin via Digitalmars-d-learn 
wrote:
> On Tuesday, 2 January 2018 at 18:45:48 UTC, Jonathan M Davis
>
> wrote:
> > [...]
>
> Smart optimizer should think for you without any "auto" private
> words if function is inlined. I mean LDC compiler first of all.

A smart optimizer may very well optimize out a number of copies. The fact
that D requires that structs be moveable opens up all kinds of optimization
opportunities - even more so when stuff gets inlined. However, if you want
to guarantee that unnecessary copies aren't happening, you have to ensure
that ref gets used with lvalues and does not get used with rvalues, and that
tends to mean either using auto ref or overloading functions on ref.

- Jonathan M Davis



Re: Slices and Dynamic Arrays

2018-01-02 Thread Ali Çehreli via Digitalmars-d-learn

On 01/02/2018 11:17 AM, Jonathan M Davis wrote:
> On Tuesday, January 02, 2018 10:37:17 Ali Çehreli via Digitalmars-d-learn
> wrote:
>> For these reasons, the interface that the program is using is a "slice".
>> Dynamic array is a different concept owned and implemented by the GC.
>
> Except that from the standpoint of the API, T[] _is_ the dynamic array -
> just like std::vector is the dynamic array and not whatever its guts are -


I understand your point but I think it's more confusing to call it a 
dynamic array in the following code:


int[42] array;
int[] firstHalf = array[0..$/2];

I find it simpler to see it as a slice of existing elements.

In contrast, calling it a dynamic array would require explaining not to 
worry, no memory is being allocated; the dynamic array is backed by the 
stack. Not very different from calling it a slice and then explaining 
the GC involvement in the case of append.


> Regardless, the fact that they're a container/range hybrid is what makes
> this such a mess to understand. The semantics actually work fantastically if
> you understand them, but it sure makes understanding them annoyingly
> difficult.
>
> - Jonathan M Davis

Agreed.

Ali



Re: Efficient way to pass struct as parameter

2018-01-02 Thread Igor Shirkalin via Digitalmars-d-learn
On Tuesday, 2 January 2018 at 18:45:48 UTC, Jonathan M Davis 
wrote:

[...]


Smart optimizer should think for you without any "auto" private 
words if function is inlined. I mean LDC compiler first of all.


Re: Slices and Dynamic Arrays

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, January 02, 2018 10:37:17 Ali Çehreli via Digitalmars-d-learn 
wrote:
> As soon as we call it "dynamic array", I can't help but think "adding
> elements". Since GC is in the picture when that happens, it's essential
> to think GC when adding an element is involved.
>
> Further, evident from your description it's a "slice" until you add
> elements because the underlying memory e.g. can be a stack-allocated
> fixed-length array.
>
> For these reasons, the interface that the program is using is a "slice".
> Dynamic array is a different concept owned and implemented by the GC.

Except that from the standpoint of the API, T[] _is_ the dynamic array -
just like std::vector is the dynamic array and not whatever its guts are -
and the semantics are the same whether it's backed by the GC or by a static
array or by malloc-ed memory or whatever. Appending works exactly the same.
Reallocation works the same. None of that changes based on whether the
dynamic array is backed by GC-allocated memory or not. It's just that the
capacity is guaranteed to be 0 if it isn't GC-allocated and so the first
append operation is guaranteed to reallocate. The semantics of T[] itself
don't change regardless, and most code doesn't need to care one whit about
what kind of memory backs the dynamic array. No matter what memory backed it
to start with, you get the same appending semantics. You get the same
semantics when accessing the data. You get the same semantics when passing
the dynamic array around. None of that depends on what kind of memory the
dynamic array is a slice of. T[] functions as a dynamic array regardless of
what memory backed it to start with, and as such, I completely agree with
the spec calling it the dynamic array.
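
A small illustration: a slice of a static array supports every dynamic-array
operation; it simply starts out with capacity 0, so the first append
relocates it into GC memory.

import std.stdio;

void main()
{
    int[8] buf = [1, 2, 3, 4, 5, 6, 7, 8];
    int[] arr = buf[0 .. 4];   // backed by stack memory, not the GC

    writeln(arr.capacity);     // 0: appending is guaranteed to reallocate
    arr ~= 99;                 // now GC-backed; buf itself is untouched
    writeln(buf[4]);           // still 5
    writeln(arr);              // [1, 2, 3, 4, 99]
}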

And as soon as you start talking about T[] not being a dynamic array, you
get this weird situation where T[] has all of the operations and semantics
of a dynamic array, but you're not calling it a dynamic array simply because
it happens to be a slice of memory that wasn't GC-allocated. So, you have
this type in the type system whose semantics don't care what memory
currently backs it and where code will act on it identically whether it's
GC-backed or not, but folks want to then act like it's something different
and treat it differently just because it happens to not be GC-backed at the
moment - and the same function could be called with both GC-backed and
non-GC-backed dynamic arrays. The type and its semantics are the same
regardless.

Of course, understanding how and when reallocation occurs matters if you
want to understand the exact semantics of copying a dynamic array around or
when appending or reserve is going to result in a reallocation, but that
doesn't necessitate calling the GC-managed buffer the dynamic array. It just
requires understanding how it's the GC that manages capacity, reserve, and
appending rather than the dynamic array itself. But the API is that of a
dynamic array regardless. If it weren't, you couldn't append to T[] any more
than you can append to an arbitrary range. As soon as you insist on calling
them slices, you're basically talking about them as if they were simply
ranges rather that than the container/range hybrid that they are.

Regardless, the fact that they're a container/range hybrid is what makes
this such a mess to understand. The semantics actually work fantastically if
you understand them, but it sure makes understanding them annoyingly
difficult.

- Jonathan M Davis




Re: Efficient way to pass struct as parameter

2018-01-02 Thread Seb via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 18:21:13 UTC, Tim Hsu wrote:
I am creating a Vector3 structure. I use a struct to avoid the GC. 
However, the struct will be copied when passed as a parameter to a 
function



struct Ray {
Vector3f origin;
Vector3f dir;

@nogc @system
this(Vector3f *origin, Vector3f *dir) {
this.origin = *origin;
this.dir = *dir;
}
}

How can I pass the struct more efficiently?



If you want the compiler to ensure that a struct doesn't get 
copied, you can disable its postblit:


 @disable this(this);
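
A tiny example of what that buys you (the Ray layout here is just a guess):

struct Ray
{
    float ox, oy, oz;      // origin   (assumed layout)
    float dx, dy, dz;      // direction
    @disable this(this);   // copying is now a compile-time error
}

void trace(ref const Ray r) { /* ... */ }

void main()
{
    auto r = Ray(0, 0, 0, 0, 0, 1);
    trace(r);              // fine: passed by reference
    // auto copy = r;      // error: struct Ray is not copyable
}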

Now, there are a couple of goodies in std.typecons like 
RefCounted or Unique that allow you to pass structs around without 
needing to worry about memory allocation:


https://dlang.org/phobos/std_typecons.html#RefCounted
https://dlang.org/phobos/std_typecons.html#Unique

Example: https://run.dlang.io/is/3rbqpn

Of course, you can always roll your own allocator:

https://run.dlang.io/is/uNmn0d


Re: why @property cannot be pass as ref ?

2018-01-02 Thread Ali Çehreli via Digitalmars-d-learn

On 12/29/2017 07:49 PM, ChangLong wrote:

On Wednesday, 20 December 2017 at 18:43:21 UTC, Ali Çehreli wrote:
Thanks to Mengü for linking to that section. I have to make 
corrections below.


Ali



Thanks for the explanation, Ali and Mengu.

What I am trying to do is implement a unique data type (the ownership is 
automatically moved into the new handle).


consider this code:

import std.stdio;

struct S {
     @disable this(this);
     void* socket;
     this (void* i) {
     socket = i;
     }

     void opAssign()(auto ref S s ){
     socket = s.socket ;
     s.socket = null ;
     }

     @nogc @safe
     ref auto byRef() const pure nothrow return
     {
     return this;
     }
}


void main() {
     static __gshared size_t socket;
     auto lvalue = S();   // pass rvalue into lvalue, working
     S l2 =  void;
     l2 = lvalue;    // pass lvalue into lvalue, working
     auto l3 = l2.byRef; // pass lvalue into lvalue, not working
}



I cannot assign l2 to l3 because "Error: struct app.S is not copyable 
because it is annotated with @disable", but it works if I initialize l3 with 
void.


I hope others can answer that. For what it's worth, here is an earlier 
experiment that Vittorio Romeo and I had played with at C++Now 2017. It 
uses std.algorithm.move:


import std.stdio;
import std.algorithm;

int allocate() {
static int i = 42;
writeln("allocating ", i);
return i++;
}

void deallocate(int i) {
writeln("deallocating ", i);
}

struct UniquePtr {
int i = 666;// To easily differentiate UniquePtr.init from 0
this(int i) {
this.i = i;
}

~this() {
deallocate(i);
}

@disable this(this);
}

void use(UniquePtr p) {
writeln("using ", p.i);
}

UniquePtr producer_rvalue(int i) {
return i % 2 ? UniquePtr(allocate()) : UniquePtr(allocate());
}

UniquePtr producer_lvalue() {
writeln("producer_lvalue");
auto u = UniquePtr(allocate());
writeln("allocated lvalue ", u.i);
return u;
}

void main() {
use(UniquePtr(allocate()));

auto u = UniquePtr(allocate());
use(move(u));

auto p = producer_rvalue(0);
use(move(p));
auto p2 = producer_lvalue();
use(move(p2));
}

Ali




Re: Efficient way to pass struct as parameter

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, January 02, 2018 18:21:13 Tim Hsu via Digitalmars-d-learn wrote:
> I am creating a Vector3 structure. I use a struct to avoid the GC.
> However, the struct will be copied when passed as a parameter to a
> function
>
>
> struct Ray {
>  Vector3f origin;
>  Vector3f dir;
>
>  @nogc @system
>  this(Vector3f *origin, Vector3f *dir) {
>  this.origin = *origin;
>  this.dir = *dir;
>  }
> }
>
> How can I pass the struct more efficiently?

When passing a struct to a funtion, if the argument is an rvalue, it will be
moved rather than copied, but if it's an lvalue, it will be copied. If the
parameter is marked with ref, then the lvalue will be passed by reference
and not copied, but rvalues will not be accepted (and unlike with C++,
tacking on const doesn't affect that). Alternatively, if the function is
templated (and you can add empty parens to templatize a function if you want
to), then an auto ref parameter will result in different template
instantiations depending on whether the argument is an lvalue or rvalue. If
it's an lvalue, then the template will be instantiated with that parameter
as ref, so the argument will be passed by ref and no copy will be made,
whereas if it's an rvalue, then the parameter will end up without having
ref, so the argument will be moved. If the function isn't templated and
can't be templated (e.g. if it's a member function of a class and you want it
to be virtual), then you'd need to overload the function with overloads that
have ref and don't have ref in order to get the same effect (though the
non-ref overload can simply forward to the ref overload). That does get a
bit tedious though if you have several parameters.
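
A rough sketch of both options (Vector3f is assumed to be a small value
struct):

struct Vector3f { float x, y, z; }

// Option 1: templatize with empty parens and take auto ref. Lvalues
// instantiate the ref version (no copy); rvalues instantiate the non-ref
// version and are moved in.
float length()(auto ref const Vector3f v)
{
    import std.math : sqrt;
    return sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Option 2: a non-template (e.g. virtual-friendly) function overloaded on
// refness; the non-ref overload just forwards to the ref one.
float lengthFn(ref const Vector3f v)
{
    import std.math : sqrt;
    return sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

float lengthFn(const Vector3f v)     // accepts rvalues
{
    return lengthFn(v);              // v is an lvalue here, so this calls the ref overload
}

void main()
{
    auto v = Vector3f(1, 2, 3);
    length(v);                       // ref instantiation
    length(Vector3f(4, 5, 6));       // rvalue: moved into the non-ref instantiation
    lengthFn(v);                     // ref overload
    lengthFn(Vector3f(4, 5, 6));     // non-ref overload
}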

If you want to guarantee that no copy will ever be made, then you will have
to either use ref or a pointer, which could get annoying with rvalues (since
you'd have to assign them to a variable) and could actually result in more
copies, because it would restrict the compiler's ability to use moves
instead of copies.

In general, the best way is likely going to be to use auto ref where
possible and overload functions where not. Occasionally, there is talk of
adding something similar to C++'s const& to D, but Andrei does not want to
add rvalue references to the language, and D's const is restrictive enough
that requiring const to avoid the copy would arguably be overly restrictive.
It may be that someone will eventually propose a feature with semantics that
Andrei will accept that acts similarly to const&, but it has yet to happen.
auto ref works for a lot of cases though, and D's ability to do moves
without a move constructor definitely reduces the number of unnecessary
copies.

See also:

https://stackoverflow.com/questions/35120474/does-d-have-a-move-constructor

- Jonathan M Davis



Re: Adding Toc for the "longish" spec pages.

2018-01-02 Thread Seb via Digitalmars-d-learn

On Friday, 29 December 2017 at 12:17:47 UTC, Seb wrote:
On Friday, 29 December 2017 at 12:13:02 UTC, Mike Franklin 
wrote:

[...]


Yes, opening an issue or PR is the best way to move things 
forward.
In this case, this is already on my radar and will happen soon 
(as of a couple of weeks ago we already have footer pagination). 
It's pretty ugly because Ddoc is used for dlang.org, which makes 
trivial stuff like this rather complicated :/


TOC generation is there -> 
https://github.com/dlang/dlang.org/pull/2043


Re: Slices and Dynamic Arrays

2018-01-02 Thread Ali Çehreli via Digitalmars-d-learn


First, I'm in complete agreement with Steve on this. I wrote a response 
to you yesterday, which I decided not to send after counting to ten, 
because despite being much more difficult, I see that your view can also 
be agreeable.


On 01/02/2018 10:02 AM, Jonathan M Davis wrote:
> On Tuesday, January 02, 2018 07:53:00 Steven Schveighoffer via
> Digitalmars-d-learn wrote:
>> On 1/1/18 12:18 AM, Jonathan M Davis wrote:
>>> A big problem with the term slice though is that it means more than just
>>> dynamic arrays - e.g. you slice a container to get a range over it, so
>>> that range is a slice of the container even though no arrays are
>>> involved at all. So, you really can't rely on the term slice meaning
>>> dynamic array. Whether it does or not depends on the context. That
>>> means that the fact that a number of folks have taken to using the term
>>> slice to mean T[] like the D Slices article talks about tends to create
>>> confusion when the context is not clear. IMHO, the D Slices article
>>> should be updated to use the correct terminology, but I don't think
>>> that the author is willing to do that.
>> The problem with all of this is that dynamic array is a defined term
>> *outside* of D [1]. And it doesn't mean exactly what D calls dynamic
>> arrays.
>>
>> This is why it's confusing to outsiders, because they are expecting the
>> same thing as a C++ std::vector, or a Java/.Net ArrayList, etc.

My view as well.

>> And D
>> "array slices" (the proper term IMO) are not the same.

Exactly!

>> I'm willing to change the article to mention "Array slices" instead of
>> just "slices", because that is a valid criticism. But I don't want to
>> change it from slices to dynamic arrays, since the whole article is
>> written around the subtle difference. I think the difference is important.
>>
>> -Steve
>>
>> [1] https://en.wikipedia.org/wiki/Dynamic_array
>
> I completely agree that the distinction between the dynamic array and the
> memory that backs it is critical to understanding the semantics when copying
> arrays around, and anyone who thinks that the dynamic array itself directly
> controls and owns the memory is certainly going to have some problems
> understanding the full semantics, but I don't agree that it's required to
> talk about the underlying GC-allocated memory buffer as being the dynamic
> array for that to be understood - especially when the dynamic array can be
> backed with other memory to begin with and still have the same semantics
> (just with a capacity of 0 and thus guaranteed reallocation upon appending
> or calling reserve). That distinction can be made just fine using the
> official D terminology.

As soon as we call it "dynamic array", I can't help but think "adding 
elements". Since GC is in the picture when that happens, it's essential 
to think GC when adding an element is involved.


Further, evident from your description it's a "slice" until you add 
elements because the underlying memory e.g. can be a stack-allocated 
fixed-length array.


For these reasons, the interface that the program is using is a "slice". 
Dynamic array is a different concept owned and implemented by the GC.


> I also don't agree that the way that D uses the term dynamic array
> contradicts the wikipedia article. What it describes is very much how D's
> dynamic arrays behave. It's just that D's dynamic arrays are a bit special
> in that they let the GC manage the memory instead of encapsulating it all in
> the type itself, and copying them slices the memory instead of copying it
> and thus causing an immediate reallocation like you would get with
> std::vector or treating it as a full-on reference type like Java does. But
> the semantics of what happens when you append to a D dynamic array are the
> same as appending to something like std::vector save for the fact that you
> might end up having the capacity filled sooner, because another dynamic
> array referring to the same memory grew into that space, resulting in a
> reallocation - but std::vector would have reallocated as soon as you copied
> it. So, some of the finer details get a bit confusing if you expect a
> dynamic array to behave _exactly_ like std::vector, but at a high level, the
> semantics are basically the same.

You seem to anchor your view of array slices on appending elements to 
them. I see them mainly as accessors into existing elements. Add to that 
the fact that a slice does not itself have the means to manage its 
memory, and it remains a slice for me. Again, the dynamic array is a GC 
thing that works behind the scenes.


I can understand your point of view but I find it more confusing.

> On the basis that you seem to be arguing that D's dynamic arrays aren't
> really dynamic arrays, I could see someone arguing that std::vector isn't a
> dynamic array, because unlike ArrayList, it isn't a reference type and thus
> appending to the copy doesn't append to the original - or the other way
> around; 

Efficient way to pass struct as parameter

2018-01-02 Thread Tim Hsu via Digitalmars-d-learn
I am creating a Vector3 structure. I use a struct to avoid the GC. 
However, the struct will be copied when passed as a parameter to a 
function



struct Ray {
Vector3f origin;
Vector3f dir;

@nogc @system
this(Vector3f *origin, Vector3f *dir) {
this.origin = *origin;
this.dir = *dir;
}
}

How can I pass the struct more efficiently?


Re: Get aliased type

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, January 02, 2018 11:31:28 Steven Schveighoffer via Digitalmars-
d-learn wrote:
> On 1/2/18 7:45 AM, John Chapman wrote:
> > On Tuesday, 2 January 2018 at 12:19:19 UTC, David Nadlinger wrote:
> >> There is indeed no way to do this; as you say, aliases are just names
> >> for a particular reference to a symbol. Perhaps you don't actually
> >> need the names in your use case, though?
> >>
> >>  — David
> >
> > The idea was to distinguish between a BSTR (an alias for wchar* from
> > core.sys.windows.wtypes used widely with COM) and wchar* itself, chiefly
> > so that I could call the appropriate Windows SDK functions on them to
> > convert them to and from D strings. Although BSTRs look like wchar*s to
> the end user they are not really interchangeable - for example, calling
> > SysFreeString on a regular wchar* will cause a crash.
> >
> > According to the docs, a BSTR is prefixed with its length and ends in a
> > null character, but I'm not sure if checking for the existence of those
> > is going to be good enough
>
> Hm... perhaps the correct path is to define a BSTR as a struct with a
> wchar in it. I wouldn't even use Typedef since they aren't the same
> thing (one doesn't convert to the other).
>
> But I don't know how that would affect existing code.

Well, regardless, if he wants to treat BSTR and wchar* as different, he's
going to need to use different types for them, because aliases are literally
just for the programmer's benefit and are not distinguishable via any kind
of traits. So, ultimately, he either needs separate types, or he's going to
have to leave it up to the programmer to deal with the differences
themselves.

- Jonathan M Davis




Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
On 02.01.18 15:09, Steven Schveighoffer wrote:
> On 1/2/18 8:57 AM, Adam D. Ruppe wrote:
>> On Tuesday, 2 January 2018 at 11:22:06 UTC, Stefan Koch wrote:
>>> You can make it much faster by using a sliced static array as buffer.
>>
>> Only if you want data corruption! It keeps a copy of your pointer
>> internally: https://github.com/dlang/phobos/blob/master/std/zlib.d#L605
>>
>> It also will always overallocate new buffers on each call
>> 
>>
>> There is no efficient way to use it. The implementation is substandard
>> because the API limits the design.
> 
> iopipe handles this quite well. And deals with the buffers properly
> (yes, it is very tricky. You have to ref-count the zstream structure,
> because it keeps internal pointers to *itself* as well!). And no, iopipe
> doesn't use std.zlib, I use the etc.zlib functions (but I poached some
> ideas from std.zlib when writing it).
> 
> https://github.com/schveiguy/iopipe/blob/master/source/iopipe/zip.d
> 
> I even wrote a json parser for iopipe. But it's far from complete. And
> probably needs updating since I changed some of the iopipe API.
> 
> https://github.com/schveiguy/jsoniopipe
> 
> Depending on the use case, it might be enough, and should be very fast.
> 
> -Steve
Thanks Steve for this proposal (actually I already had an iopipe version
on my harddisk that I applied to this problem). It's more or less your
unzip example plus putting the data into an appender (I hope this is how it
should be done to get the data into RAM).

iopipe is already better than the normal dlang version, almost like
java, but still far from the solution. I updated
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip

I will give the direct gunzip calls a try ...

In terms of json parsing, I had really nice results with the fast.json
pull parser, but it's comparing apples with oranges a little, because
I did not pull out all the data there.

---
Christian


Re: Slices and Dynamic Arrays

2018-01-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, January 02, 2018 07:53:00 Steven Schveighoffer via Digitalmars-
d-learn wrote:
> On 1/1/18 12:18 AM, Jonathan M Davis wrote:
> > A big problem with the term slice though is that it means more than just
> > dynamic arrays - e.g. you slice a container to get a range over it, so
> > that range is a slice of the container even though no arrays are
> > involved at all. So, you really can't rely on the term slice meaning
> > dynamic array. Whether it does or not depends on the context. That
> > means that the fact that a number of folks have taken to using the term
> > slice to mean T[] like the D Slices article talks about tends to create
> > confusion when the context is not clear. IMHO, the D Slices article
> > should be updated to use the correct terminology, but I don't think
> > that the author is willing to do that.
> The problem with all of this is that dynamic array is a defined term
> *outside* of D [1]. And it doesn't mean exactly what D calls dynamic
> arrays.
>
> This is why it's confusing to outsiders, because they are expecting the
> same thing as a C++ std::vector, or a Java/.Net ArrayList, etc. And D
> "array slices" (the proper term IMO) are not the same.
>
> I'm willing to change the article to mention "Array slices" instead of
> just "slices", because that is a valid criticism. But I don't want to
> change it from slices to dynamic arrays, since the whole article is
> written around the subtle difference. I think the difference is important.
>
> -Steve
>
> [1] https://en.wikipedia.org/wiki/Dynamic_array

I completely agree that the distinction between the dynamic array and the
memory that backs it is critical to understanding the semantics when copying
arrays around, and anyone who thinks that the dynamic array itself directly
controls and owns the memory is certainly going to have some problems
understanding the full semantics, but I don't agree that it's required to
talk about the underlying GC-allocated memory buffer as being the dynamic
array for that to be understood - especially when the dynamic array can be
backed with other memory to begin with and still have the same semantics
(just with a capacity of 0 and thus guaranteed reallocation upon appending
or calling reserve). That distinction can be made just fine using the
official D terminology.

I also don't agree that the way that D uses the term dynamic array
contradicts the wikipedia article. What it describes is very much how D's
dynamic arrays behave. It's just that D's dynamic arrays are a bit special
in that they let the GC manage the memory instead of encapsulating it all in
the type itself, and copying them slices the memory instead of copying it
and thus causing an immediate reallocation like you would get with
std::vector or treating it as a full-on reference type like Java does. But
the semantics of what happens when you append to a D dynamic array are the
same as appending to something like std::vector save for the fact that you
might end up having the capacity filled sooner, because another dynamic
array referring to the same memory grew into that space, resulting in a
reallocation - but std::vector would have reallocated as soon as you copied
it. So, some of the finer details get a bit confusing if you expect a
dynamic array to behave _exactly_ like std::vector, but at a high level, the
semantics are basically the same.
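
A small example of that last point (whether b's append below reallocates
depends on the block's spare capacity, but the visible result is the same
either way):

import std.stdio;

void main()
{
    auto a = [1, 2, 3, 4];
    auto b = a;        // both slices refer to the same GC-allocated buffer

    b ~= 5;            // b ends at the buffer's used length, so it may grow in place
    a ~= 42;           // a no longer ends at the used length, so a reallocates
                       // instead of stomping b's new element

    writeln(b);        // [1, 2, 3, 4, 5]
    writeln(a);        // [1, 2, 3, 4, 42]
}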

On the basis that you seem to be arguing that D's dynamic arrays aren't
really dynamic arrays, I could see someone arguing that std::vector isn't a
dynamic array, because unlike ArrayList, it isn't a reference type and thus
appending to the copy doesn't append to the original - or the other way
around; ArrayList isn't a dynamic array, because appending to a "copy"
affects the original. The semantics of what happens when copying the array
around are secondary to what being a dynamic array actually means, much as
they obviously have a significant effect on how you write your code. The
critical bits are how the memory is contiguous and how appending is
amortized to O(1). The semantics of copying clearly vary considerably
depending on the exact implementation even if you ignore what D has done.

I think that your article has been a great help, and the fact that you do a
good job of describing the distinction between T[] and the memory behind it
is critical. I just disagree with the terminology that you used when you did
it, and I honestly think that the terminology used has a negative effect on
understanding and dealing with dynamic arrays backed by non-GC-allocated
memory, because the result seems to be that folks think that there's
something different about them and how they behave (since they don't point
to a "dynamic array" as your article uses the term), when in reality,
there's really no difference in the semantics aside from the fact that their
capacity is guaranteed to be 0 and thus reallocation is guaranteed upon
appending or calling reserve, whereas for GC-backed dynamic arrays, 

Re: structs inheriting from and implementing interfaces

2018-01-02 Thread Chris M. via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 00:54:13 UTC, Laeeth Isharc wrote:

On Friday, 29 December 2017 at 12:59:21 UTC, rjframe wrote:

On Fri, 29 Dec 2017 12:39:25 +, Nicholas Wilson wrote:


[...]


I've actually thought about doing this to get rid of a bunch 
of if qualifiers in my function declarations. `static 
interface {}` compiles but doesn't [currently] seem to mean 
anything to the compiler, but could be a hint to the 
programmer that nothing will directly implement it; it's a 
compile-time interface. This would provide a more generic way 
of doing stuff like `isInputRange`, etc.


Atila does something like this

https://code.dlang.org/packages/concepts


Glad you brought this up, looks quite useful.


Re: C++ interfaces and D dynamic arrays

2018-01-02 Thread Jacob Carlborg via Digitalmars-d-learn

On 2018-01-02 17:48, Void-995 wrote:

Hi, everyone.

I would like to have an interface that can be implemented and/or used 
from C++ in D. One of the things I would like to keep is the nice 
feature of D dynamic arrays in terms of bounds checks and "length" 
property.


Let's assume:

extern (C++) interface ICppInterfaceInD {
   ref const(int[]) indices() const;
}

class A: ICppInterfaceInD {
   private int[] m_indices;

   extern (C++) ref const(int[]) indices() const {
     return m_indices;
   }
}

All I want is keeping const correctness like in C++ so no one can modify 
m_indices and use that as property within const pointer. I thought it 
may be passed to C++ as some struct of sort:


struct wrappedArrray(T) {
   size_t length;
   T* ptr;
}

but it just doesn't want to be friendly with me.

How should I rethink the interface with being D-way efficient when using 
that interface inside of D?


I would recommend using a struct as above, or passing the pointer and length 
separately. Then create a function on the D side that converts between 
the struct and a D array.


Note that you cannot use a D array in a C++ interface, it will fail to 
compile.
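
Something along these lines, perhaps (names made up): only the
length/pointer pair crosses the C++ boundary, and a helper on the D side
rebuilds an ordinary slice from it.

extern (C++) struct IntArrayView
{
    size_t length;
    const(int)* ptr;
}

extern (C++) interface ICppInterfaceInD
{
    IntArrayView indices() const;
}

// D-side convenience: view the wrapped data as a normal bounds-checked
// slice; no copy is made.
const(int)[] asSlice(IntArrayView view)
{
    return view.ptr[0 .. view.length];
}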


--
/Jacob Carlborg


Re: structs inheriting from and implementing interfaces

2018-01-02 Thread flamencofantasy via Digitalmars-d-learn
On Saturday, 30 December 2017 at 16:23:05 UTC, Steven 
Schveighoffer wrote:

On 12/29/17 7:03 AM, Mike Franklin wrote:



Is that simply because it hasn't been implemented or suggested 
yet for D, or was there a deliberate design decision?
It was deliberate, but nothing says it can't actually be done. 
All an interface call is, is a thunk to grab the actual object, 
and then a call to the appropriate function from a static 
vtable. It's pretty doable to make a fake interface. In fact, 
I'm pretty sure someone did just this, I have no idea how far 
back in the forums to search, but you can probably find it.


Now, it would be nicer if the language itself supported it. And 
I don't see any reason why it couldn't be supported. The only 
issue I think could be an ABI difference between member calls 
of structs and classes, but I think they are the same.


-Steve


Don't forget to implement boxing/unboxing and then write a long 
blog post on all the dangers and negative effects of using 
interfaces with structs.




C++ interfaces and D dynamic arrays

2018-01-02 Thread Void-995 via Digitalmars-d-learn

Hi, everyone.

I would like to have an interface that can be implemented and/or 
used from C++ in D. One of the things I would like to keep is the 
nice feature of D dynamic arrays in terms of bounds checks and 
"length" property.


Let's assume:

extern (C++) interface ICppInterfaceInD {
  ref const(int[]) indices() const;
}

class A: ICppInterfaceInD {
  private int[] m_indices;

  extern (C++) ref const(int[]) indices() const {
return m_indices;
  }
}

All I want is keeping const correctness like in C++ so no one can 
modify m_indices and use that as property within const pointer. I 
thought it may be passed to C++ as some struct of sort:


struct wrappedArrray(T) {
  size_t length;
  T* ptr;
}

but it just doesn't want to be friendly with me.

How should I rethink the interface with being D-way efficient 
when using that interface inside of D?




Re: Get aliased type

2018-01-02 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/2/18 7:45 AM, John Chapman wrote:

On Tuesday, 2 January 2018 at 12:19:19 UTC, David Nadlinger wrote:
There is indeed no way to do this; as you say, aliases are just names 
for a particular reference to a symbol. Perhaps you don't actually 
need the names in your use case, though?


 — David


The idea was to distinguish between a BSTR (an alias for wchar* from 
core.sys.windows.wtypes used widely with COM) and wchar* itself, chiefly 
so that I could call the appropriate Windows SDK functions on them to 
convert them to and from D strings. Although BSTRs look like wchar*s to 
the end user they are not really interchangeable - for example, calling 
SysFreeString on a regular wchar* will cause a crash.


According to the docs, a BSTR is prefixed with its length and ends in a 
null character, but I'm not sure if checking for the existence of those 
is going to be good enough
Hm... perhaps the correct path is to define a BSTR as a struct with a 
wchar in it. I wouldn't even use Typedef since they aren't the same 
thing (one doesn't convert to the other).


But I don't know how that would affect existing code.
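
A rough sketch of that idea (the wrapper name is made up):

version (Windows)
{
    import core.sys.windows.wtypes : BSTR;   // alias for wchar*

    // A distinct type, so BSTR-only helpers can't be handed a plain
    // wchar* by accident.
    struct Bstr
    {
        BSTR ptr;
    }

    void release(Bstr b)
    {
        import core.sys.windows.oleauto : SysFreeString;
        SysFreeString(b.ptr);   // only wrapped BSTRs ever get here
    }

    // release(someWcharPtr);   // would not compile: wchar* is not a Bstr
}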

-Steve


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/2/18 8:57 AM, Adam D. Ruppe wrote:

On Tuesday, 2 January 2018 at 11:22:06 UTC, Stefan Koch wrote:

You can make it much faster by using a sliced static array as buffer.


Only if you want data corruption! It keeps a copy of your pointer 
internally: https://github.com/dlang/phobos/blob/master/std/zlib.d#L605


It also will always overallocate new buffers on each call 



There is no efficient way to use it. The implementation is substandard 
because the API limits the design.


iopipe handles this quite well. And deals with the buffers properly 
(yes, it is very tricky. You have to ref-count the zstream structure, 
because it keeps internal pointers to *itself* as well!). And no, iopipe 
doesn't use std.zlib, I use the etc.zlib functions (but I poached some 
ideas from std.zlib when writing it).


https://github.com/schveiguy/iopipe/blob/master/source/iopipe/zip.d

I even wrote a json parser for iopipe. But it's far from complete. And 
probably needs updating since I changed some of the iopipe API.


https://github.com/schveiguy/jsoniopipe

Depending on the use case, it might be enough, and should be very fast.

-Steve


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Adam D. Ruppe via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 11:22:06 UTC, Stefan Koch wrote:
You can make it much faster by using a sliced static array as 
buffer.


Only if you want data corruption! It keeps a copy of your pointer 
internally: 
https://github.com/dlang/phobos/blob/master/std/zlib.d#L605


It also will always overallocate new buffers on each call 



There is no efficient way to use it. The implementation is 
substandard because the API limits the design.


If we really want a fast std.zlib, the API will need to be 
extended with new functions to fix these. Those new functions 
will probably look a LOT like the underlying C functions... which 
is why I say just use them right now.



I suspect that most of the slowdown is caused by the GC,
as there should be only calls to the gzip library.


plz measure before spreading FUD about the GC.


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Adam D. Ruppe via Digitalmars-d-learn
On Tuesday, 2 January 2018 at 10:27:11 UTC, Christian Köstlin 
wrote:
After this I analyzed the first step of the process (gunzipping 
the data from a file to memory), and found out that dlang's 
UnCompress is much slower than java, ruby, and plain c.


Yeah, std.zlib is VERY poorly written. You can get much better 
performance by just calling the C functions yourself instead. 
(You can just import etc.c.zlib; it is still included.)


Improving it would mean changing the public API. I think the 
one-shot compress/uncompress functions are ok, but the streaming 
class does a lot of unnecessary work inside like copying stuff 
around.
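
For reference, a rough sketch of driving etc.c.zlib directly to inflate a
whole .gz file into memory (error handling kept to a minimum):

import etc.c.zlib;
import std.exception : enforce;
import std.file : read;

// Decompress an entire gzip file into a freshly allocated buffer.
ubyte[] gunzipFile(string path)
{
    auto input = cast(ubyte[]) read(path);

    z_stream zs;
    enforce(inflateInit2(&zs, 15 + 16) == Z_OK);  // 15 + 16: expect a gzip header
    scope (exit) inflateEnd(&zs);

    auto output = new ubyte[](input.length * 4);  // rough initial guess
    zs.next_in  = cast(ubyte*) input.ptr;
    zs.avail_in = cast(uint) input.length;

    while (true)
    {
        zs.next_out  = output.ptr + zs.total_out;
        zs.avail_out = cast(uint) (output.length - zs.total_out);

        auto rc = inflate(&zs, Z_NO_FLUSH);
        if (rc == Z_STREAM_END)
            return output[0 .. cast(size_t) zs.total_out];
        enforce(rc == Z_OK, "inflate failed");
        if (zs.avail_out == 0)
            output.length *= 2;                   // grow and keep going
    }
}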


Re: Slices and Dynamic Arrays

2018-01-02 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/1/18 12:18 AM, Jonathan M Davis wrote:


A big problem with the term slice though is that it means more than just
dynamic arrays - e.g. you slice a container to get a range over it, so that
range is a slice of the container even though no arrays are involved at all.
So, you really can't rely on the term slice meaning dynamic array. Whether
it does or not depends on the context. That means that the fact that a
number of folks have taken to using the term slice to mean T[] like the D
Slices article talks about tends to create confusion when the context is not
clear. IMHO, the D Slices article should be updated to use the correct
terminology, but I don't think that the author is willing to do that.
The problem with all of this is that dynamic array is a defined term 
*outside* of D [1]. And it doesn't mean exactly what D calls dynamic arrays.


This is why it's confusing to outsiders, because they are expecting the 
same thing as a C++ std::vector, or a Java/.Net ArrayList, etc. And D 
"array slices" (the proper term IMO) are not the same.


I'm willing to change the article to mention "Array slices" instead of 
just "slices", because that is a valid criticism. But I don't want to 
change it from slices to dynamic arrays, since the whole article is 
written around the subtle difference. I think the difference is important.


-Steve

[1] https://en.wikipedia.org/wiki/Dynamic_array


Re: Get aliased type

2018-01-02 Thread John Chapman via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 12:19:19 UTC, David Nadlinger wrote:
There is indeed no way to do this; as you say, aliases are just 
names for a particular reference to a symbol. Perhaps you don't 
actually need the names in your use case, though?


 — David


The idea was to distinguish between a BSTR (an alias for wchar* 
from core.sys.windows.wtypes used widely with COM) and wchar* 
itself, chiefly so that I could call the appropriate Windows SDK 
functions on them to convert them to and from D strings. Although 
BSTRs look like wchar*s to the end user they are not really 
interchangeable - for example, calling SysFreeString on a regular 
wchar* will cause a crash.


According to the docs, a BSTR is prefixed with its length and 
ends in a null character, but I'm not sure if checking for the 
existence of those is going to be good enough.


Re: Get aliased type

2018-01-02 Thread David Nadlinger via Digitalmars-d-learn

On Tuesday, 2 January 2018 at 11:42:49 UTC, John Chapman wrote:
Because an alias of a type is just another name for the same 
thing you can't test if they're different. I wondered if there 
was a way to get the aliased name, perhaps via traits? 
(.stringof returns the original type.)


There is indeed no way to do this; as you say, aliases are just 
names for a particular reference to a symbol. Perhaps you don't 
actually need the names in your use case, though?


 — David


Get aliased type

2018-01-02 Thread John Chapman via Digitalmars-d-learn
Because an alias of a type is just another name for the same 
thing you can't test if they're different. I wondered if there 
was a way to get the aliased name, perhaps via traits? (.stringof 
returns the original type.)


I can't use Typedef because I'm inspecting types from sources I 
don't control.


Re: Help optimizing UnCompress for gzipped files

2018-01-02 Thread Stefan Koch via Digitalmars-d-learn
On Tuesday, 2 January 2018 at 10:27:11 UTC, Christian Köstlin 
wrote:

Hi all,

over the holidays, I played around with processing some gzipped 
json data. The first version was implemented in ruby, but took too 
long, so I tried dlang. This was already faster, but not 
really satisfactorily fast. Then I wrote another version in java, 
which was much faster.


After this I analyzed the first step of the process (gunzipping 
the data from a file to memory), and found out that dlang's 
UnCompress is much slower than java, ruby, and plain c.


There was some discussion on the forum a while ago: 
http://forum.dlang.org/thread/pihxxhjgnveulcdta...@forum.dlang.org


The code I used and the numbers I got are here: 
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip


I used an i7 macbook with os x 10.13.2, ruby 2.5.0 built via 
rvm, python3 installed by homebrew, builtin clang compiler, 
ldc-1.7.0-beta1, java 1.8.0_152.


Is there anything I can do to speed up the dlang stuff?

Thanks in advance,
Christian


Yes indeed. You can make it much faster by using a sliced static 
array as buffer.

I suspect that most of the slowdown is caused by the GC,
as there should be only calls to the gzip library.


Help optimizing UnCompress for gzipped files

2018-01-02 Thread Christian Köstlin via Digitalmars-d-learn
Hi all,

over the holidays, I played around with processing some gzipped json
data. The first version was implemented in ruby, but took too long, so I
tried dlang. This was already faster, but not really satisfactorily fast.
Then I wrote another version in java, which was much faster.

After this I analyzed the first step of the process (gunzipping the data
from a file to memory), and found out that dlang's UnCompress is much
slower than java, ruby, and plain c.

There was some discussion on the forum a while ago:
http://forum.dlang.org/thread/pihxxhjgnveulcdta...@forum.dlang.org

The code I used and the numbers I got are here:
https://github.com/gizmomogwai/benchmarks/tree/master/gunzip

I used an i7 macbook with os x 10.13.2, ruby 2.5.0 built via rvm,
python3 installed by homebrew, builtin clang compiler, ldc-1.7.0-beta1,
java 1.8.0_152.

Is there anything I can do to speed up the dlang stuff?

Thanks in advance,
Christian