Re: We need to enhance the standard library!

2016-09-06 Thread Daniel Kozak via Digitalmars-d

Dne 7.9.2016 v 07:58 Brian via Digitalmars-d napsal(a):

The standard library is thin! It lacks many commonly used functions: 
for example, an HTTP client, a database abstraction layer, mail 
delivery, and a mathematical calculation library.


We can refer to the implementations of the Go/Java/Rust standard 
libraries, so that the whole language becomes more active:


https://golang.org/pkg/
https://docs.oracle.com/javase/8/docs/api/
https://doc.rust-lang.org/std/


No, I do not agree that we need an HTTP client, database abstraction 
layer and mail delivery in the standard library; those should just be 
packages, ideally available from code.dlang.org. What I do think should 
be part of the standard library is some form of async IO.




Re: CompileTime performance measurement

2016-09-06 Thread Rory McGuire via Digitalmars-d
On Tue, Sep 6, 2016 at 7:42 PM, Stefan Koch via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On Tuesday, 6 September 2016 at 10:42:00 UTC, Martin Nowak wrote:
>
>> On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:
>>
>>> I recently implemented __ctfeWriteln.
>>>
>>
>> Nice, is it only for your interpreter or can we move
>> https://trello.com/c/6nU0lbl2/24-ctfewrite to done? I think __ctfeWrite
>> would be a better primitive. And we could actually consider to specialize
>> std.stdio.write* for CTFE.
>>
>
> It's only for the current engine and only for Strings!
> See: https://github.com/dlang/druntime/pull/1643
> and https://github.com/dlang/dmd/pull/6101
>



Seriously Stefan, you make my day!

My libraries will be so much easier to write!


Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-09-06 Thread Ali Çehreli via Digitalmars-d

On 09/04/2016 05:37 AM, David Nadlinger wrote:
> On Sunday, 4 September 2016 at 12:33:07 UTC, Andrei Alexandrescu wrote:
>> Thanks for answering. Yes, we really need introspection of private
>> members. One way or another we need to support that.
>
> Do we, though? It's easy to state a general claim like this, but I find
> it hard to actually justify.
>
>  — David

Let me flip the question: What harm can there be when I pass my type to 
a template and that template accesses the private members?


Ali



Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-09-06 Thread Ali Çehreli via Digitalmars-d

On 09/04/2016 05:51 AM, Andrei Alexandrescu wrote:

* Tracing how and when a member mutates.

Ali



Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-09-06 Thread Ali Çehreli via Digitalmars-d

On 09/04/2016 05:37 AM, David Nadlinger wrote:
> On Sunday, 4 September 2016 at 12:33:07 UTC, Andrei Alexandrescu wrote:
>> Thanks for answering. Yes, we really need introspection of private
>> members. One way or another we need to support that.
>
> Do we, though? It's easy to state a general claim like this, but I find
> it hard to actually justify.
>
>  — David

Let me try to reword what I've already said elsewhere in this thread.

As the user of a library I shouldn't need to know whether a template of 
that library uses __traits(allMembers) or not. Unfortunately, if 
__traits(allMembers) depends on access rights, I am forced to mixin 
*every* template in my code, because I don't and shouldn't know the 
template's implementation. If I don't know their implementations, I have 
to prepare for the worst and mixin *every* template. That's insanity.


Even if I know that they don't use __traits(allMembers) today, they may 
change their implementation in the future. So, I really have to mixin 
every template today.


Further, I have to be on my toes and watch every feature of every 
library in case they changed e.g. a function to a function template in a 
new release. Oh yes, I have to mixin that new template as well! (Again, 
because I can't be sure whether they use __traits(allMembers) or not.)


This new behavior and its mixin workaround are so insane to me that I 
can't even find ways to spell it out clearly. This behavior kills IFTI 
altogether, because I don't know who uses __traits(allMembers). I can't 
rely on IFTI. I have to mixin every template instantiation myself. Crazy.


Ali
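
To make the problem concrete, here is a minimal two-module sketch of 
the situation Ali describes (module and symbol names are invented for 
illustration, and the two files are shown in one listing):

```d
// --- lib.d ---
module lib;

// Lists the members of T that are visible *from this module*.
string[] memberNames(T)()
{
    string[] names;
    foreach (m; __traits(allMembers, T))
        names ~= m;
    return names;
}

// --- user.d ---
module user;
import lib;

struct S
{
    private int hidden;
    int visible;
}

// With visibility-dependent allMembers, memberNames!S() instantiated
// here still performs the member lookup in module lib, where `hidden`
// is not visible. The workaround is to mixin the template into this
// module so the lookup happens where S's private members are visible,
// and the user cannot know in advance which templates need that.
```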



We need to enhance the standard library!

2016-09-06 Thread Brian via Digitalmars-d
The standard library is thin! It lacks many commonly used 
functions: for example, an HTTP client, a database abstraction 
layer, mail delivery, and a mathematical calculation library.


We can refer to the implementations of the Go/Java/Rust standard 
libraries, so that the whole language becomes more active:


https://golang.org/pkg/
https://docs.oracle.com/javase/8/docs/api/
https://doc.rust-lang.org/std/


Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d

Dne 6.9.2016 v 22:51 deadalnix via Digitalmars-d napsal(a):


On Tuesday, 6 September 2016 at 07:52:47 UTC, Daniel Kozak wrote:
No, it is a really important rule. If there were automatic promotion 
to float for auto, it would hurt performance in cases where you want 
int, and it would break things.



Performance has nothing to do with it. In fact, float division is 
way faster than integer division; try it. It is all about correctness. 
Integer and floating-point division have different semantics.




You are right, on my PC the speed is the same, but I remember there 
were some performance problems the last time I checked (something about 
only one FPU on my Bulldozer CPU).




Re: associative arrays: how to insert key and return pointer in 1 step to avoid searching twice

2016-09-06 Thread Jon Degenhardt via Digitalmars-d

On Tuesday, 6 September 2016 at 04:32:52 UTC, Daniel Kozak wrote:

Dne 6.9.2016 v 06:14 mogu via Digitalmars-d napsal(a):

On Tuesday, 6 September 2016 at 01:17:00 UTC, Timothee Cour 
wrote:

is there a way to do this efficiently with associative arrays:

aa[key]=value;
auto ptr=key in aa;

without suffering the cost of the 2nd search (the compiler should 
know ptr during aa[key]=value, but it seems it's not exposed)


auto pa = &(aa[key] = value);


Yep, but this is an implementation detail, so be careful.


My question as well; this occurs often when I use AAs. The above 
technique works in the cases I've tried. However, to Daniel's point, 
from the spec I don't find it clear whether it's expected to work. 
It would be useful to have better clarity on this. Anyone have more 
details?


Below is a test for simple class and struct cases; they work at 
present. The template is the type of helper I've wanted. I don't 
trust this particular template, but it'd be useful to know if 
there is a way to get something like this.


--Jon

/* Note: Not a general template. Fails for nested classes (compile error). */

T* getOrInsertNew(T, K)(ref T[K] aa, K key)
if (is(T == class) || is(T == struct))
{
    T* p = (key in aa);
    static if (is(T == class))
        return (p !is null) ? p : &(aa[key] = new T());
    else static if (is(T == struct))
        return (p !is null) ? p : &(aa[key] = T());
    else
        static assert(0, "Invalid object type");
}

class  FooClass  { int x = 0; }
struct BarStruct { int x = 0; }

void main(string[] args)
{
    FooClass[string] aaFoo;
    BarStruct[string] aaBar;

    /* Class is a reference type. Pointer should be to the instance in the AA. */
    auto foo1 = aaFoo.getOrInsertNew("foo1");
    foo1.x = 100;
    auto foo1b = getOrInsertNew(aaFoo, "foo1");
    assert(foo1 == foo1b && foo1.x == foo1b.x && foo1b.x == 100);

    /* Struct is a value type. Will the pointer be to the instance in the AA? */
    auto bar1 = aaBar.getOrInsertNew("bar1");
    bar1.x = 100;
    auto bar1b = getOrInsertNew(aaBar, "bar1");
    assert(bar1 == bar1b && bar1.x == bar1b.x && bar1b.x == 100);

    import std.stdio;
    writeln("Success");
}


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 12:00, finalpatch via Digitalmars-d
 wrote:
> On Wednesday, 7 September 2016 at 01:38:47 UTC, Manu wrote:
>>
>> On 7 September 2016 at 11:04, finalpatch via Digitalmars-d
>>  wrote:
>>>
>>>
>>> It shouldn't be hard to have the framework look at the buffer size and
>>> choose the scalar version when the number of elements is small; it wasn't
>>> done that way simply because we didn't need it.
>>
>>
>> No, what's hard is working this into D's pipeline patterns seamlessly.
>
>
> The lesson I learned from this is that you need the user code to provide a
> lot of extra information about the algorithm at compile time for the
> templates to work out a way to fuse pipeline stages together efficiently.
>
> I believe it is possible to get something similar in D, because D has more
> powerful templates than C++ and D also has some type introspection which C++
> lacks.  Unfortunately I'm not as good at D, so I can only provide some ideas
> rather than actual working code.
>
> Once this problem is solved, the benefit is huge.  It allowed me to perform
> high level optimizations (streaming load/save, prefetching, dynamic
> dispatching depending on data alignment etc.) in the main loop which
> automatically benefits all kernels and pipelines.

Exactly!


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Wednesday, 7 September 2016 at 01:38:47 UTC, Manu wrote:
On 7 September 2016 at 11:04, finalpatch via Digitalmars-d 
 wrote:


It shouldn't be hard to have the framework look at the buffer 
size and choose the scalar version when the number of elements 
is small; it wasn't done that way simply because we didn't need 
it.


No, what's hard is working this into D's pipeline patterns 
seamlessly.


The lesson I learned from this is that you need the user code to 
provide a lot of extra information about the algorithm at compile 
time for the templates to work out a way to fuse pipeline stages 
together efficiently.


I believe it is possible to get something similar in D, because D 
has more powerful templates than C++ and D also has some type 
introspection which C++ lacks.  Unfortunately I'm not as good at 
D, so I can only provide some ideas rather than actual working 
code.


Once this problem is solved, the benefit is huge.  It allowed me 
to perform high level optimizations (streaming load/save, 
prefetching, dynamic dispatching depending on data alignment 
etc.) in the main loop which automatically benefits all kernels 
and pipelines.




Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 11:04, finalpatch via Digitalmars-d
 wrote:
>
> It shouldn't be hard to have the framework look at the buffer size and
> choose the scalar version when the number of elements is small; it wasn't
> done that way simply because we didn't need it.

No, what's hard is working this into D's pipeline patterns seamlessly.


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Wednesday, 7 September 2016 at 00:21:23 UTC, Manu wrote:
The end of a scan line is special-cased. If I need 12 pixels 
for the last iteration but there are only 8 left, an instance 
of Kernel::InputVector is allocated on the stack, the 8 
remaining pixels are memcpy'd into it, and it is then sent to 
the kernel. Output from the kernel is also assigned to a stack 
variable first, and then 8 pixels are memcpy'd to the output 
buffer.


Right, and this is a classic problem with this sort of 
function; it is only more efficient if numElements is suitably 
long.
See, I often wonder if it would be worth being able to provide 
both functions, a scalar and an array version, and have the 
algorithms select between them intelligently.


We normally process full HD or higher resolution images so the 
overhead of having to copy the last iteration was negligible.


It was fairly easy to put together a scalar version as they are 
much easier to write than the SIMD ones.  In fact I had scalar 
version for every SIMD kernel,  and use them for unit testing.


It shouldn't be hard to have the framework look at the buffer 
size and choose the scalar version when the number of elements 
is small; it wasn't done that way simply because we didn't need it.




Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 07:11, finalpatch via Digitalmars-d
 wrote:
> On Tuesday, 6 September 2016 at 14:47:21 UTC, Manu wrote:
>
>>> with a main loop that reads the source buffer in *12*-pixel steps, calls
>>> MySimpleKernel 3 times, then calls AnotherKernel 4 times.
>>
>>
>> It's an interesting thought. What did you do when buffers weren't a
>> multiple of the kernels?
>
>
> The end of a scan line is special-cased. If I need 12 pixels for the last
> iteration but there are only 8 left, an instance of Kernel::InputVector is
> allocated on the stack, the 8 remaining pixels are memcpy'd into it, and it
> is then sent to the kernel. Output from the kernel is also assigned to a
> stack variable first, and then 8 pixels are memcpy'd to the output buffer.

Right, and this is a classic problem with this sort of function; it is
only more efficient if numElements is suitably long.
See, I often wonder if it would be worth being able to provide both
functions, a scalar and an array version, and have the algorithms select
between them intelligently.
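
A rough sketch of that idea in D (the kernel, its name, and the cutoff 
value are invented for illustration; the array overload stands in for a 
hand-written SIMD version):

```d
import std.math : sqrt;

// Scalar version: one element at a time.
float invSqrt(float x) { return 1 / sqrt(x); }

// Array version: a stand-in for a SIMD kernel that only pays off
// above some element count.
void invSqrt(float[] xs)
{
    foreach (ref x; xs)
        x = 1 / sqrt(x);
}

// The pipeline picks between them based on length; the cutoff here
// is a placeholder for a tuned value.
void apply(float[] data)
{
    enum simdCutoff = 16;
    if (data.length < simdCutoff)
        foreach (ref x; data)
            x = invSqrt(x);
    else
        invSqrt(data);
}
```

The open question in the thread is how such a selection could be made 
transparent inside D's range pipelines rather than hand-rolled per 
algorithm.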


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 01:54, Wyatt via Digitalmars-d
 wrote:
> On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote:
>>
>>
>> A central premise of performance-oriented programming which I've
>> employed my entire career, is "where there is one, there is probably
>> many", and if you do something to one, you should do it to many.
>
>
> From a conceptual standpoint, this sounds like the sort of thing array
> languages like APL and J thrive on, so there's solid precedent for the
> concept.  I might suggest looking into optimising compilers in that space
> for inspiration and such; APEX, for example:
> http://www.snakeisland.com/apexup.htm

Thanks, that's really interesting, I'll check it out.


> Of course, this comes with the caveat that this is (still!) some relatively
> heavily-academic stuff.  And I'm not sure to what extent that can help
> mitigate the problem of relaxing type requirements such that you can e.g.
> efficiently ,/⍉ your 4 2⍴"LR" vector for SIMD on modern processors.

That's not what I want though.
I intend to hand-write that function (I was just giving examples of
how auto-vectorisation almost always fails), the question here is, how
to work that new array function into our pipelines transparently...



Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 14:47:21 UTC, Manu wrote:

with a main loop that reads the source buffer in *12*-pixel 
steps, calls MySimpleKernel 3 times, then calls AnotherKernel 4 
times.


It's an interesting thought. What did you do when buffers weren't 
a multiple of the kernels?


The end of a scan line is special-cased. If I need 12 pixels for 
the last iteration but there are only 8 left, an instance of 
Kernel::InputVector is allocated on the stack, the 8 remaining 
pixels are memcpy'd into it, and it is then sent to the kernel. 
Output from the kernel is also assigned to a stack variable first, 
and then 8 pixels are memcpy'd to the output buffer.


Re: Promotion rules ... why no float?

2016-09-06 Thread deadalnix via Digitalmars-d

On Tuesday, 6 September 2016 at 07:52:47 UTC, Daniel Kozak wrote:
No, it is a really important rule. If there were automatic 
promotion to float for auto, it would hurt performance in cases 
where you want int, and it would break things.



Performance has nothing to do with it. In fact, float 
division is way faster than integer division; try it. It is all 
about correctness. Integer and floating-point division have 
different semantics.
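
A minimal example of that semantic difference (the `float(a)` 
constructor-style conversion is the same one Steven suggests elsewhere 
in this digest):

```d
import std.stdio;

void main()
{
    int a = 7, b = 2;
    writeln(a / b);        // integer division truncates: prints 3
    writeln(float(a) / b); // floating-point division: prints 3.5
    writeln(-7 / 2);       // truncation is toward zero: prints -3
}
```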




Re: Valid to assign to field of struct in union?

2016-09-06 Thread Johan Engelen via Digitalmars-d

On Tuesday, 6 September 2016 at 17:58:44 UTC, Timon Gehr wrote:


I don't think so (although your case could be made to work 
easily enough). This seems to be accepts-invalid.


What do you think of the original example [1] in the bug report 
that uses `mixin Proxy!i;`?

[1] https://issues.dlang.org/show_bug.cgi?id=16471#c0


Re: @property Incorrectly Implemented?

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 3:18 PM, John wrote:


Currently it seems that @property isn't implemented correctly. For some
reason, when you try to take the pointer of a property, it returns a
delegate. This just seems plain wrong; the whole purpose of a property
is for a function to behave as if it weren't a function. There are also
some inconsistencies in the behavior: "get" is implemented one way and
"set" is implemented another.

http://ideone.com/ZgGT2G

&t.x // returns "ref int delegate()"
&t.x()   // ok returns "int*", but defeats purpose of @property
&(t.j = 10)  // shouldn't this return "ref int delegate(int)" ?

It would be nice to get this behavior fixed, so that it doesn't become
set in stone. I think returning a delegate pointer is not what people
would expect, nor is there really any use case for it.


Just FYI, at the moment I believe the only difference (aside from the 
defunct -property switch) between a @property function and a 
non-@property function is:


int foo1() { return 1; }
@property int foo2() { return 1; }

pragma(msg, typeof(foo1)); // int function()
pragma(msg, typeof(foo2)); // int

That's it. typeof returns a different thing. All @property functions act 
just like normal functions when used in all other cases, and property 
syntax (assignment and getting) work on non-@property functions.


This situation is less than ideal. But it's been battled about dozens of 
times on the forums (including the very reasonable points you are 
making). It hasn't been solved, and the cynic in me says it won't ever be.


-Steve


Re: @property Incorrectly Implemented?

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d

On Tuesday, 6 September 2016 at 19:18:11 UTC, John wrote:


Currently it seems that @property isn't implemented correctly. 
For some reason, when you try to take the pointer of a property, 
it returns a delegate. This just seems plain wrong; the whole 
purpose of a property is for a function to behave as if it 
weren't a function. There are also some inconsistencies in the 
behavior: "get" is implemented one way and "set" is implemented 
another.


http://ideone.com/ZgGT2G

&t.x // returns "ref int delegate()"
&t.x()   // ok returns "int*", but defeats purpose of 
@property
&(t.j = 10)  // shouldn't this return "ref int 
delegate(int)" ?


It would be nice to get this behavior fixed, so that it doesn't 
become set in stone. I think returning a delegate pointer is not 
what people would expect, nor is there really any use case for 
it.


With properties, the & operator is the only way to get the 
function itself and not its return value. The reason is that the 
return value of a function is not necessarily an lvalue, so 
taking its address is not always correct. Imagine this:


@property int x() { return 3; }

As 3 is an rvalue, you cannot take its address. That's the 
difference between a true field and a computed one.


The purpose of properties is the following:

struct S
{
    @property int x() { /* whatever */ }
    int y() { /* whatever */ }
}

writeln(typeof(S.x).stringof); // prints int
writeln(typeof(S.y).stringof); // prints int delegate()


Re: @property Incorrectly Implemented?

2016-09-06 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 06, 2016 19:18:11 John via Digitalmars-d wrote:
> Currently it seems that @property isn't implemented correctly.
> For some reason, when you try to take the pointer of a property,
> it returns a delegate. This just seems plain wrong; the whole
> purpose of a property is for a function to behave as if it
> weren't a function. There are also some inconsistencies in the
> behavior: "get" is implemented one way and "set" is implemented
> another.
>
> http://ideone.com/ZgGT2G
>
>  &t.x // returns "ref int delegate()"
>  &t.x()   // ok returns "int*", but defeats purpose of
> @property
>  &(t.j = 10)  // shouldn't this return "ref int delegate(int)"
> ?
>
> It would be nice to get this behavior fixed, so that it doesn't
> become set in stone. I think returning a delegate pointer is not
> what people would expect, nor is there really any use case for it.

Okay. How would it work to safely get a pointer to anything but the
@property function when taking its address? &t.x() is just going to give you
the address of the return value - which in most cases, is going to be a
temporary, so if that even compiles in most cases, it's a bit scary. Sure,
if the @property function returns by ref, then it can work, but an @property
function which returns by ref isn't worth much, since you're then giving
direct access to the member variable which is what an @property function is
usually meant to avoid. If you wanted to do that, you could just use a
public member variable.

So, given that most @property functions are not going to return by ref, how
does it make any sense at all for taking the address of an @property
function to do anything different than give you a delegate? Sure, that's not
what happens when you take the address of a variable, but you're not dealing
with a variable. You're dealing with an @property function which is just
trying to emulate a variable.

The reality of the matter is that an @property function is _not_ a variable.
It's just trying to emulate one, and that abstraction falls apart _really_
fast once you try and do much beyond getting and setting the value - e.g.
passing by ref falls flat on its face. We could do better than we currently do
(e.g. making += lower to code that uses both the getter and the setter when
the getter doesn't return by ref), but there are some areas where a property
function simply can't act like a variable, because it isn't one. There isn't
even a guarantee that an @property function is backed by memory. It could be
a completely calculated value, in which case, expecting to get an address of
a variable when taking the address of the @property function makes even less
sense.
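
A tiny illustration of such a fully computed property (this type is 
invented for the example, not taken from the thread):

```d
struct Circle
{
    float radius;

    // No backing field: the value exists only while it is being
    // computed, so there is no variable whose address a hypothetical
    // &circumference-as-int-pointer could sensibly yield. A delegate
    // is the only thing the address can reasonably be.
    @property float circumference() { return 2 * 3.14159f * radius; }
}
```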

- Jonathan M Davis



Re: @property Incorrectly Implemented?

2016-09-06 Thread ag0aep6g via Digitalmars-d

On 09/06/2016 09:18 PM, John wrote:

&(t.j = 10)  // shouldn't this return "ref int delegate(int)" ?


`&t.j` should and does. With `= 10`, it's definitely a call, just like 
`&t.x()`.



It would be nice to get this behavior fixed, so that it doesn't become
set in stone.


Unfortunately, it already kinda is. Just flipping the switch would break 
essentially all D code in existence. That's deemed unacceptable by the 
leadership, as far as I know.


If this can even be fixed, it must be done very carefully. The -property 
compiler switch is currently being deprecated. Maybe it can be 
repurposed later on to change behavior to your liking. But that's at 
least a couple releases in the future, i.e. months, maybe years.


Re: Usability of D for Visually Impaired Users

2016-09-06 Thread ag0aep6g via Digitalmars-d

On 09/06/2016 08:47 PM, Sai wrote:

1. The "Jump to" section at the top lists all the items available in
that module nicely, but the layout could be improved if it were listed
as a bunch of columns instead of one giant list with flow layout.


That's a solid idea. Unfortunately, variance in name length is large. 
Might be hard to find a good column width. But it's definitely worth 
exploring.



Even better if they are listed as the "cheat sheet" available in
algorithm module (which is lovely BTW). Can this cheat sheet be
automated for all modules?


The newer, DDOX-based version of the documentation has generated tables 
like that. You can find those docs here:


http://dlang.org/library-prerelease/index.html

It's supposed to become the main/default form of documentation 
soonishly. One thing we have to figure out is how to consolidate the 
hand-written cheat sheets with the generated ones. For example, 
std.algorithm modules currently have both. That's confusing for the reader.



2. I know red is the color of Mars, is there any way to change the theme
to blue or something soft?


Principally, we could offer an alternative stylesheet with another 
color. That's going to be forgotten during maintenance, though, 
especially if the demand for it is low.



Since we can download the documentation, is
there an easy way to do it myself maybe?


You can of course edit the stylesheet. In the zip, that's 
dmd2/html/d/css/style.css.


The color codes for the different reds are mentioned in a comment at the 
top. I hope they're up to date. A couple search/replace operations 
should take care of most of it.


The logo is colored independently. It's a relatively simple SVG file. 
Just edit the "background" color in dmd2/html/d/images/dlogo.svg.


I'm not sure if this qualifies as "easy".


@property Incorrectly Implemented?

2016-09-06 Thread John via Digitalmars-d


Currently it seems that @property isn't implemented correctly. 
For some reason, when you try to take the pointer of a property, 
it returns a delegate. This just seems plain wrong; the whole 
purpose of a property is for a function to behave as if it 
weren't a function. There are also some inconsistencies in the 
behavior: "get" is implemented one way and "set" is implemented 
another.


http://ideone.com/ZgGT2G

&t.x // returns "ref int delegate()"
&t.x()   // ok returns "int*", but defeats purpose of 
@property
&(t.j = 10)  // shouldn't this return "ref int delegate(int)" 
?


It would be nice to get this behavior fixed, so that it doesn't 
become set in stone. I think returning a delegate pointer is not 
what people would expect, nor is there really any use case for 
it.


Re: Usability of D for Visually Impaired Users

2016-09-06 Thread Sai via Digitalmars-d
I have a few suggestions, especially for people like me with 
migraines; it could make the docs a bit easier on the eyes and 
overall less stressful.


1. The "Jump to" section at the top lists all the items available 
in that module nicely, but the layout could be improved if it 
were listed as a bunch of columns instead of one giant list with 
flow layout.


Even better if they are listed as the "cheat sheet" available in 
algorithm module (which is lovely BTW). Can this cheat sheet be 
automated for all modules?



2. I know red is the color of Mars, is there any way to change 
the theme to blue or something soft? Since we can download the 
documentation, is there an easy way to do it myself maybe?



PS: As many people have already said, the documentation has 
improved very much recently. Thank you to all the people working 
on it.





Re: Valid to assign to field of struct in union?

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 14:56, Johan Engelen wrote:

Hi all,
  I have a question about the validity of this code:
```
void main()
{
    struct A {
        int i;
    }
    struct S
    {
        union U
        {
            A first;
            A second;
        }
        U u;

        this(A val)
        {
            u.second = val;
            assign(val);
        }

        void assign(A val)
        {
            u.first.i = val.i+1;
        }
    }
    enum a = S(A(1));

    assert(a.u.first.i == 2);
}
```

My question is: is it allowed to assign to a field of a struct inside a
union, without there having been an assignment to the (full) struct before?
...


I don't think so (although your case could be made to work easily 
enough). This seems to be accepts-invalid. Another case, perhaps 
demonstrating more clearly what is going on in the compiler:


float foo(){
    union U{
        int a;
        float b;
    }
    U u;
    u.b=1234;
    u.a=3;
    return u.b; // error
}
pragma(msg, foo());


float bar(){
    struct A{ int a; }
    struct B{ float b; }
    union U{
        A f;
        B s;
    }
    U u;
    u.s.b=1234;
    u.f.a=0;
    return u.s.b; // ok
}
pragma(msg, bar()); // 1234.00F


The compiler allows it, but it leads to a bug with CTFE of this code:
the assert fails.
(changing `enum` to `auto` moves the evaluation to runtime, and all
works fine)

Reported here: https://issues.dlang.org/show_bug.cgi?id=16471.





Re: CompileTime performance measurement

2016-09-06 Thread Stefan Koch via Digitalmars-d

On Tuesday, 6 September 2016 at 10:42:00 UTC, Martin Nowak wrote:

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:

I recently implemented __ctfeWriteln.


Nice, is it only for your interpreter or can we move 
https://trello.com/c/6nU0lbl2/24-ctfewrite to done? I think 
__ctfeWrite would be a better primitive. And we could actually 
consider to specialize std.stdio.write* for CTFE.


It's only for the current engine and only for Strings!
See: https://github.com/dlang/druntime/pull/1643
and https://github.com/dlang/dmd/pull/6101
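
Assuming the string-only interface described above (the exact signature 
lives in the linked druntime PR, so this is a speculative sketch that 
needs the new interpreter work to run):

```d
// Hypothetical usage sketch of __ctfeWriteln; compiles only with the
// linked PRs applied, and only string arguments are supported.
int triple(int n)
{
    if (__ctfe)
        __ctfeWriteln("triple() evaluated at compile time");
    return 3 * n;
}

enum x = triple(14); // message would be printed by the compiler during CTFE
```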


Re: DIP1001: DoExpression

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 17:23, Steven Schveighoffer wrote:

On 9/6/16 10:17 AM, Timon Gehr wrote:

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Let's first stipulate that z cannot be void here, as the context is you
want to evaluate to the result of some expression.

But why wouldn't a tuple of type (void, void, T) be valid?


Because 'void' is special. (Language design error imported from C.)

struct S{
    void x; // does not work.
}

There can be no field (or variables) of type 'void'. (void,void,T) has 
two fields of type 'void'.


Just fixing the limitations is also not really possible, as e.g. void* 
and void[] exploit that 'void' is special and have a non-compositional 
meaning.



It could also auto-reduce to just (T).

-Steve


That would fix the limitation, but it is also quite surprising behaviour.


Re: DIP1001: DoExpression

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 1:01 PM, Timon Gehr wrote:

On 06.09.2016 17:23, Steven Schveighoffer wrote:

On 9/6/16 10:17 AM, Timon Gehr wrote:

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Let's first stipulate that z cannot be void here, as the context is you
want to evaluate to the result of some expression.

But why wouldn't a tuple of type (void, void, T) be valid?


Because 'void' is special. (Language design error imported from C.)


builtin tuples can be special too...

-Steve


Re: Taking pipeline processing to the next level

2016-09-06 Thread Wyatt via Digitalmars-d

On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote:


A central premise of performance-oriented programming which I've
employed my entire career, is "where there is one, there is 
probably

many", and if you do something to one, you should do it to many.


From a conceptual standpoint, this sounds like the sort of thing 
array languages like APL and J thrive on, so there's solid 
precedent for the concept.  I might suggest looking into 
optimising compilers in that space for inspiration and such; 
APEX, for example: http://www.snakeisland.com/apexup.htm


Of course, this comes with the caveat that this is (still!) some 
relatively heavily-academic stuff.  And I'm not sure to what 
extent that can help mitigate the problem of relaxing type 
requirements such that you can e.g. efficiently ,/⍉ your 4 2⍴"LR" 
vector for SIMD on modern processors.


-Wyatt


Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d

Dne 6.9.2016 v 17:26 Steven Schveighoffer via Digitalmars-d napsal(a):


On 9/6/16 11:00 AM, Sai wrote:

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I almost
always want float output in case of division. And once in a while I bump
into this issue.

I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast(float)(a) / b;


auto c = float(a) / b;

-Steve


Another solution is to use your own function for the div operation:

auto c = div(a,b);
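A minimal sketch of such a helper (illustrative; the name div and the example values are not from the original post):

```d
import std.stdio;

// Taking float parameters forces floating-point division,
// even when the call site passes ints.
float div(float a, float b)
{
    return a / b;
}

void main()
{
    int a = 3, b = 4;
    auto c = div(a, b); // the ints convert implicitly to float
    writeln(c);         // prints 0.75
}
```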


Re: Promotion rules ... why no float?

2016-09-06 Thread Andrea Fontana via Digitalmars-d

On Tuesday, 6 September 2016 at 15:00:48 UTC, Sai wrote:

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so 
I almost always want float output in case of division. And once 
in a while I bump into this issue.


I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast(float)(a) / b;
float c = 1f * a / b;


Any less verbose ways to do it?

Another solution I am thinking is to write a user defined 
integer type with an overloaded division to return a float 
instead and use it everywhere in place of integers. I am 
curious how this will work out.


Exotic way:

import std.stdio;

float div(float a, float b) { return a / b; }

void main()
{
    auto c = 3.div(4);
    writeln(c);
}





Re: Promotion rules ... why no float?

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 11:00 AM, Sai wrote:

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I almost
always want float output in case of division. And once in a while I bump
into this issue.

I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast(float)(a) / b;


auto c = float(a) / b;

-Steve


Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d

On 6.9.2016 at 17:00, Sai via Digitalmars-d wrote:


Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I 
almost always want float output in case of division. And once in a 
while I bump into this issue.


I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast(float)(a) / b;
float c = 1f * a / b;


Any less verbose ways to do it?

Another solution I am thinking is to write a user defined integer type 
with an overloaded division to return a float instead and use it 
everywhere in place of integers. I am curious how this will work out.
Because of alias this it works quite well for me in many cases. However, 
one unpleasant situation is with method parameters.
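Such a wrapper might look roughly like this (an illustrative sketch, not the poster's actual code; FInt is a made-up name):

```d
struct FInt
{
    int value;
    alias value this; // behaves like an int in most contexts

    // Division always yields a float, regardless of operand types.
    float opBinary(string op : "/")(int rhs) const
    {
        return float(value) / rhs;
    }
}

void main()
{
    FInt a = FInt(3);
    float c = a / 4; // calls opBinary, no truncation
    assert(c == 0.75f);
}
```

The alias this lets FInt be passed where an int is expected, but the reverse conversion (an int argument for an FInt parameter) does not happen implicitly, which matches the method-parameter problem mentioned above.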


Re: DIP1001: DoExpression

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 10:17 AM, Timon Gehr wrote:

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Let's first stipulate that z cannot be void here, as the context is you 
want to evaluate to the result of some expression.


But why wouldn't a tuple of type (void, void, T) be valid? It could also 
auto-reduce to just (T).


-Steve



Re: Promotion rules ... why no float?

2016-09-06 Thread Sai via Digitalmars-d

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I 
almost always want float output in case of division. And once in 
a while I bump into this issue.


I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast(float)(a) / b;
float c = 1f * a / b;


Any less verbose ways to do it?

Another solution I am thinking is to write a user defined integer 
type with an overloaded division to return a float instead and 
use it everywhere in place of integers. I am curious how this 
will work out.





Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d

On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson wrote:

Suggestions?


Forgot to mention in OP that I had tried this( void* pArg = null 
); to no avail:


mutex.d(19): Deprecation: constructor mutex.Mutex.this all 
parameters have default arguments, but structs cannot have 
default constructors.


It's deprecated and the constructor doesn't get called. So no 
egregious sploits for me.


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 00:26, finalpatch via Digitalmars-d wrote:
> On Tuesday, 6 September 2016 at 14:21:01 UTC, finalpatch wrote:
>>
>> Then some template magic will figure out the LCM of the 2 kernels' pixel
>> width is 3*4=12 and therefore they are fused together into a composite
>> kernel of pixel width 12.  The above line compiles down into a single
>> function invocation, with a main loop that reads the source buffer in 4
>> pixels step, call MySimpleKernel 3 times, then call AnotherKernel 4 times.
>
>
> Correction:
> with a main loop that reads the source buffer in *12* pixels step, call
> MySimpleKernel 3 times, then call AnotherKernel 4 times.

These are interesting thoughts. What did you do when buffers weren't
a multiple of the kernel width?


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 14:26:22 UTC, finalpatch wrote:

On Tuesday, 6 September 2016 at 14:21:01 UTC, finalpatch wrote:
Then some template magic will figure out the LCM of the 2 
kernels' pixel width is 3*4=12 and therefore they are fused 
together into a composite kernel of pixel width 12.  The above 
line compiles down into a single function invocation, with a 
main loop that reads the source buffer in 4 pixels step, call 
MySimpleKernel 3 times, then call AnotherKernel 4 times.


Correction:
with a main loop that reads the source buffer in *12* pixels 
step, call MySimpleKernel 3 times, then call AnotherKernel 4 
times.


And of course the key to the speed is that all function calls get 
inlined by the compiler.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
On Tuesday, 6 September 2016 at 14:27:49 UTC, Lodovico Giaretta 
wrote:
That's because it doesn't initialize (with static opCall) the 
fields of SomeOtherClass, right? I guess that could be solved 
once and for all with some template magic of the binding system.


Correct for the first part. The second part... not so much. Being 
all value types, there's nothing stopping you instantiating the 
example Mutex on the stack in a function in D - and no way of 
enforcing the user to go through a custom construction path 
either.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d

On Tuesday, 6 September 2016 at 14:10:43 UTC, Ethan Watson wrote:
@disable this() will hide the static opCall and the compiler 
will throw an error.


Yes, I realized that. My bad.

As @disable this is not actually defining a ctor, it should not 
be signaled as hiding the opCall. To me, this looks like an 
oversight in the frontend that should be fixed.


static opCall doesn't work for the SomeOtherClass example 
listed in OP.


That's because it doesn't initialize (with static opCall) the 
fields of SomeOtherClass, right? I guess that could be solved 
once and for all with some template magic of the binding system.


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 14:21:01 UTC, finalpatch wrote:
Then some template magic will figure out the LCM of the 2 
kernels' pixel width is 3*4=12 and therefore they are fused 
together into a composite kernel of pixel width 12.  The above 
line compiles down into a single function invocation, with a 
main loop that reads the source buffer in 4 pixels step, call 
MySimpleKernel 3 times, then call AnotherKernel 4 times.


Correction:
with a main loop that reads the source buffer in *12* pixels 
step, call MySimpleKernel 3 times, then call AnotherKernel 4 
times.




Re: Return type deduction

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/5/16 5:59 AM, Andrea Fontana wrote:

I asked this some time (years?) ago. Time for a second try :)

Consider this:

---

T simple(T)() { return T.init; }


void main()
{
int test = simple!int(); // it compiles
int test2 = simple();// it doesn't


  auto test3 = simple!int();

Granted, you are still typing "auto", but only specify the type once.

-Steve


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 03:08:43 UTC, Manu wrote:

I still stand by this, and I listed some reasons above.
Auto-vectorisation is a nice opportunistic optimisation, but it 
can't
be relied on. The key reason is that scalar arithmetic 
semantics are

different than vector semantics, and auto-vectorisation tends to
produce a whole bunch of extra junk code to carefully (usually
pointlessly) preserve the scalar semantics that it's trying to
vectorise. This will never end well.
But the vectorisation isn't the interesting problem here, I'm 
really
just interested in how to work these batch-processing functions 
into
our nice modern pipeline statements without placing an 
unreasonable
burden on the end-user, who shouldn't be expected to go out of 
their

way. If they even have to start manually chunking, I think we've
already lost; they won't know optimal chunk-sizes, or anything 
about

alignment boundaries, cache, etc.


In a previous job I had successfully created a small C++ library 
to perform pipelined SIMD image processing. Not sure how relevant 
it is, but I thought I'd share the design here; perhaps it'll give 
you guys some ideas.


Basically the users of this library only need to write simple 
kernel classes, something like this:


// A kernel that processes 4 pixels at a time
struct MySimpleKernel : Kernel<4>
{
// Tell the library the input and output type
using InputVector  = Vector<__m128, 1>;
using OutputVector = Vector<__m128, 2>;

    template <typename T>
    OutputVector apply(const T& src)
{
// T will be deduced to Vector<__m128, 1>
// which is an array of one __m128 element
// Awesome SIMD code goes here...
// And return the output vector
return OutputVector(...);
}
};

Of course the InputVector and OutputVector do not have to be 
__m128, they can totally be other types like int or float.


The cool thing is kernels can be chained together with >> 
operators.


So assume we have another kernel:

struct AnotherKernel : Kernel<3>
{
...
}

Then we can create a processing pipeline with these 2 kernels:

InputBuffer(...) >> MySimpleKernel() >> AnotherKernel() >> 
OutputBuffer(...);


Then some template magic will figure out the LCM of the 2 
kernels' pixel width is 3*4=12 and therefore they are fused 
together into a composite kernel of pixel width 12.  The above 
line compiles down into a single function invocation, with a main 
loop that reads the source buffer in 4 pixels step, call 
MySimpleKernel 3 times, then call AnotherKernel 4 times.


Any number of kernels can be chained together in this way, as 
long as your compiler doesn't explode.


At that time, my benchmarks showed pipelines generated in this 
way often rival the speed of hand-tuned loops.
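A minimal D sketch of just the step-size arithmetic described above (illustrative only; the real library fuses actual SIMD kernels at compile time):

```d
import std.numeric : gcd;
import std.stdio;

// Least common multiple of two batch widths.
size_t lcm(size_t a, size_t b) { return a / gcd(a, b) * b; }

void main()
{
    // Two kernels with pixel widths 4 and 3 fuse into a composite
    // kernel of width lcm(4, 3) = 12: per outer loop step, the
    // first kernel runs 12/4 = 3 times and the second 12/3 = 4 times.
    enum w1 = 4, w2 = 3;
    auto step = lcm(w1, w2);
    assert(step == 12);
    writeln("calls per step: ", step / w1, " and ", step / w2);
}
```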




Re: DIP1001: DoExpression

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Re: DIP1001: DoExpression

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/3/16 12:03 PM, Jonathan M Davis via Digitalmars-d wrote:

On Saturday, September 03, 2016 14:42:34 Cauterite via Digitalmars-d wrote:

On Saturday, 3 September 2016 at 14:25:49 UTC, rikki cattermole wrote:

I propose a slight change:
do(x, y, return z)


Hmm, I suppose I should mention one other motivation behind this
DIP:

I really like to avoid using the 'return' keyword inside
expressions, because I find it visually confusing - hear me out
here -
When you're reading a function and trying to understand its
control flow, one of the main elements you're searching for is
all the places the function can return from.
If the code has a lot of anonymous functions with return
statements this can really slow down the process as you have to
more carefully inspect every return to see if it's a 'real'
return or inside an anonymous function.

Also, in case it wasn't obvious, the do() syntax was inspired by
Clojure:
http://clojure.github.io/clojure/clojure.core-api.html#clojure.core/do


So, instead of having the return statement which everyone knows to look for
and is easy to grep for, you want to add a way to return _without_ a return
statement?


No, the amendment is to show that z is the "return" of the do 
expression. It doesn't return from the function the do expression is in.


I also think that a) we shouldn't have a requirement, or support for, 
return inside the expression -- return is not actually an expression, 
it's a statement. This would be very confusing. b) I like the idea of 
the semicolon to show that the last expression is different.


I'm not sure I agree with the general principle of the DIP though. I've 
never liked comma expressions, and this seems like a waste of syntax. 
Won't tuples suffice here when they take over the syntax? e.g. (x, y, 
z)[$-1]


-Steve


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
On Tuesday, 6 September 2016 at 13:57:27 UTC, Lodovico Giaretta 
wrote:
Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


static opCall doesn't work for the SomeOtherClass example listed 
in OP. @disable this() will hide the static opCall and the 
compiler will throw an error.


Somewhat related: googling "factory method dlang" doesn't provide 
any kind of clarity on what exactly a factory method is. 
Documentation for factory methods/functions could probably be 
improved on this front.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d
On Tuesday, 6 September 2016 at 13:57:27 UTC, Lodovico Giaretta 
wrote:
On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson 
wrote:

[...]
Suggestions?


Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


It's not as good-looking as a true default ctor, but it doesn't 
provide any way to introduce bugs and it's not that bad (just a 
couple key strokes).


Correcting my answer. The following code compiles fine:

struct S
{
static S opCall()
{
S res = void;
// call C++ ctor
return res;
}
}

void main()
{
S s = S();
}

But it introduces the possibility of using the default ctor 
inadvertently.

Sadly, the following does not compile:

struct S
{
@disable this();
static S opCall()
{
S res = void;
// call C++ ctor
return res;
}
}

Making this compile would solve your issues.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d

On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson wrote:

[...]
Suggestions?


Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


It's not as good-looking as a true default ctor, but it doesn't 
provide any way to introduce bugs and it's not that bad (just a 
couple key strokes).


Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
Alright, so now I've definitely come up across something with 
Binderoo that has no easy solution.


For the sake of this example, I'm going to use the class I'm 
binary-matching with a C++ class and importing functionality with 
C++ function pointers to create a 100% functional match - our 
Mutex class. It doesn't have to be a mutex, it just needs to be 
any C++ class where a default constructor is non-trivial.


In C++, it looks much like what you'd expect:

class Mutex
{
public:
  Mutex();
  ~Mutex();
  void lock();
  bool tryLock();
  void unlock();

private:
  CRITICAL_SECTION  m_criticalSection;
};

Cool. Those functions call the exact library functions you'd 
expect, the constructor does an InitializeCriticalSection and the 
destructor does a DeleteCriticalSection.


Now, with Binderoo aiming to provide complete C++ matching to the 
point where it doesn't matter whether a class was allocated in 
C++ or D, this means I've chosen to make every C++-matching class 
a value type rather than a reference type. The reasoning is 
pretty simple:


class SomeOtherClass
{
private:
  SomeVitalObject m_object;
  Mutex   m_mutex;
};

This is a pretty common pattern. Other C++ classes will embed 
mutex instances inside them. A reference type for matching in 
this case is right out of the question. Which then leads to a 
major conundrum - default constructing this object in D.


D structs have initialisers, but you're only allowed constructors 
if you pass arguments. With a Binderoo matching struct 
declaration, it would basically look like this:


struct Mutex
{
  @BindConstructor void __ctor();
  @BindDestructor void __dtor();

  @BindMethod void lock();
  @BindMethod bool tryLock();
  @BindMethod void unlock();

  private CRITICAL_SECTION m_criticalSection;
}

After mixin expansion, it would come out looking something 
like this:


struct Mutex
{
  pragma( inline ) this() { __methodTable.function0(); }
  pragma( inline ) ~this() { __methodTable.function1(); }

  pragma( inline ) void lock() { __methodTable.function2(); }
  pragma( inline ) bool tryLock() { return 
__methodTable.function3(); }

  pragma( inline ) void unlock() { __methodTable.function4(); }

  private CRITICAL_SECTION m_criticalSection;
}

(Imagine __methodTable is a gshared object with the relevant 
function pointers imported from C++.)


Of course, it won't compile. this() is not allowed for obvious 
reasons. But in this case, we need to call a corresponding 
non-trivial constructor in C++ code to get the functionality 
match.


Of course, given the simplicity of the class, I don't need to 
import C++ code to provide exact functionality at all. But I 
still need to call InitializeCriticalSection somehow whenever 
it's instantiated anywhere. This pattern of non-trivial default 
constructors is certainly not limited to mutexes, not in our 
codebase or wider C++ practices at all.


So now I'm in a bind. This is one struct I need to construct 
uniquely every time. And I also need to keep the usability up to 
not require calling some other function since this is matching a 
C++ class's functionality, including its ability to instantiate 
anywhere.


Suggestions?


Re: Valid to assign to field of struct in union?

2016-09-06 Thread Johan Engelen via Digitalmars-d

On Tuesday, 6 September 2016 at 12:56:24 UTC, Johan Engelen wrote:


The compiler allows it, but it leads to a bug with CTFE of this 
code: the assert fails.


Before someone smart tries it, yes the code works with LDC, but 
wait... swap the order of `first` and `second` in the union, and 
BOOM!
Internally, CTFE of the code leads to a corrupt union initializer 
array. LDC and DMD do things a little differently in codegen. 
Oversimplified: LDC will use the first member of the union, DMD 
the last.


Valid to assign to field of struct in union?

2016-09-06 Thread Johan Engelen via Digitalmars-d

Hi all,
  I have a question about the validity of this code:
```
void main()
{
struct A {
int i;
}
struct S
{
union U
{
A first;
A second;
}
U u;

this(A val)
{
u.second = val;
assign(val);
}

void assign(A val)
{
u.first.i = val.i+1;
}
}
enum a = S(A(1));

assert(a.u.first.i == 2);
}
```

My question is: is it allowed to assign to a field of a struct 
inside a union, without there having been an assignment to the 
(full) struct before?


The compiler allows it, but it leads to a bug with CTFE of this 
code: the assert fails.
(changing `enum` to `auto` moves the evaluation to runtime, and 
all works fine)


Reported here: https://issues.dlang.org/show_bug.cgi?id=16471.



Re: Taking pipeline processing to the next level

2016-09-06 Thread Jerry via Digitalmars-d

On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote:

I mostly code like this now:
  data.map!(x => transform(x)).copy(output);


So you basically want to make the lazy computation eager and store 
the result?


data.map!(x => transform(x)).array

Will allocate a new array and fill it with the result of map.
And if you want to recycle the buffer I guess writing a buffer 
function would be trivial.
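A minimal sketch of such buffer recycling (illustrative names; this just eagerly evaluates the lazy map into a preallocated array):

```d
import std.algorithm : copy, map;

void main()
{
    auto data = [1, 2, 3, 4];
    auto buf = new int[](data.length); // reusable output buffer

    // Eagerly evaluate the lazy map into the existing buffer.
    data.map!(x => x * 10).copy(buf);
    assert(buf == [10, 20, 30, 40]);

    // The same buffer can be refilled on the next pass,
    // avoiding the allocation that .array would perform.
    data.map!(x => x + 1).copy(buf);
    assert(buf == [2, 3, 4, 5]);
}
```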






Re: Why is this?

2016-09-06 Thread Manu via Digitalmars-d
On 6 September 2016 at 21:28, Timon Gehr via Digitalmars-d wrote:
> On 06.09.2016 08:07, Manu via Digitalmars-d wrote:
>>
>> I have weird thing:
>>
>> template E(F){
>> enum E {
>> K = F(1)
>> }
>> }
>>
>> struct S(F = float, alias e_ = E!double.K) {}
>> S!float x; // Error: E!double.K is used as a type
>>
>> alias T = E!double.K;
>> struct S2(F = float, alias e_ = T) {}
>> S2!float y; // alias makes it okay...
>>
>> struct S3(F = float, alias e_ = (E!double.K)) {}
>> S3!float z; // just putting parens make it okay as well... wat!?
>>
>>
>> This can't be right... right?
>>
>> No problem if E is not a template.
>>
>
> Bug.

https://issues.dlang.org/show_bug.cgi?id=16472


Re: Why is this?

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 08:07, Manu via Digitalmars-d wrote:

I have weird thing:

template E(F){
enum E {
K = F(1)
}
}

struct S(F = float, alias e_ = E!double.K) {}
S!float x; // Error: E!double.K is used as a type

alias T = E!double.K;
struct S2(F = float, alias e_ = T) {}
S2!float y; // alias makes it okay...

struct S3(F = float, alias e_ = (E!double.K)) {}
S3!float z; // just putting parens make it okay as well... wat!?


This can't be right... right?

No problem if E is not a template.



Bug.


Re: CompileTime performance measurement

2016-09-06 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 06, 2016 10:46:11 Martin Nowak via Digitalmars-d wrote:
> On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:
> > Hi Guys.
> >
> > I recently implemented __ctfeWriteln.
> > Based on that experience I have now implemented another pseudo
> > function called __ctfeTicksMs.
> > That evaluates to a uint representing the number of
> > milliseconds elapsed between the start of dmd and the time of
> > semantic evaluation of this expression.
>
> For bigger CTFE programs it might be helpful.
> Milliseconds are a fairly low resolution; I would think hnsecs
> or so would make a better unit. Using core.time.TickDuration for
> that would make sense.

If you're going to do that, use core.time.Duration. TickDuration is slated to
be deprecated once the functionality in Phobos that uses it has been
deprecated. Duration replaces its functionality as a duration, and MonoTime
replaces its functionality as a timestamp of the monotonic clock.
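A minimal sketch of the recommended pattern (illustrative, using only core.time):

```d
import core.time : Duration, MonoTime;
import std.stdio;

void main()
{
    // MonoTime: a timestamp from the monotonic clock.
    auto start = MonoTime.currTime;

    // ... work to be measured ...

    // Subtracting two MonoTime values yields a Duration.
    Duration elapsed = MonoTime.currTime - start;
    writeln(elapsed.total!"msecs", " ms");
}
```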

- Jonathan M Davis



Re: C# 7 Features - Tuples

2016-09-06 Thread Nick Treleaven via Digitalmars-d
On Monday, 5 September 2016 at 15:50:31 UTC, Lodovico Giaretta 
wrote:
On Monday, 5 September 2016 at 15:43:43 UTC, Nick Treleaven 
wrote:

We can already (almost) do that:


import std.stdio, std.typecons;

void unpack(T...)(Tuple!T tup, out T decls)
{
static if (tup.length > 0)
{
decls[0] = tup[0];
tuple(tup[1..$]).unpack(decls[1..$]);
}
}

void main()
{
auto t = tuple(1, "a", 3.0);
int i;
string s;
double d;
t.unpack(i, s, d);
writeln(i);
writeln(s);
writeln(d);
}


The main benefit of supporting tuple syntax is unpacking into new 
declarations (writing Tuple!(...) or tuple!(...) isn't that 
significant IMO). I was suggesting that out argument 
*declarations* actually provide this and are a more general 
feature.


Re: D Meetup in Hamburg?

2016-09-06 Thread Dgame via Digitalmars-d
On Tuesday, 6 September 2016 at 09:42:12 UTC, Martin Tschierschke 
wrote:

Hi All,
anybody interested to meet in Hamburg, Germany?

Time and location will be found!

Regards mt.


Yes, I would be interested.


Re: dlang-vscode

2016-09-06 Thread Andrej Mitrovic via Digitalmars-d
On 9/6/16, John Colvin via Digitalmars-d wrote:
> I've used it a bit. See also:

VS code is pretty solid!

I'm too used to Sublime to start using it now, but the fact it's
open-source is a huge plus. Some of its addons are pretty great, for
example you can run an opengl shader and have its output display in
the editor.


Re: CompileTime performance measurement

2016-09-06 Thread Martin Nowak via Digitalmars-d

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:

Hi Guys.

I recently implemented __ctfeWriteln.
Based on that experience I have now implemented another pseudo 
function called __ctfeTicksMs.
That evaluates to a uint representing the number of 
milliseconds elapsed between the start of dmd and the time of 
semantic evaluation of this expression.


For bigger CTFE programs it might be helpful.
Milliseconds are a fairly low resolution; I would think hnsecs 
or so would make a better unit. Using core.time.TickDuration for 
that would make sense.


Re: CompileTime performance measurement

2016-09-06 Thread Martin Nowak via Digitalmars-d

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:

I recently implemented __ctfeWriteln.


Nice, is it only for your interpreter or can we move 
https://trello.com/c/6nU0lbl2/24-ctfewrite to done? I think 
__ctfeWrite would be a better primitive. And we could actually 
consider to specialize std.stdio.write* for CTFE.


Re: CompileTime performance measurement

2016-09-06 Thread Martin Nowak via Digitalmars-d
On Sunday, 4 September 2016 at 00:08:14 UTC, David Nadlinger 
wrote:

Please don't. This makes CTFE indeterministic.


Well we already have __TIMESTAMP__, though I think it doesn't 
change during compilation.




Re: CompileTime performance measurement

2016-09-06 Thread Martin Tschierschke via Digitalmars-d

On Sunday, 4 September 2016 at 19:36:16 UTC, Stefan Koch wrote:
On Sunday, 4 September 2016 at 12:38:05 UTC, Andrei Alexandrescu wrote:

On 9/4/16 6:14 AM, Stefan Koch wrote:
writeln and __ctfeWriteln are to be regarded as completely 
different things.
__ctfeWriteln is a debugging tool only!
It should not be used in any production code.


Well I'm not sure how that would be reasonably enforced. -- 
Andrei


One could enforce it by defining it inside a version or debug 
block.
The reason I do not want to see this in production code is as 
follows:


In the engine I am working on, communication between it and the 
rest of dmd is kept to a minimum, because :


"The new CTFE engine abstracts away everything into bytecode,
there is no guarantee that the bytecode-evaluator is run in the 
same process or even on the same machine."


An alternative might be to save your CTFE values in a static 
array and output them on startup of the compiled program. The 
same idea is used in vibe.d to make caching of template 
evaluation possible. See: http://code.dlang.org/packages/diet-ng 
(experimental HTML template caching)





Simplifying conversion and formatting code in Phobos

2016-09-06 Thread Andrei Alexandrescu via Digitalmars-d
We've learned a lot about good D idioms since std.conv was initiated. 
And of course it was always the case that writing libraries is quite 
different from writing code to be used within one sole application. 
Consider:


* to!T(x) must work for virtually all types T and typeof(x) that are 
sensible. The natural way of doing so is to have several 
semi-specialized overloads.


* Along the way it makes sense to delegate work from the user-level 
convenient syntax to a more explicit but less convenient syntax. Hence 
the necessity of toImpl.


* The need to "please format all arguments as a string" was a natural 
necessity e.g. as a second argument to assert or enforce. Hence the 
necessity of text(x, y, z) as the concatenation of to!string(x), 
to!string(y), and to!string(z).


* FormattedWrite was necessary for writefln and related.

* All of these have similarities and distinctions so they may use one 
another opportunistically. The alternative is to write the same code in 
different parts for the sake of artificially separating things that in 
fact are related.


The drawback of this is taking this in as a reader and maintainer. We 
have the 'text' template which calls the 'textImpl' template which calls 
the 'to' template which calls the 'toImpl' template which calls the 
'parse' template which calls the 'FormattedWrite' template which calls 
the 'to' template. Not easy to find where the work is ultimately done.
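For orientation, the user-level entry points under discussion behave like this (a hedged sketch of usage, not of the internals):

```d
import std.conv : text, to;

void main()
{
    // to!T(x): general conversion between sensible type pairs.
    assert(to!string(42) == "42");
    assert(to!int("123") == 123);

    // text(x, y, z): concatenation of to!string of each argument,
    // handy e.g. as a message argument to assert or enforce.
    assert(text("x = ", 42, ", y = ", 1.5) == "x = 42, y = 1.5");
}
```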


It is a challenge to find the right balance among everything. But I'm 
sure we can do better than what we have now because of the experience 
we've gained.


If anyone would like to take a fresh look at simplifying the code 
involved, it would be quite interesting. The metrics here are simpler 
code, fewer names, simpler documentation (both internal and external), 
and less code.



Andrei


D Meetup in Hamburg?

2016-09-06 Thread Martin Tschierschke via Digitalmars-d

Hi All,
anybody interested to meet in Hamburg, Germany?

Time and location will be found!

Regards mt.


Re: dlang-vscode

2016-09-06 Thread mogu via Digitalmars-d

On Tuesday, 6 September 2016 at 05:38:28 UTC, Manu wrote:
On 6 September 2016 at 14:22, Daniel Kozak via Digitalmars-d wrote:

On 6.9.2016 at 03:41, Manu via Digitalmars-d wrote:


On 6 September 2016 at 09:51, John Colvin via Digitalmars-d wrote:


On Monday, 5 September 2016 at 22:17:59 UTC, Andrei 
Alexandrescu wrote:


Google Alerts just found this:

https://marketplace.visualstudio.com/items?itemName=dlang-vscode.dlang

Is anyone here familiar with that work?


Andrei



I've used it a bit. See also:


https://marketplace.visualstudio.com/search?term=Dlang&target=VSCode&sortBy=Relevance


I used it, but then switched to code-d which seemed more 
mature (see

John's link above).
The problem with code-d is its dependency workspace-d has a 
really
painful installation process. It needs to be available in 
d-apt.


It is OK on ArchLinux ;)

pacaur -S workspace-d

or

packer -S workspace-d

or

yaourt -S workspace-d


Yup, it works well on my arch machine, but it doesn't work on 
my ubuntu machines at work. Ubuntu is the most popular distro; 
it really needs to work well ;)


Clone and use workspaced-installer in that repo; it worked for me 
on Ubuntu 16.04 LTS.


Re: Usability of D for Visually Impaired Users

2016-09-06 Thread Chris via Digitalmars-d

On Monday, 5 September 2016 at 20:59:46 UTC, Walter Bright wrote:

On 9/5/2016 2:14 AM, Chris wrote:
A blind user I worked with used D for a term paper and he could 
find his way around on dlang.org. So it seems to be pretty ok 
already. We should only be careful with new stuff like language 
tours and tutorials.


This is good to hear. But with constant changes to dlang.org, 
it can be very easy to slip away from that, especially with all 
the pressure to "modernize" the look-and-feel with crap. We'll 
need constant vigilance!


I agree, the webpage should comply with accessibility rules 
consistently and not fall foul of flashy design when adding to 
the homepage.


As regards zooming: most browsers handle zoom very well 
(Ctrl+'+') these days. Visually impaired users often use the zoom 
that comes with their screen reading software or the built-in, 
OS-specific zoom (cf. Apple's VoiceOver; Windows also has a zoom 
feature as far as I know).


Re: Quality of errors in DMD

2016-09-06 Thread Laeeth Isharc via Digitalmars-d
On Monday, 5 September 2016 at 15:55:16 UTC, Dominikus Dittes 
Scherkl wrote:
On Sunday, 4 September 2016 at 20:14:37 UTC, Walter Bright 
wrote:

On 9/4/2016 10:56 AM, David Nadlinger wrote:
The bug report I need is the assert location, and a test case 
that causes it. Users do not need to supply any other 
information.


So, if we assume the user cannot debug when he hits a compiler 
bug, I as a compiler developer would at least like to receive a 
report containing a simple number, to identify which of the 830 
assert(0)'s in the code that I deemed to be unreachable was 
actually hit.


Because even if I don't receive a reduced testcase, I have a 
strong hint what assumption I should re-think, now that I know 
that it is effectively NOT unreachable.


Could we agree so far?

So how hard would it be to give each assert(0) a number and 
print out a message:

"Compiler bug: assert #xxx was hit, please send a bug report"
?
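
A minimal sketch of what such a numbered assert could look like 
(`iceAssert` and the message format are hypothetical, not actual 
DMD code):

```d
import std.stdio;

// Hypothetical helper: tag each "unreachable" assertion with a
// unique number so a crash report only needs to carry that number.
void iceAssert(bool cond, int id,
               string file = __FILE__, size_t line = __LINE__)
{
    if (!cond)
    {
        stderr.writefln("Compiler bug: assert #%s was hit at %s:%s, " ~
                        "please send a bug report", id, file, line);
        assert(0);
    }
}

void main()
{
    iceAssert(1 + 1 == 2, 830); // passes, prints nothing
}
```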


I wonder what people think of opt-in automatic statistics 
collection.  Not a substitute for a bug report, as one doesn't 
want source code being shipped off, but suppose a central server 
at dlang.org tracked internal compiler errors for those who have 
opted in. At least it would become obvious more quickly which 
parts of the code seem to be asserting.




Re: ADL

2016-09-06 Thread Guillaume Boucher via Digitalmars-d

On Monday, 5 September 2016 at 23:50:33 UTC, Timon Gehr wrote:
One hacky way is to provide a mixin template to create a 
wrapper type within each module that needs it, with 
std.typecons.Proxy. Proxy picks up UFCS functions in addition 
to member functions and turns them into member functions. But 
this leads to a lot of template bloat, because callers that 
share the same added UFCS functions don't actually share the 
instantiation. Also, it only works one level deep and 
automatically generated Wrapper types are generally prone to be 
somewhat brittle.


I don't think cloning a type just to add functionality can 
possibly be the right way.


A C++-style way of customizing behavior is using traits. Those 
traits would be a compile-time argument to the algorithm function.  
Instead of arg.addone() one would use trait.addone(arg).  It is 
not hard to write a proxy that merges the trait and arg into one 
entity, but this would have to be done by the callee.
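
A minimal sketch of that idea in D, assuming hypothetical names 
(`DefaultTrait`, `StepTwoTrait`, `incremented`, `addone`) rather 
than any existing library:

```d
import std.stdio;

// Default trait: implements addone using code visible in this module.
struct DefaultTrait
{
    static int addone(int x) { return x + 1; }
}

// The algorithm takes the trait as a compile-time parameter, so
// callers can customize behavior without wrapping the argument
// in a new type.
int incremented(T, Trait = DefaultTrait)(T arg)
{
    return Trait.addone(arg);
}

// A caller-supplied trait with a different implementation.
struct StepTwoTrait
{
    static int addone(int x) { return x + 2; }
}

void main()
{
    assert(incremented(41) == 42);                     // default trait
    assert(incremented!(int, StepTwoTrait)(40) == 42); // custom trait
    writeln("ok");
}
```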


The default trait would be type.addone_trait if it exists, or 
else some default trait that uses all available free functions and 
member functions from the module of the type.  In most cases 
this is enough, but it also enables adding traits to existing 
types and providing different implementations of the same traits.


This gets really bloaty in C++, and that's why ADL is usually 
preferred, but D has the capability to reduce the overhead to a 
minimum.


It doesn't quite make it possible to separate the implementations 
of types, algorithms and traits (UFCS) into different modules 
such that they don't know about each other.  Either the user has 
to specify the trait at each call, or either the type's module or 
the algorithm's module has to import the traits.


What I call traits is very similar to type classes in other 
languages, where (among other features) the traits are 
automatically attached to the type.  (Type classes are also what 
C++ concepts originally wanted to be.)


Re: Promotion rules ... why no float?

2016-09-06 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 06, 2016 07:26:37 Andrea Fontana via Digitalmars-d 
wrote:
> On Tuesday, 6 September 2016 at 07:04:24 UTC, Sai wrote:
> > Consider this:
> >
> > import std.stdio;
> > void main()
> > {
> >
> > byte a = 6, b = 7;
> > auto c = a + b;
> > auto d = a / b;
> > writefln("%s, %s", typeof(c).stringof, c);
> > writefln("%s, %s", typeof(d).stringof, d);
> >
> > }
> >
> > Output :
> > int, 13
> > int, 0
> >
> > I really wish d would get promoted to a float. Besides C
> > compatibility, any reason why d got promoted only to int even
> > at the risk of serious bugs and loss of precision?
> >
> > I know I could have typed "auto a = 6.0" instead, but still it
> > feels like the promotion rules are half-baked.
>
> Integer division and modulo are not bugs.

Indeed. There are a lot of situations where they are exactly what you want.
Floating point adds imprecision to calculations that you often want to
avoid. Also, if I understand correctly, floating point operations are more
expensive than the corresponding integer operations. So, it would be very
bad if dividing integers resulted in a floating point value by default. And
if you _do_ want a floating point value, then you can cast one of the
operands to the floating point type you want, and you'll get floating point
arithmetic. Personally, I find that I rarely need floating point values at
all.
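
For illustration, a small example of the cast mentioned above:

```d
import std.stdio;

void main()
{
    byte a = 6, b = 7;

    auto d = a / b;             // operands promoted to int: integer division
    static assert(is(typeof(d) == int));
    assert(d == 0);

    auto f = cast(float) a / b; // cast one operand to get float division
    static assert(is(typeof(f) == float));
    assert(f > 0.857f && f < 0.858f); // roughly 0.857

    writefln("%s, %s", d, f);
}
```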

Another thing to take into account is that you don't normally want stuff
like arithmetic operations to do type conversion. We already have enough
confusion about byte and short being converted to int when you do arithmetic
on them. Having other types suddenly start changing depending on the
operations involved would just make things worse.

- Jonathan M Davis



Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d


Dne 6.9.2016 v 09:04 Sai via Digitalmars-d napsal(a):

Consider this:

import std.stdio;
void main()
{
byte a = 6, b = 7;
auto c = a + b;
auto d = a / b;
writefln("%s, %s", typeof(c).stringof, c);
writefln("%s, %s", typeof(d).stringof, d);
}

Output :
int, 13
int, 0

I really wish d would get promoted to a float. Besides C compatibility, any 
reason why d got promoted only to int even at the risk of serious bugs 
and loss of precision?


I know I could have typed "auto a = 6.0" instead, but still it feels 
like the promotion rules are half-baked.
No, it is a really important rule. If there were automatic promotion to 
float with auto, it would hurt performance in cases where you want int, 
and it would break existing code.

But maybe in below case it could make more sense:

float d = a / b; // or it could print a warning because there is a high 
probability this is an error


OK, maybe something like a linter could be used to find those places.



Re: Promotion rules ... why no float?

2016-09-06 Thread Andrea Fontana via Digitalmars-d

On Tuesday, 6 September 2016 at 07:04:24 UTC, Sai wrote:

Consider this:

import std.stdio;
void main()
{
byte a = 6, b = 7;
auto c = a + b;
auto d = a / b;
writefln("%s, %s", typeof(c).stringof, c);
writefln("%s, %s", typeof(d).stringof, d);
}

Output :
int, 13
int, 0

I really wish d would get promoted to a float. Besides C 
compatibility, any reason why d got promoted only to int even 
at the risk of serious bugs and loss of precision?


I know I could have typed "auto a = 6.0" instead, but still it 
feels like the promotion rules are half-baked.


Integer division and modulo are not bugs.

Andrea


Re: Promotion rules ... why no float?

2016-09-06 Thread Stefan Koch via Digitalmars-d

On Tuesday, 6 September 2016 at 07:04:24 UTC, Sai wrote:

Consider this:

import std.stdio;
void main()
{
byte a = 6, b = 7;
auto c = a + b;
auto d = a / b;
writefln("%s, %s", typeof(c).stringof, c);
writefln("%s, %s", typeof(d).stringof, d);
}

Output :
int, 13
int, 0

I really wish d would get promoted to a float. Besides C 
compatibility, any reason why d got promoted only to int even 
at the risk of serious bugs and loss of precision?


I know I could have typed "auto a = 6.0" instead, but still it 
feels like the promotion rules are half-baked.


Because implicit conversion to float on every division is bad bad 
bad.


Promotion rules ... why no float?

2016-09-06 Thread Sai via Digitalmars-d

Consider this:

import std.stdio;
void main()
{
byte a = 6, b = 7;
auto c = a + b;
auto d = a / b;
writefln("%s, %s", typeof(c).stringof, c);
writefln("%s, %s", typeof(d).stringof, d);
}

Output :
int, 13
int, 0

I really wish d would get promoted to a float. Besides C 
compatibility, any reason why d got promoted only to int even at 
the risk of serious bugs and loss of precision?


I know I could have typed "auto a = 6.0" instead, but still it 
feels like the promotion rules are half-baked.