Re: Time for 2.067

2015-02-15 Thread Martin Nowak via Digitalmars-d
On Saturday, 31 January 2015 at 02:22:59 UTC, Vladimir Panteleev 
wrote:
I don't know how the .html files that are included in dmd.zip 
are generated. Please try and see if it works for you. If not, 
I can look into it if you open-source your process.


https://github.com/D-Programming-Language/installer/blob/4d57e861a2b15767f5e004b74a4578205dc03e3e/create_dmd_release/create_dmd_release.d#L784

Not sure if anyone is actually using those HTML files.


Re: Time for 2.067

2015-02-05 Thread via Digitalmars-d
On Thursday, 5 February 2015 at 03:00:53 UTC, Andrei Alexandrescu 
wrote:
On 2/2/15 2:42 PM, Ulrich Küttler kuett...@gmail.com wrote:
On Friday, 30 January 2015 at 23:17:09 UTC, Andrei 
Alexandrescu wrote:
Sorry, I thought that was in the bag. Keep current semantics, 
call it chunkBy. Add the key to each group when the predicate is 
unary. Make sure aggregate() works nice with chunkBy().


I might be missing some information here, so please forgive my 
naive question. Your requirements seem contradictory to me.

1. aggregate expects a range of ranges


Probably we need to change that because aggregate should 
integrate seamlessly with chunkBy.


2. you ask chunkBy to return something that is not a range of 
ranges


Yah.


3. you ask chunkBy to play along nicely with aggregate


Yah.

There are certainly ways to make this work. Adding a special 
version of
aggregate comes to mind. However, I fail to see the rationale 
behind this.


Rationale as discussed is that the key value for each group is 
useful information. Returning a range of ranges would waste 
that information, forcing e.g. its recomputation.
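
As an illustration (editorial, not from the thread), here is a minimal 
sketch of the recomputation in question. It uses chunkBy with a binary 
equivalence predicate, so the result is a plain range of ranges with no 
key attached; the parity grouping is made up:

import std.algorithm, std.stdio;

void main()
{
    auto data = [1, 3, 2, 4, 5];
    // A plain range of ranges: no key is attached to each group...
    foreach (g; data.chunkBy!((a, b) => a % 2 == b % 2))
        // ...so to report the key we re-derive it from g.front -- the
        // recomputation that attaching the key would avoid.
        writeln(g.front % 2, ": ", g);
}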




I understand and agree. My suggestion aims to avoid this 
particular waste. See below.


To me the beauty of ranges is the composability of simple 
constructs to create complex behavior. The current chunkBy does 
not need to be changed to add the key to each group when the 
predicate is unary:

 r.map!(pred, "a")
  .chunkBy!("a[0]")
  .map!(inner => tuple(inner.front[0], inner.map!"a[1]"));

So I'd like to know why the above is inferior to a rework of 
chunkBy's implementation. Maybe this is a question for D.learn.


Wouldn't that force recomputation if a more complex expression 
replaced a[0]?


I do not think you ever want to replace a[0] here. In the code 
above the (original) predicate to chunkBy is pred. The idea is to 
evaluate the predicate outside of chunkBy. Create a range of 
tuples from the original range, chunk the range of tuples and 
construct the desired result from the chunked range of tuples.


// create a range of `tuple(pred(a), a)`
r.map!(pred, "a")

// chunk the range of tuples based on the first tuple element;
// this results in a range of ranges of tuples
   .chunkBy!("a[0]")

// convert each inner range of tuples to a tuple of the applied
// predicate and the appropriate range
   .map!(inner => tuple(inner.front[0], inner.map!"a[1]"));

The construction of a range of tuples is not for free. On the 
bright side:


* you only do it when you need it
* if your predicate is that heavy, you might want to precompute 
it anyway
* a modified chunkBy is not exactly free either (and you pay the 
price even if you do not need the key value)


Now I learned that map is very lazy and applies the function 
inside front(). Thus, the above might actually result in multiple 
evaluations of the predicate. Luckily, there is the new cache 
function:


auto chunkByStar(alias pred, Range)(Range r)
{
    return r.map!(pred, "a")
            .cache
            .chunkBy!("a[0]")
            .map!(inner => tuple(inner.front[0], inner.map!"a[1]"));
}
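
As an editorial illustration (not from the thread), a minimal usage 
sketch of the chunkByStar above. It assumes the then-current chunkBy 
semantics (a unary predicate yields a plain range of ranges) and that 
std.algorithm and std.typecons are imported where chunkByStar is 
defined; the word data and the first-letter predicate are made up:

import std.algorithm, std.stdio, std.typecons;

void main()
{
    auto words = ["apple", "avocado", "banana", "blueberry", "cherry"];
    // Each element of the result is tuple(first letter, range of words
    // sharing that letter), so the key never has to be recomputed.
    foreach (group; words.chunkByStar!(w => w[0]))
        writeln(group[0], ": ", group[1]);
}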

My point here is that we can construct, with modest means, a version 
of chunkBy that does not waste the key value. With great power comes 
great flexibility. I wanted to sneak this in as an example, 
because it is not clear what eventual users might actually need.


On the other hand, there is no limit to the special cases we could 
add. aggregate might not be the only function to work with 
chunkBy. And even an aggregate function that takes a tuple of a 
range and something else but only uses the range seems wrong to 
me, given the expressive power D has. The transformation of the 
range is just one map away:


chunkByStar!(...)(r).map!"a[1]".aggregate!max
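
Since aggregate() was only a proposed helper at this point, here is an 
editorial sketch (not from the thread) of the same pipeline spelled out 
with reduce standing in for it; chunkByStar is the function sketched 
above, and the data and predicate are made up:

import std.algorithm, std.stdio, std.typecons;

void main()
{
    auto data = [1, 5, 3, 2, 4, 7];
    // Drop the key with map!"a[1]", then fold each group with max --
    // the job the proposed aggregate!max would do.
    auto maxPerGroup = data.chunkByStar!(a => a % 2)
                           .map!"a[1]"
                           .map!(g => g.reduce!max);
    writeln(maxPerGroup);  // prints [5, 4, 7]
}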

Then again, I might be missing something huge here.



Re: Time for 2.067

2015-02-04 Thread Puneet Goel via Digitalmars-d

On Saturday, 31 January 2015 at 09:14:14 UTC, Martin Nowak wrote:
Can we commit to having stuff done by Feb 15 for a release on 
Mar 1? -- Andrei


Sounds good, work on regressions should start soon and we 
should no longer add features.


Martin

Can you take a look at 
https://issues.dlang.org/show_bug.cgi?id=14126 ?
Not sure if this regression is related to recent changes in the 
GC. But if it is, it might be prudent to postpone merging the 
changes into 2.067.


- Puneet


Re: Time for 2.067

2015-02-04 Thread Andrei Alexandrescu via Digitalmars-d
On 2/2/15 2:42 PM, Ulrich Küttler kuett...@gmail.com wrote:

On Friday, 30 January 2015 at 23:17:09 UTC, Andrei Alexandrescu wrote:

Sorry, I thought that was in the bag. Keep current semantics, call it
chunkBy. Add the key to each group when the predicate is unary. Make
sure aggregate() works nice with chunkBy().


I might be missing some information here, so please forgive my naive
question. Your requirements seem contradictory to me.

1. aggregate expects a range of ranges


Probably we need to change that because aggregate should integrate 
seamlessly with chunkBy.



2. you ask chunkBy to return something that is not a range of ranges


Yah.


3. you ask chunkBy to play along nicely with aggregate


Yah.


There are certainly ways to make this work. Adding a special version of
aggregate comes to mind. However, I fail to see the rationale behind this.


Rationale as discussed is that the key value for each group is useful 
information. Returning a range of ranges would waste that information, 
forcing e.g. its recomputation.



To me the beauty of ranges is the composability of simple constructs to
create complex behavior. The current chunkBy does not need to be changed
to add the key to each group when the predicate is unary:

  r.map!(pred, "a")
   .chunkBy!("a[0]")
   .map!(inner => tuple(inner.front[0], inner.map!"a[1]"));

So I'd like to know why the above is inferior to a rework of
chunkBy's implementation. Maybe this is a question for D.learn.


Wouldn't that force recomputation if a more complex expression replaced 
a[0]?



Andrei



Re: Time for 2.067

2015-02-02 Thread via Digitalmars-d
On Friday, 30 January 2015 at 23:17:09 UTC, Andrei Alexandrescu 
wrote:
Sorry, I thought that was in the bag. Keep current semantics, 
call it chunkBy. Add the key to each group when the predicate 
is unary. Make sure aggregate() works nice with chunkBy().


I might be missing some information here, so please forgive my naive 
question. Your requirements seem contradictory to me.


1. aggregate expects a range of ranges
2. you ask chunkBy to return something that is not a range of 
ranges

3. you ask chunkBy to play along nicely with aggregate

There are certainly ways to make this work. Adding a special 
version of aggregate comes to mind. However, I fail to see the 
rationale behind this.


To me the beauty of ranges is the composability of simple 
constructs to create complex behavior. The current chunkBy does 
not need to be changed to add the key to each group when the 
predicate is unary:


 r.map!(pred, "a")
  .chunkBy!("a[0]")
  .map!(inner => tuple(inner.front[0], inner.map!"a[1]"));

So I'd like to know why the above is inferior to a rework of 
chunkBy's implementation. Maybe this is a question for D.learn.


Re: Time for 2.067

2015-02-01 Thread Dicebot via Digitalmars-d

On Saturday, 31 January 2015 at 09:14:14 UTC, Martin Nowak wrote:
Sounds good, work on regressions should start soon and we 
should no longer add features.


You are not going to do a release branch?


Re: Time for 2.067

2015-01-31 Thread Martin Nowak via Digitalmars-d
Can we commit to having stuff done by Feb 15 for a release on 
Mar 1? -- Andrei


Sounds good, work on regressions should start soon and we should 
no longer add features.


Re: Time for 2.067

2015-01-31 Thread zeljkog via Digitalmars-d
On 30.01.15 23:24, AndyC wrote:
 On Friday, 30 January 2015 at 22:06:34 UTC, Walter Bright wrote:
 Time to button this up and release it. Remaining regressions:

 https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced
  

 
 How about this one: https://issues.dlang.org/show_bug.cgi?id=7762
 
 Not sure if it's supposed to compile, but it was reported master wouldn't.
 
 -Andy

It is related to the new rules for the -property flag.

import std.stdio;

int f(int a) {
    return a + 10;
}

void main() {
    auto x = 20;
    writeln(x.f);
}

This compiles with 2.066.1, but with master:
Error: not a property x.f

There are such things in Phobos (std\functional and ?).
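
As an editorial aside (not from the thread): explicit call syntax 
sidesteps the property question entirely, so a snippet like the 
following should be accepted either way. A minimal sketch:

import std.stdio;

int f(int a) {
    return a + 10;
}

void main() {
    auto x = 20;
    // An ordinary UFCS call with parentheses, rather than property-style
    // access, so the stricter property rules do not reject it.
    writeln(x.f());
}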


Re: Time for 2.067

2015-01-30 Thread Walter Bright via Digitalmars-d

On 1/30/2015 2:24 PM, AndyC wrote:

On Friday, 30 January 2015 at 22:06:34 UTC, Walter Bright wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced



How about this one: https://issues.dlang.org/show_bug.cgi?id=7762

Not sure if it's supposed to compile, but it was reported master wouldn't.


It's resolved as worksforme. If someone reports that it is still causing 
problems, it should be reopened.




Re: Time for 2.067

2015-01-30 Thread Andrei Alexandrescu via Digitalmars-d

On 1/30/15 2:35 PM, H. S. Teoh via Digitalmars-d wrote:

On Fri, Jan 30, 2015 at 02:05:52PM -0800, Walter Bright via Digitalmars-d wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced


Have we reached a final decision about what exactly groupBy should
return? (Or whether it should even be called 'groupBy'?) Last I heard,
there wasn't a firm decision. We should not release with an undecided-on
API, because that will make it much harder to change later (we will need
to go through a full deprecation cycle, and possibly waste the name
'groupBy'). If we can't decide before release, we might have to revert
groupBy for the time being.


Sorry, I thought that was in the bag. Keep current semantics, call it 
chunkBy. Add the key to each group when the predicate is binary. Make 
sure aggregate() works nice with chunkBy().


Stuff that can wait: grouping and aggregation for SortedRange.


There's also the [$] issue: are we keeping it or dumping it?


I think we can at least delay it until (a) the partial deduction is 
clearly defined, and (b) we figure out whether a library solution is 
enough. I need more signal from our brass, please.



Andrei



Re: Time for 2.067

2015-01-30 Thread AndyC via Digitalmars-d

On Friday, 30 January 2015 at 22:06:34 UTC, Walter Bright wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced


How about this one: https://issues.dlang.org/show_bug.cgi?id=7762

Not sure if it's supposed to compile, but it was reported master 
wouldn't.


-Andy


Re: Time for 2.067

2015-01-30 Thread Rikki Cattermole via Digitalmars-d

On 31/01/2015 11:05 a.m., Walter Bright wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced


Can I please have another review of my PR?

https://github.com/D-Programming-Language/dmd/pull/3921



Time for 2.067

2015-01-30 Thread Walter Bright via Digitalmars-d

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced


Re: Time for 2.067

2015-01-30 Thread H. S. Teoh via Digitalmars-d
On Fri, Jan 30, 2015 at 02:05:52PM -0800, Walter Bright via Digitalmars-d wrote:
 Time to button this up and release it. Remaining regressions:
 
 https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced

Have we reached a final decision about what exactly groupBy should
return? (Or whether it should even be called 'groupBy'?) Last I heard,
there wasn't a firm decision. We should not release with an undecided-on
API, because that will make it much harder to change later (we will need
to go through a full deprecation cycle, and possibly waste the name
'groupBy'). If we can't decide before release, we might have to revert
groupBy for the time being.

There's also the [$] issue: are we keeping it or dumping it?

There may be one or two other issues that we need to iron out before
release, but they just slipped my mind now.


T

-- 
Shin: (n.) A device for finding furniture in the dark.


Re: Time for 2.067

2015-01-30 Thread Vladimir Panteleev via Digitalmars-d

On Friday, 30 January 2015 at 22:06:34 UTC, Walter Bright wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced


The recent website overhaul broke a few documentation-related 
things.


CHM generation is currently broken. Fix:
https://github.com/D-Programming-Language/dlang.org/pull/873

I don't know how the .html files that are included in dmd.zip are 
generated. Please try and see if it works for you. If not, I can 
look into it if you open-source your process.


There is also the issue that links to e.g. std_algorithm.html 
will be broken, because that module no longer exists. Currently, 
the Makefiles generate std_algorithm_package.html, which will 
likely cause old links to point to stale versions of the 
std.algorithm module's documentation. I think the 
std.algorithm.package documentation should be written to 
std_algorithm.html, or some sort of redirect should be set up.


Re: Time for 2.067

2015-01-30 Thread Andrei Alexandrescu via Digitalmars-d

On 1/30/15 6:39 PM, Martin Nowak wrote:

On Friday, 30 January 2015 at 22:06:34 UTC, Walter Bright wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced



Please let's finish the GC work for 2.067; it will take about 1-1.5 weeks.


Can we commit to having stuff done by Feb 15 for a release on Mar 1? -- 
Andrei


Re: Time for 2.067

2015-01-30 Thread Andrei Alexandrescu via Digitalmars-d

On 1/30/15 3:17 PM, Andrei Alexandrescu wrote:

Add the key to each group when the predicate is binary.


s/binary/unary/


Re: Time for 2.067

2015-01-30 Thread Martin Nowak via Digitalmars-d

On Friday, 30 January 2015 at 22:06:34 UTC, Walter Bright wrote:

Time to button this up and release it. Remaining regressions:

https://issues.dlang.org/buglist.cgi?bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&list_id=192294&query_format=advanced


Please let's finish the GC work for 2.067; it will take about 1-1.5 
weeks.