On Sunday, 29 April 2012 at 11:23:17 UTC, Timon Gehr wrote:
On 04/29/2012 11:31 AM, foobar wrote:
On Sunday, 29 April 2012 at 08:58:24 UTC, Timon Gehr wrote:
[...]
Indeed, but I'd go even further by integrating it with ranges, so that ranges would provide an opApply-like method, e.g.:

auto r = BinaryTree!T.preOrder(); // returns a range
r.each((T elem) { /* use elem */ }); // each method a la Ruby
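Such an each method could be written as ordinary library code; a minimal sketch, assuming a free function named each (hypothetical, not a Phobos function):

import std.stdio;

// Hypothetical library helper: apply fun to every element of a range.
void each(R, F)(R range, F fun)
{
    foreach (elem; range)
        fun(elem);
}

void main()
{
    [1, 2, 3].each((int elem) { writeln(elem); }); // called via UFCS
}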
Well, I don't think this is better than built-in foreach (with full break, continue, and even goto support for user-defined opApply!)
I think we reached a matter of taste here.
Certainly, and this applies to the other issues as well.
How often do you use these features anyway in your regular
code?
Not too often, but it is awesome that it actually works. ;)
I prefer a more functional style with higher-order functions (map/reduce/filter/etc.), so for me foreach is about applying something to all elements and doesn't entail use of break/continue/etc.
Some algorithms are better expressed in functional terms, some
algorithms are better expressed in imperative terms. I think a
combination of the two usually is the best choice.
I agree, and indeed I haven't argued to remove break/continue from the language. Imperative-style loops are already expressible with for/while/etc., where break/continue work as advertised. IMO a foreach loop is a higher-level concept, more suitable for functional-style loops.
In any case, break/continue is implemented via opApply's return values, and as such doesn't require anything special from the compiler to implement a library-based foreach.
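For reference, a minimal sketch of how that works: the delegate returns nonzero to request break (or goto), and opApply just propagates it (the tree layout here is made up for illustration):

struct BinaryTree(T)
{
    T value;
    BinaryTree!T* left, right;

    // foreach (v; tree) is lowered into calls to this method. A nonzero
    // return value from the delegate signals break/goto, and opApply
    // must return it unchanged so the loop unwinds correctly.
    int opApply(scope int delegate(ref T) dg)
    {
        if (left)
            if (auto r = left.opApply(dg)) return r;
        if (auto r = dg(value)) return r;
        if (right)
            if (auto r = right.opApply(dg)) return r;
        return 0;
    }
}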
I'll use these constructs in a for loop but not a foreach loop.
break can be used as an optimisation to stop execution of a
loop that performs a 'reduce' if the result cannot change after
a certain point. I use continue mostly for 'filter'-ing out
elements from consideration.
Well, I'll use a filter to filter out elements... :)
Usually there is not a huge difference between imperative-style and functional-style loops.
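To make that concrete, the same computation in both styles (a sketch; xs is just an assumed int[]):

import std.algorithm : filter, reduce;

int sumEvensImperative(int[] xs)
{
    int total = 0;
    foreach (x; xs)
    {
        if (x % 2 != 0) continue; // continue plays the role of filter
        total += x;               // the accumulator is the reduce step
    }
    return total;
}

int sumEvensFunctional(int[] xs)
{
    // filter replaces continue; reduce replaces the accumulator.
    return reduce!((a, b) => a + b)(0, xs.filter!(x => x % 2 == 0));
}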
* enum - enum should be completely redesigned to only implement what it's named after: enumerations.
What is the benefit?
On the one hand, the current enum for manifest constants is a hack due to weaknesses of the toolchain
I think that is actually not true. It might have been the original motivation, but it has gone beyond that. Which weaknesses in particular? I don't think that the toolchain can be improved in any way in this regard.
The weakness, as far as I know, concerns link-time optimization of constants.
But regardless, my ideal implementation of so-called "compile-time" features, including compile-time constants, would be very different anyway.
Well, you never elaborate on these things. BTW, what is your stance on Template Haskell?
I discussed this many times in the past...
I don't really know Haskell. But I do like ML.
and on the other hand it doesn't provide properly encapsulated enums
Those could in theory be added without removing the manifest-constant usage.
such as, for instance, the Java 5.0 ones or the functional kind.
An algebraic data type is not an 'enumeration', so this is a moot point.
I disagree. They are a generalization of the concept. In fact, functional languages such as ML implement C-style enums as an algebraic data type.
The current way enums can be used as manifest constants is a generalization as well. The generalization takes place at the static-semantics level rather than the conceptual level, though.
A language is the interface between a human programmer and a computer, and should IMO provide clear conceptual-level abstractions for the benefit of the human. I realize that using enum for manifest constants makes sense at the implementation level, but I feel the compiler should work for me and not the other way around.
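To make the keyword's two current roles concrete:

// A true enumeration: a named, closed set of related values -
// roughly what ML expresses as a sum of nullary constructors.
enum Color { red, green, blue }

// The same keyword reused for a manifest constant: no storage, no
// address; the value is pasted in at compile time wherever it's used.
enum bufferSize = 4096;
static assert(bufferSize == 4096);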
[...]
I should be able to use a *very* minimalistic system to write completely _regular_ D code and run it at different times.
Examples in concrete syntax? How would you replace, e.g., string mixin functionality?
macro testMacro() {
    std.writeln("Hello world!");
    <| std.writeln("Hello world!"); |>
}
macro is syntactic sugar on top of a regular function; you can call it just like you call a regular function. The first line is executed regularly, and the second one is mixed in [the returned token stream from the macro].
Since the macro is evaluated by the compiler, the first line would generate compile-time output; the second line would be part of the generated code and would thus be executed at run time.
Regarding syntax, the main difference is that it's a token stream and not text, but otherwise it's pretty much the same as current CTFE. The important difference here is the execution model, which is different from CTFE's.
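For comparison, roughly the closest current-D equivalent uses a string mixin; the compile-time "output" has to go through pragma(msg), since CTFE itself can't do I/O:

import std.stdio;

// The code to be mixed in, as a string instead of a token stream.
enum helloCode = q{ writeln("Hello world!"); };

void main()
{
    pragma(msg, "Hello world!"); // printed by the compiler
    mixin(helloCode);            // becomes part of main, runs at run time
}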
This is a simple matter of separation of concerns: what we want to execute (what code) is separate from the concern of when we want to execute it.
It is not. For example, code that is only executed during CTFE never has to behave gracefully if the input is ill-formed.
I disagree - you should make sure the input is valid, or all sorts of bad things could potentially happen, such as the compiler getting stuck in an infinite loop.
It could fail in a number of other ways. I don't think that
this example can be used to invalidate the statement.
If you only use a batch-mode compiler, you can simply kill the process, which BTW applies just the same to your user program.
Maybe the user program should not be killed. See your IDE
example.
However, with an integrated compiler in the IDE, that could cause me to lose part of my work if the IDE crashes.
Why would the IDE crash?
My example illustrates that the same considerations should be given to "compile-time" code as to the client application, and it all depends on what you're trying to achieve.