On Wednesday, 22 October 2014 at 08:27:53 UTC, bearophile wrote:
> Ola Fosheim Grøstad:
>
>> 2. Easy to write ugly code: It suffers from the same issues as macros.
>
> Do you mean C macros? I think this is not true.

Not C macros, because they are free form and happen before parsing.

On the other hand, C macros can be expanded first, and then you can do source-to-source translation. With D string mixins you might have to run CTFE first? That's a killer for source-to-source tools.
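
To make the contrast concrete, here is a minimal sketch using Phobos's std.algorithm.sort, which accepts both forms: the string predicate gets mixed into D source through CTFE inside the library, while the lambda is just an ordinary callable.

import std.algorithm : sort;
import std.stdio : writeln;

void main()
{
    int[] xs = [3, 1, 2];

    // String predicate: the library mixes "a < b" into D code via CTFE,
    // so any tool processing this call needs a full D frontend.
    xs.sort!"a < b";

    // Real lambda: just a callable, no source-level pasting involved.
    xs.sort!((a, b) => a > b);

    writeln(xs); // [3, 2, 1]
}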

I also think AST macros are a bad idea. They work well for Lisp, where you have a clean, minimal language.

>> 3. Language integration: It is desirable to have an application level language that can integrate well with a low level language when calling out to system level libraries.
>
> I don't understand.

Let's say you create a tight new language "APP-C" that is easy to write application code in, but you use libraries written in a system-level language "D" when the more restricted new language falls short. Then you want to call into the D libraries using lambdas written in "APP-C". If those libraries (built on top of Phobos) take string mixins, then you have to know both "APP-C" and "D" in order to write an application.

What you want is this:

1. Compile APP-C => common IR
2. Resolve dependencies
3. Compile D => common IR
4. Resolve lambda optimization/inlining on the common IR

With string mixins you get: eh…?
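
A sketch of the difference, with hypothetical library functions (applyMixin/applyLambda are illustrative names, not Phobos APIs): the string-mixin version forces the caller to hand the library a fragment of D source, while the lambda version only needs a callable, which after step 4 is just a function in the common IR that the optimizer can inline.

// String-mixin style: the caller must supply a fragment of D source.
// An "APP-C" frontend could only use this by emitting D code.
int applyMixin(string expr)(int a)
{
    return mixin(expr);
}

// Lambda style: the callback is just a callable. In the common IR it is
// an ordinary function that can be inlined, whatever frontend produced it.
int applyLambda(alias fn)(int a)
{
    return fn(a);
}

unittest
{
    assert(applyMixin!"a * 2"(21) == 42);
    assert(applyLambda!(a => a * 2)(21) == 42);
}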

>> 4. Uniform conventions: a lambda is more generic.
>
> What's bad in using something less generic?

It is bad when you get disadvantages that lambdas don't have (in an optimizing compiler), but none of the extra advantages.
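
One concrete generality gap, as a sketch: a real lambda can close over locals, while a Phobos string lambda cannot, because the string is mixed in inside std.functional rather than in the caller's scope.

import std.algorithm : filter;
import std.array : array;

void main()
{
    int threshold = 2;
    int[] xs = [1, 2, 3, 4];

    // A real lambda can capture locals such as `threshold`.
    auto big = xs.filter!(x => x > threshold).array;
    assert(big == [3, 4]);

    // filter!"a > threshold" would not compile: the string is mixed in
    // inside Phobos, where `threshold` is not visible.
}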
