Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-05-02 Thread 12345swordy via Digitalmars-d-announce

On Tuesday, 27 April 2021 at 08:12:57 UTC, FeepingCreature wrote:

On Monday, 26 April 2021 at 14:01:37 UTC, sighoya wrote:
On Monday, 26 April 2021 at 13:17:49 UTC, FeepingCreature 
wrote:

On Sunday, 25 April 2021 at 21:27:55 UTC, sighoya wrote:
On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature 
wrote:

Native CTFE and macros are a beautiful thing though.


What did you mean by native?


When cx needs to execute a function at compile time, it links 
it into a shared object and loads it back with dlsym/dlopen. 
So while you get slower startup (until the cache is 
filled), any further calls to a CTFE function run at native 
performance.


Ah okay, but can't Dlang runtime functions be called at 
compile time with native performance anyway, too?


So generally, cx first parses the program, then filters out 
what is a macro, then compiles all macro/CTFE functions into 
a shared lib and executes these macros from that lib?




Sorta: when we hit a macro declaration, "the module at this 
point" (plus transitive imports) is compiled as a complete 
unit. This is necessary because parser macros can change the 
interpretation of later code. Then the generated macro object 
is added to the module state going forward, and that way it can 
be imported by other modules.


Isn't it better to use the cx compiler as a service at compile 
time and compile code in-memory in the executable segment 
(some kind of JITing, I think) and then execute it?

I think the Cling REPL does it like that.


That would also work; I just went the path of least resistance. 
I already had an LLVM backend, so I just reused it. Adding a 
JIT backend would be fairly easy, except for the part of 
writing and debugging a JIT. :P




And how does cx pass type objects?


By reference. :) Since the compiler is in the search path, you 
can just import cx.base and get access to the same Type class 
that the compiler uses internally. In that sense, macros have 
complete parity with the compiler itself. There's no attempt to 
provide any sort of special interface for the macro that 
wouldn't also be used by compiler-internal functions. (There's 
some class gymnastics to prevent module loops, i.e. cx.base 
defines an interface for the compiler as a whole, which is 
implemented in main but is indeed also used by the compiler's 
internal modules themselves.)


The downside of all this is that you need to parse and process 
the entire compiler to handle a macro import. But DMD gives me 
hope that this too can be made fast. (Right now, compiling 
anything that pulls in a macro takes about a second, even with 
a warm object cache.)


As for the JIT backend, it is better to use an existing JIT 
framework than to build one of your own.


-Alex


Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-27 Thread sighoya via Digitalmars-d-announce

On Tuesday, 27 April 2021 at 08:12:57 UTC, FeepingCreature wrote:

[...]


Nice, thanks.


Generally, I think having the language/compiler provide a 
metaprogramming framework is, like any decision, a matter of tradeoffs.


Technically, more power is better than providing a simple, 
limited language whose successive upgrades fragment it more 
and more over time.


However, too much power exceeds both human and compiler 
semantic reasoning. For instance, allowing macros to non-locally 
mutate AST nodes across the whole project, or even in downstream 
code, is a powerful yet horrible tool for productive software 
development.




Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-27 Thread FeepingCreature via Digitalmars-d-announce

On Monday, 26 April 2021 at 14:01:37 UTC, sighoya wrote:

On Monday, 26 April 2021 at 13:17:49 UTC, FeepingCreature wrote:

On Sunday, 25 April 2021 at 21:27:55 UTC, sighoya wrote:
On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature 
wrote:

Native CTFE and macros are a beautiful thing though.


What did you mean by native?


When cx needs to execute a function at compile time, it links 
it into a shared object and loads it back with dlsym/dlopen. 
So while you get slower startup (until the cache is 
filled), any further calls to a CTFE function run at native 
performance.


Ah okay, but can't Dlang runtime functions be called at 
compile time with native performance anyway, too?


So generally, cx first parses the program, then filters out 
what is a macro, then compiles all macro/CTFE functions into 
a shared lib and executes these macros from that lib?




Sorta: when we hit a macro declaration, "the module at this 
point" (plus transitive imports) is compiled as a complete unit. 
This is necessary because parser macros can change the 
interpretation of later code. Then the generated macro object is 
added to the module state going forward, and that way it can be 
imported by other modules.


Isn't it better to use the cx compiler as a service at compile 
time and compile code in-memory in the executable segment (some 
kind of JITing, I think) and then execute it?

I think the Cling REPL does it like that.


That would also work; I just went the path of least resistance. I 
already had an LLVM backend, so I just reused it. Adding a JIT 
backend would be fairly easy, except for the part of writing and 
debugging a JIT. :P




And how does cx pass type objects?


By reference. :) Since the compiler is in the search path, you 
can just import cx.base and get access to the same Type class 
that the compiler uses internally. In that sense, macros have 
complete parity with the compiler itself. There's no attempt to 
provide any sort of special interface for the macro that wouldn't 
also be used by compiler-internal functions. (There's some class 
gymnastics to prevent module loops, i.e. cx.base defines an 
interface for the compiler as a whole, which is implemented in 
main but is indeed also used by the compiler's internal 
modules themselves.)


The downside of all this is that you need to parse and process 
the entire compiler to handle a macro import. But DMD gives me 
hope that this too can be made fast. (Right now, compiling 
anything that pulls in a macro takes about a second, even with 
a warm object cache.)


Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-26 Thread sighoya via Digitalmars-d-announce

On Thursday, 15 April 2021 at 04:01:23 UTC, Ali Çehreli wrote:

We will talk about compile time function execution (CTFE).

Although this is announced on Meetup[1] as well, you can 
connect directly at


  
https://us04web.zoom.us/j/2248614462?pwd=VTl4OXNjVHNhUTJibms2NlVFS3lWZz09


April 15, 2021
Thursday
19:00 Pacific Time

Ali

[1] 
https://www.meetup.com/D-Lang-Silicon-Valley/events/kmqcvqyccgbtb/


What was the outcome of this meeting?


Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-26 Thread sighoya via Digitalmars-d-announce

On Monday, 26 April 2021 at 13:17:49 UTC, FeepingCreature wrote:

On Sunday, 25 April 2021 at 21:27:55 UTC, sighoya wrote:
On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature 
wrote:

Native CTFE and macros are a beautiful thing though.


What did you mean by native?


When cx needs to execute a function at compile time, it links it 
into a shared object and loads it back with dlsym/dlopen. So 
while you get slower startup (until the cache is 
filled), any further calls to a CTFE function run at native 
performance.


Ah okay, but can't Dlang runtime functions be called at 
compile time with native performance anyway, too?


So generally, cx first parses the program, then filters out what 
is a macro, then compiles all macro/CTFE functions into a shared 
lib and executes these macros from that lib?


Isn't it better to use the cx compiler as a service at compile 
time and compile code in-memory in the executable segment (some 
kind of JITing, I think) and then execute it?

I think the Cling REPL does it like that.

And how does cx pass type objects?





Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-26 Thread FeepingCreature via Digitalmars-d-announce

On Sunday, 25 April 2021 at 21:27:55 UTC, sighoya wrote:

On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature wrote:

Native CTFE and macros are a beautiful thing though.


What did you mean by native?


When cx needs to execute a function at compile time, it links it 
into a shared object and loads it back with dlsym/dlopen. So 
while you get slower startup (until the cache is filled), 
any further calls to a CTFE function run at native performance. 
Plus, it means the macro is ABI-compatible with the running 
compiler, so the compiler can pass objects back and forth without 
a glue layer.


Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-25 Thread sighoya via Digitalmars-d-announce

On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature wrote:

Native CTFE and macros are a beautiful thing though.


What did you mean by native?



Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-19 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Monday, 19 April 2021 at 09:06:17 UTC, FeepingCreature wrote:
Right, I agree with all of this. I just think the way to get to 
it is to first allow everything, and then in a second step pare 
it down to something that does what people need while also 
being monitorable. This is as "simple" as merging every IO call 
the program does into its state count.


I am not saying it is wrong for a new language to try out this 
philosophy, assuming you are willing to go through a series of 
major revisions of the language. But I think for a language where 
"breaking changes" are made a big deal of, you want to stay 
conservative.


In that regard, I agree it would be social: if you clearly 
state upfront that your new language will come in major versions 
with major breakage at regular intervals, then you should have 
more freedom to explore and end up with something much better. 
Regular breakage is not necessarily a deal breaker if you also 
support stable versions with a clear time window, like "version 2 
is supported until 2030".


So, yeah, it is possible. But you have to teach the programmers 
your philosophy and make them understand that some versions of 
the language have a long support window and other versions are 
more short-lived. Perhaps make that clear in the version naming 
scheme.


D's problem is that there is only one stable version, and that is 
the most recent one... that makes changes more difficult. Also, 
there are not enough users to get significant experience with 
"experimental features". What works for C++ when providing 
experimental features might not work for smaller languages.



- or if not, if D can make it work. At any rate, with a lot of 
features like implicit conversions, I think people would find 
that they're harmless and highly useful if they'd just try them 
for a while.


A lot of features are harmless on a small scale. Python is a very 
flexible language and you can do a lot of stuff in it that you 
should not do. Despite this, it works very well on a small scale. 
However, for Python to work on a larger scale, it takes a lot of 
discipline (social constraints in place of technical constraints) 
and carefully chosen libraries etc. The use context matters when 
discussing what is acceptable and what isn't. As such, people 
might have different views on language features, and there might 
be no right/wrong solution.


Implicit conversions are good for custom ADTs, but the interaction 
with overloading can be problematic, so it takes a lot of 
foresight to get right. A geometry library can benefit greatly 
from implicit conversions, but you can run into problems when 
mixing libraries that overuse implicit conversions... So, it 
isn't only a question of whether implicit conversions are a bad 
thing or not, but of how the language limits "chains of conversion" 
and overloads, and makes it easy for the programmer to predict 
when looking at a piece of code.







Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-19 Thread FeepingCreature via Digitalmars-d-announce
On Monday, 19 April 2021 at 08:46:05 UTC, Ola Fosheim Grøstad 
wrote:
I think the downsides are conceptual and technical, not social. 
If you can implement a version counter then you get all kinds 
of problems, like first compilation succeeding, then the second 
compilation failing with no code changes. Also, you can no 
longer cache intermediate representations between compilations 
without rather significant additional machinery. It is better 
to do this in a more functional way, so you can generate a 
file, but it isn't written to disk, it is an abstract entity 
during compilation and is turned into something concrete after 
compilation.




So, anything that can be deduced from the input is fair game, 
but allowing arbitrary I/O is a completely different beast; 
compilation has to be idempotent. It should not be possible to 
write a program where the first compilation succeeds and the 
second compilation fails with no code changes between the 
compilation executions. Such failures should be limited to the 
build system so that you can quickly correct the problem.


IMHO, a good productive language makes debugging easier, faster 
and less frequently needed. Anything that goes against that is 
a move in the wrong direction.


Right, I agree with all of this. I just think the way to get to 
it is to first allow everything, and then in a second step pare 
it down to something that does what people need while also being 
monitorable. This is as "simple" as merging every IO call the 
program does into its state count.


(For my lang, I just go and hash every part of the program during 
compilation. So caching works on the basis of what actually goes 
into the binary.)


But my view is, you have those issues anyways! You need to run a 
caching dub server, and you need to run it on a Docker image, and 
you need to pretty much mirror every upstream package that your 
Docker install pulls in, *anyways.* You can't escape the vagaries 
of the build environment changing under you by limiting the 
language. And everything you keep out of the language - IO, 
downloads, arbitrary commands whose output is relevant - the rest 
of your build environment usually does regardless. (And it will 
mess you up at the first, slightest provocation. Bloody Ubuntu 
and its breaking changes in patch releases...) So my view is the 
other way around - make the language the single point of contact 
for *all of that stuff*, make CTFE powerful enough to hash static 
libraries and process header files live during compilation, so 
you can pull as much of the complexity as possible into the 
controlled environment of the compiler. And then when you know 
what you need there, take a step back and frameworkize it, so you 
can do change detection inside your single build system. You 
can't get the entire OS on board, but you can maybe get all or 
most of your language library developers on board.


Anyways, even agreeing that you're right, "look, we tried it and 
it didn't work, in fact it was a disaster, see discussions here 
and here, or download version 2019-03-06 nightly to try how it 
went" is just inherently a stronger argument.


to find out what works and what doesn't, and you can't gather 
experience with what people actually want to do and how it 
works in practice if you lock things down from the start.


That's ok for a prototype, but not for a production language.



I think most long-term successful languages straddle a line here. 
For instance, looking at Rust, the use of nightly and stable 
channels allows the language to experiment while also keeping a 
guarantee that once it commits to a feature enough to merge it 
into stable, it won't change "overnight". D is trying to do a 
similar thing with DIPs and preview flags and deprecations, but 
the jury's still out on how well it's working - or if not, if D 
can make it work. At any rate, with a lot of features like 
implicit conversions, I think people would find that they're 
harmless and highly useful if they'd just try them for a while.




Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-19 Thread Ola Fosheim Grøstad via Digitalmars-d-announce

On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature wrote:
This is a social issue more than a technical one. The framework 
can help, by limiting access to disk and URLs and allowing 
tracing and hijacking, but ultimately you have to rely on code 
to not do crazy things.


I think the downsides are conceptual and technical, not social. 
If you can implement a version counter then you get all kinds of 
problems, like first compilation succeeding, then the second 
compilation failing with no code changes. Also, you can no longer 
cache intermediate representations between compilations without 
rather significant additional machinery. It is better to do this 
in a more functional way, so you can generate a file, but it 
isn't written to disk, it is an abstract entity during 
compilation and is turned into something concrete after 
compilation.


to find out what works and what doesn't, and you can't gather 
experience with what people actually want to do and how it 
works in practice if you lock things down from the start.


That's ok for a prototype, but not for a production language.

Most of my annoyances with D are issues where D isn't willing 
to take an additional step even though it would be technically 
very feasible. No implicit conversion for user-defined types, 
no arbitrary IO calls in CTFE, no returning AST trees from CTFE 
functions that are automatically inserted to create macros, and 
of course the cumbersome special-cased metaprogramming for type 
inspection instead of just letting us pass a type object to a 
CTFE function and call methods on it.


So, anything that can be deduced from the input is fair game, but 
allowing arbitrary I/O is a completely different beast; 
compilation has to be idempotent. It should not be possible to 
write a program where the first compilation succeeds and the 
second compilation fails with no code changes between the 
compilation executions. Such failures should be limited to the 
build system so that you can quickly correct the problem.


IMHO, a good productive language makes debugging easier, faster 
and less frequently needed. Anything that goes against that is a 
move in the wrong direction.




Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-19 Thread FeepingCreature via Digitalmars-d-announce
On Sunday, 18 April 2021 at 04:41:44 UTC, Ola Fosheim Grostad 
wrote:

On Sunday, 18 April 2021 at 00:38:13 UTC, Ali Çehreli wrote:
I heard about safety issues around allowing full I/O during 
compilation but then the following points kind of convinced me:


- If I am compiling a program, my goal is to execute that 
program anyway. What difference does it make whether the 
program's compilation is harmful vs. the program itself?


I don't buy this; you can execute the code in a sandbox.

Compilation should be idempotent, writing to disk/databases 
during compilation breaks this guarantee.


I would not use a language that does not ensure this.



This is a social issue more than a technical one. The framework 
can help, by limiting access to disk and URLs and allowing 
tracing and hijacking, but ultimately you have to rely on code to 
not do crazy things. And right now in D we just push this 
complexity out of the language and into the build system, because 
if you don't let people do crazy things, they just do crazy and 
also horribly hacky things instead. (*Cough* gen_ut_main *cough*.)


In my opinion, the design approach should be to "default open, 
then restrict", rather than "default closed, then relax." This 
requires a certain willingness to break people's workflow, but if 
you default closed, you'll never get over your preconceptions, 
because you have to be able to do crazy things to find out what 
works and what doesn't, and you can't gather experience with what 
people actually want to do and how it works in practice if you 
lock things down from the start. In other words, I see no reason 
why "make one to throw it away" shouldn't apply to languages.


Maybe we will decide one day to limit recursion for templates and 
CTFE, for instance, but if we do, it will be because of the 
experiences we gathered with unrestricted templates and the 
impact on compile times - if D had decided from day one to keep 
templates limited, we'd never have the wealth of experience and 
frameworks and selling points that D has with metaprogramming. 
You have to let people show you what they want to do and what 
comes of it, and you can't do that if you don't extend them 
enough rope to tangle themselves and their projects up first. 
There has to be a willingness to try and fail and backtrack.


Most of my annoyances with D are issues where D isn't willing to 
take an additional step even though it would be technically very 
feasible. No implicit conversion for user-defined types, no 
arbitrary IO calls in CTFE, no returning AST trees from CTFE 
functions that are automatically inserted to create macros, and 
of course the cumbersome special-cased metaprogramming for type 
inspection instead of just letting us pass a type object to a 
CTFE function and call methods on it. If some new language 
will overtake D (cx¹, fingers crossed!), it will be because of 
this design conservatism as much as any technological restriction 
in the frontend.


¹ I have a language, by the way! :) 
https://github.com/FeepingCreature/cx , but it's pre-pre-alpha. 
Native CTFE and macros are a beautiful thing though.


Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-17 Thread Ola Fosheim Grostad via Digitalmars-d-announce

On Sunday, 18 April 2021 at 00:38:13 UTC, Ali Çehreli wrote:
I heard about safety issues around allowing full I/O during 
compilation but then the following points kind of convinced me:


- If I am compiling a program, my goal is to execute that 
program anyway. What difference does it make whether the 
program's compilation is harmful vs. the program itself?


I don't buy this; you can execute the code in a sandbox.

Compilation should be idempotent, writing to disk/databases 
during compilation breaks this guarantee.


I would not use a language that does not ensure this.

- If we don't allow file I/O during compilation, then the build 
system has to take that responsibility and can do the potential 
harm then anyway.


The build system is much smaller, so easier to inspect.




Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-17 Thread Ali Çehreli via Digitalmars-d-announce

On 4/17/21 10:14 AM, Gavin Ray wrote:

>> [1] https://www.meetup.com/D-Lang-Silicon-Valley/events/kmqcvqyccgbtb/
>
> Ali are these recorded by chance?

We recorded only a couple of these meetups, years ago, when we had 
presentation-style meetups.


Although we could record these meetings, this and the last one were so 
unstructured that I don't think it's worth recording. On the other hand, 
I understand how they would still be valuable. Still, being on the 
record would take away the candidness of these "local" meetings by a 
small number of individuals.


> Now that I know, I will try to make it next month

I apologize for not giving advance notice but I really couldn't. :) This 
is how it happened: I attended the Silicon Valley C++ meetup where Sean 
Baxter presented Circle, his very powerful language based on C++:


  https://github.com/seanbaxter/circle

It turns out, Circle is full of compile-time magic that we already enjoy 
with D: proper compile time function execution, the equivalents of 
'static if' that can inject declarations, 'static foreach' that can 
inject case clauses, etc. etc.


There was so much overlap between D features and Circle that I came up 
with my meetup topic during that C++ meetup and announced it at the end 
of it. As always, I hoped that CTFE would just be the main theme and we 
would talk about other compile-time features of D.


To my surprise, such short notice brought just one participant, and 
that was Sean Baxter himself! How fortunate I was! :) He said he had 
never heard of D before and was nodding his head at most D features 
that I showed. There was strong agreement with D's 'static if' (and 
Circle's equivalent feature).


One big difference between D and Circle, and one that he heard many 
concerns about is the fact that Circle allows file I/O at compile time. 
Other than the import expression, D does not.


I heard about safety issues around allowing full I/O during compilation 
but then the following points kind of convinced me:


- If I am compiling a program, my goal is to execute that program 
anyway. What difference does it make whether the program's compilation 
is harmful vs. the program itself.


- If we don't allow file I/O during compilation, then the build system 
has to take that responsibility and can do the potential harm then anyway.


Related, the following code fails with an unfriendly error message with dmd:

int foo() {
  import std.stdio;
  auto file = File("some_file");
  return 42;
}

pragma(msg, foo());

/usr/include/dmd/phobos/std/stdio.d(4352): Error: `fopen64` cannot be 
interpreted at compile time, because it has no available source code
/usr/include/dmd/phobos/std/stdio.d(392): Error: `malloc` cannot be 
interpreted at compile time, because it has no available source code


Sean thought this should be easy to fix. Well, we have to allow it to 
begin with, I guess.


Anyway, that's my short summary. :)

Ali



Re: Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-17 Thread Gavin Ray via Digitalmars-d-announce

On Thursday, 15 April 2021 at 04:01:23 UTC, Ali Çehreli wrote:

We will talk about compile time function execution (CTFE).

Although this is announced on Meetup[1] as well, you can 
connect directly at


  
https://us04web.zoom.us/j/2248614462?pwd=VTl4OXNjVHNhUTJibms2NlVFS3lWZz09


April 15, 2021
Thursday
19:00 Pacific Time

Ali

[1] 
https://www.meetup.com/D-Lang-Silicon-Valley/events/kmqcvqyccgbtb/


Ali are these recorded by chance?

Now that I know, I will try to make it next month, but I am curious 
whether the content is available for people who couldn't attend.


Silicon Valley D Meetup - April 15, 2021 - "Compile Time Function Execution (CTFE)"

2021-04-14 Thread Ali Çehreli via Digitalmars-d-announce

We will talk about compile time function execution (CTFE).

Although this is announced on Meetup[1] as well, you can connect directly at

  https://us04web.zoom.us/j/2248614462?pwd=VTl4OXNjVHNhUTJibms2NlVFS3lWZz09

April 15, 2021
Thursday
19:00 Pacific Time

Ali

[1] https://www.meetup.com/D-Lang-Silicon-Valley/events/kmqcvqyccgbtb/