I started to play with it. My plan is to keep the Python wrapper. The reason is 
simple: I'm comfortable with Python, I don't want to support multiple threads 
in Nim, and I want to send a kill signal to the process if the user interrupts 
a long-running calculation (menu Kernel->Interrupt in Jupyter); that signal 
will be caught on the Nim side.
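
To make the interrupt path concrete, here is a minimal sketch of how the Python wrapper could forward Kernel->Interrupt as a signal to the running process. Everything here is hypothetical: the real child would be the Nim binary with a Nim signal handler; a Python child stands in for it below.

```python
import signal
import subprocess
import sys

# Stand-in for the compiled Nim binary: installs a SIGINT handler (as the Nim
# side would), announces readiness, then simulates a long-running calculation.
child_src = r"""
import signal, sys, time
def on_interrupt(signum, frame):
    # stand-in for the Nim handler that abandons the current block
    sys.exit(42)
signal.signal(signal.SIGINT, on_interrupt)
print("ready", flush=True)
time.sleep(30)  # the "long running calculation"
"""

proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stdout=subprocess.PIPE, text=True)
assert proc.stdout.readline().strip() == "ready"  # handler is installed
proc.send_signal(signal.SIGINT)  # what Kernel->Interrupt would trigger
proc.wait()
print(proc.returncode)  # 42: the handler ran instead of the process dying
```

The point is that the wrapper owns the child's lifetime, so interrupting never requires any threading on the Nim side.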

I'm planning to have one long-running Nim binary. The Python wrapper will pass 
the code to this binary, which will compile it and load it via dynlib. The new 
code will not return; instead it will compile and call the next block, and so 
on. So each run consumes stack space, and I will never return it. Each new 
block will have all the previously defined functions and types prepasted before 
the block's code, and the block's content will be injected into a function 
which receives all the previously allocated heap and stack variables. If these 
variables are `var` parameters, the function will use the previous layer's 
stack memory directly. Furthermore, each block's code will be wrapped in a 
`block:` environment, so you will be able to overwrite previously defined 
variables, even with different types. The new local variables will be collected 
with `locals()`, so I'll be able to pass them to the next round. In each block 
the signal handler used for interrupting the kernel will be overwritten, so 
after an interrupt you can continue working with the latest stack.
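
As a sketch of the code-assembly step, here is one hypothetical way the wrapper could build each block's Nim source as a string: prepaste the previous definitions, wrap the user's code in a proc that receives the earlier variables as `var` parameters, and put the body inside a `block:` so redefinitions with new types are legal. The function and parameter names (`assemble`, `runBlock2`) are invented for illustration.

```python
# Hypothetical assembler for one execution round. prev_defs are the procs and
# types from earlier blocks; prev_vars are (name, type) pairs of the variables
# the new proc receives from the previous layer's stack.
def assemble(prev_defs, prev_vars, user_code, n):
    params = ", ".join(f"{name}: var {typ}" for name, typ in prev_vars)
    body = "\n".join("    " + line for line in user_code.splitlines())
    return f"""{prev_defs}
proc runBlock{n}({params}) {{.exportc.}} =
  block:
{body}
"""

src = assemble(
    prev_defs="proc f(x: int): int = x + 1",
    prev_vars=[("x", "int")],
    user_code='var x = "now a string"\necho f(1)',
    n=2,
)
print(src)
```

In the generated source, the inner `var x = "now a string"` shadows the `int` parameter `x` inside the `block:`, which is exactly the overwrite-with-a-new-type behavior described above.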

Further optimizations: if a block doesn't change the set of local variables, 
there is no need to keep its stack frame. For example, if the block is `x = 5` 
where `x` is defined in a previous frame, we don't need to keep this frame. 
Secondly, if a function is not overloaded (e.g., there is a 
`proc f(x: int): int` but never a later `proc f(x: string): string`), then 
instead of repeating its implementation in the new library, we can simply pass 
the function as a parameter (I just hope this will work).
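
The second optimization can be illustrated with a loose Python analogue (Python's `exec` stands in for compiling a Nim block; `compile_block` and its namespace passing are invented for illustration): a non-overloaded routine is handed to the next round as an already-compiled object instead of having its source recompiled.

```python
# Hypothetical analogue of "pass the function as a parameter": each round
# executes its source in a namespace seeded with routines inherited from
# earlier rounds, so their source never needs to be repeated.
def compile_block(source, inherited):
    namespace = dict(inherited)  # previously compiled routines
    exec(source, namespace)
    return namespace

round1 = compile_block("def f(x): return x + 1", {})
# round 2 does not repeat f's source; it receives the existing object
round2 = compile_block("y = f(41)", {"f": round1["f"]})
print(round2["y"])  # 42
```

Overloads break this trick because a single object can no longer represent the whole overload set; that is why the optimization is restricted to non-overloaded procs.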

Probably the biggest concern about this proposal is that I'll get a stack 
overflow if I keep defining new variables in each block. Yes, that's true. But 
I would still be able to load data and play with it interactively: in my 
Jupyter sessions I rarely reach 100 executed blocks, and I have some 
optimizations planned.

What do we gain? You could create variables on the stack and heap, and these 
won't (at least not always) be copied into the new blocks. You could overwrite 
previously defined variables, even change their types. It would run with the 
`-d:release` flag for speed. And the biggest pro is that I would be able to 
have something working in days.

In this plan I'll need to collect types and procs. I remember seeing some Nim 
code representation in one of the fields of types; I hope I'll be able to 
create a compiler plugin (copying the logic in `locals()`) that gives me all 
the types which are currently defined.
    Peter
