Yesterday I ran across Nock because I was thinking about doing a quick and 
dirty VM based on an early '80s idea I had.  I figured I could dispense 
with all the complicated optimizations thanks to 23 doublings under 
Moore's law.  I posted about this, too informally, on /. in 2000 under 
"Network Functional Programming 
<https://ask.slashdot.org/comments.pl?sid=6343&cid=930087>".  Before I 
started coding, I wanted to see if anyone had done anything like it 
since; after all, it's been 35 years!  So googling some keywords led me 
to discover first Nock and then the stack built on it, which is intended 
to solve the problem I had been hired to solve for AT&T and Knight-Ridder 
way back then -- but that's another story 
<https://tech.slashdot.org/comments.pl?sid=2702791&cid=39217853>.

I was quite excited to see the possibility that someone had finally solved 
the problem, but when I discovered Nock didn't do lazy evaluation -- 
instead leaving it as an option to higher levels -- I realized it wasn't 
going to do what I had hoped.

Let me explain the basic idea in more detail because it isn't "just" lazy 
evaluation, and I'm not sure -- even after doing the searches -- that 
anyone has seriously researched it.  And, just to preempt -- I'm not 
talking about "reactive" or "functional reactive" programming as normally 
conceived either.

Normally what people think of when they hear "lazy" or "demand driven" is 
that there is some kind of output required and everything happens -- only 
-- in service of that observation, and once the output is generated, it's 
all over until the next demand drives the evaluation again.  The network 
architecture I was targeting used the dependency graphs required by David 
Reed's NAMOS system* (eventually becoming TeaTime for Alan Kay's Croquet 
system) for networked atomic actions, but the graphs, along with 
pervasive memoized values, were built by lazy evaluation and remained in 
place, connecting the network's inputs to its continuously observed 
outputs.  If one of the inputs changes, the change propagates through the 
dependency graphs as a data-driven or eager evaluation -- terminating any 
propagation where an evaluation produces no change in its memoized 
output.  If an observation goes away, the reference counts are 
decremented through the graphs along with the dependency links, and at 0 
references the memoized value is voided -- although not deleted, as NAMOS 
was a write-once or single-assignment system (depending on one's 
paradigm), and the result is journaled as it rolls off the system, 
providing an audit trail and the potential for pseudo-time travel.  If 
the eager evaluation propagates all the way to an output and discovers it 
is inactive, it assumes the removal of the observer failed to decrement 
the reference counts, dependency links, etc., and the decrementing 
happens at that point.
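
To make that concrete, here is a minimal sketch of the two-phase 
discipline in Python.  It is my own illustration -- not NAMOS, Nock, or 
the U-Interpreter -- and every name in it (Node, Input, demand, push, 
release) is invented for the example.  The demand-driven phase builds the 
graph and memoizes; the data-driven phase propagates an input change and 
dies out wherever a memo is unchanged; releasing an observation 
decrements the reference counts, voiding (but journaling) memos at zero.

    class Node:
        """A memoizing node in a lazily built dependency graph."""
        def __init__(self, fn=None, deps=()):
            self.fn, self.deps = fn, deps
            self.dependents = set()  # nodes downstream of us
            self.refs = 0            # active observations flowing through us
            self.memo = None         # memoized value; voided at 0 refs
            self.valid = False
            self.journal = []        # write-once history rolling off the system

        def demand(self):
            """Demand-driven phase: first demand wires the graph, memoizes."""
            self.refs += 1
            if self.refs == 1:
                for d in self.deps:
                    d.dependents.add(self)
                    d.demand()
                self._memoize()
            return self.memo

        def _memoize(self):
            """Recompute; report whether the memoized value really changed."""
            value = self.fn(*(d.memo for d in self.deps)) if self.fn \
                    else self.memo
            if self.valid and value == self.memo:
                return False                    # no change: propagation dies
            if self.valid:
                self.journal.append(self.memo)  # old value rolls off
            self.memo, self.valid = value, True
            return True

        def push(self):
            """Data-driven phase: eagerly propagate a change downstream."""
            for d in list(self.dependents):
                if d.refs == 0:                 # inactive node: a removed
                    self.dependents.discard(d)  # observer missed this link
                elif d._memoize():              # stop where memo is unchanged
                    d.push()

        def release(self):
            """An observation went away: decrement refs back up the graph."""
            self.refs -= 1
            if self.refs == 0:
                self.journal.append(self.memo)  # voided, not deleted
                self.memo, self.valid = None, False
                for d in self.deps:
                    d.dependents.discard(self)
                    d.release()

    class Input(Node):
        """A graph input: setting it drives the eager phase."""
        def __init__(self, value):
            super().__init__()
            self.memo, self.valid = value, True

        def set(self, value):
            if value == self.memo:
                return                          # unchanged: nothing happens
            self.journal.append(self.memo)
            self.memo = value
            self.push()

    a, b = Input(2), Input(3)
    total = Node(lambda x, y: x + y, (a, b))
    print(total.demand())  # 5 -- lazy: builds the graph and memoizes
    a.set(2)               # no change, so no propagation at all
    a.set(4)               # eager push; total's memo is now 7
    total.release()        # refs hit 0; memos voided and journaled

The Python is incidental; the point is only that the graph and its memos 
persist between demands, which is what distinguishes this from the usual 
one-shot, evaluate-and-forget notion of laziness.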

Now, this may seem like way too much functionality to put into such a 
primitive VM as Nock, but one must bear in mind that SK reduction machines 
were widely considered to be a viable route for massively parallel dataflow 
evaluation back in the halcyon days post-Backus's Turing Award lecture 
<https://www.cs.ucf.edu/~dcm/Teaching/COT4810-Fall%202012/Literature/Backus.pdf>.
  
Also bear in mind that Nock contains features that permit it to 
_formally_ fulfill the rigor of such horrors as Church numerals without 
sacrificing much in the way of performance.  You don't need to put all 
that machinery everywhere -- but when you need it, which is almost all 
the time, you _really_ need it.
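
For the unfamiliar, the "horror" in question looks like this -- a sketch 
in Python (nothing here is Nock) of Church numerals, where a number n is 
just the function that applies f exactly n times:

    # Church numerals: formally rigorous, computationally horrifying
    # when executed literally.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):                    # count the applications of f
        return n(lambda k: k + 1)(0)

    three = succ(succ(succ(zero)))
    print(to_int(add(three)(three)))  # 6, via six applications of f

Every arithmetic step is a function application, so costs grow with the 
numbers themselves; a VM that can recognize such structures is free to 
substitute native arithmetic without giving up the formal definition.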

For example, one symptom that one needs it is finding oneself having to 
construct "make"-like or "build" systems to maintain file-system version 
integrity.  What you _really_ need is a generalized atomic-action system 
such as that provided by NAMOS, where a "release" of a "build" is just 
the commit of an atomic action that propagates changes outside of the 
temporary** "fork" in reality.

*"Naming and Synchronization In a Decentralized Computer System 
<http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-205.pdf>" -- an 
early form of named data networking aka content-centric networking aka 
information centric networking.  It turns out that Arvind and Gostellow -- 
down one story from Reed at the MIT LCS -- had used virtually isomorophic 
data structures to Reed's NAMOS in their contemporaneous dataflow machine 
(the "U-Interpreter <http://ieeexplore.ieee.org/document/1653940/>") to 
perform the data-driven evaluations but neither they nor Reed were aware of 
this until I pointed it out to them.  Their "tagged architecture" had 
"tags" that were pretty much the same as Reed's "names",  to match up data 
flow tokens.

**This "ease of forking reality" is, by the way, automatically accomplished 
by non-determinism and is a reason people working on pure functional 
operating systems find themselves going relational since functions are 
degenerate relations that may be considered inconsistent "forks" as 
required by the CAP theorem, among other realities.
