In chapter four, Curtis Yarvin (author of Urbit) describes Hoon. He assigns
names to glyphs, e.g. `|` is bar and `=` is tis, so the digraph `|=` is
called `bartis` (or barts). The first character is a semantic category (bar
is for 'gates').
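The naming scheme is mechanical enough to sketch in a few lines. This is a
minimal illustration (not Urbit code), using a handful of glyph names from
the Hoon docs; the function name is my own:

```python
# Sketch of Hoon's rune-naming scheme: each ASCII glyph has a
# one-syllable name, and a two-glyph rune is pronounced by
# concatenating the names of its glyphs.
GLYPH_NAMES = {
    "|": "bar", "=": "tis", "%": "cen", ":": "col", ".": "dot",
    "?": "wut", "^": "ket", "$": "buc", "~": "sig",
}

def rune_name(rune: str) -> str:
    """Pronounce a rune by concatenating its glyph names."""
    return "".join(GLYPH_NAMES[g] for g in rune)

print(rune_name("|="))  # -> bartis
print(rune_name("?:"))  # -> wutcol
```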
The idea of a 'speakable' PL does appeal to me. I've contemplated doing
something similar a few times, though I've never gotten far past fanciful
contemplation. For the environment I'm describing in the other thread, I
imagine voice control might become part of it. I also imagine this would be
part of the personal language between a user and the environment, via a mix
of machine learning and human learning - meeting half-way.
But I think a speakable PL also needs to operate at a level a human can
grok - i.e. higher-level artifact manipulations, raising menus, calling
tools to hand, refining gestures. There's no way anyone's going to sit
there and rattle off assembly, and even when we do use words, they'll need
to be somewhat imprecise, allowing partial search for contextually relevant
semantics.
I find it interesting that Yarvin's view has remained pretty stable over
the last four years:
http://moronlab.blogspot.com/2010/01/urbit-functional-programming-from.html
Regarding 'jets', I'd be more interested if there were a way to easily
guide the machine to build new ones. As is, I'd hate to depend on them.
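For anyone who hasn't read the chapters: the jet idea is roughly that the
interpreter recognizes a known formula and substitutes a fast native
implementation, with the slow interpretation kept as the semantic ground
truth. A minimal sketch of that shape (all names are mine, not Urbit's):

```python
# Sketch of the 'jet' pattern: a table maps recognized formulas to
# native implementations; evaluation takes the fast path when a jet
# exists and otherwise falls back to slow interpretation. The jet is
# an optimization only - both paths must agree on results.

def slow_decrement(n):
    # Naive interpretation: count up until the successor reaches n.
    i = 0
    while i + 1 != n:
        i += 1
    return i

JETS = {}

def register_jet(formula_id, native):
    JETS[formula_id] = native

def evaluate(formula_id, arg, fallback):
    native = JETS.get(formula_id)
    if native is not None:
        return native(arg)   # fast native path
    return fallback(arg)     # semantically identical slow path

register_jet("dec", lambda n: n - 1)
print(evaluate("dec", 10, slow_decrement))  # -> 9
```

My complaint above is about the registration step: in this sketch a human
writes the native function and asserts it matches; nothing checks it.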
Regards,
Dave
On Tue, Sep 24, 2013 at 11:30 PM, David Barbour dmbarb...@gmail.com wrote:
Yeah. Then I tried chapter two.
The idea of memoizing optimized functions (jets) is neat. As is his
approach to networking.
On Sep 24, 2013 10:54 PM, Julian Leviston jul...@leviston.net wrote:
http://www.urbit.org/2013/08/22/Chapter-0-intro.html
Interesting?
Julian
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc