Re: [fonc] Urbit, Nock, Hoon
Yeah. Then I tried chapter two. The idea of memoizing optimized functions (jets) is neat. As is his approach to networking. On Sep 24, 2013 10:54 PM, Julian Leviston jul...@leviston.net wrote: http://www.urbit.org/2013/08/22/Chapter-0-intro.html Interesting? Julian ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Urbit, Nock, Hoon
In chapter four, Guy Yarvin (author of Urbit) describes Hoon. He assigns names to glyphs, e.g. `|` is bar and `=` is tis, so the digraph `|=` is called `bartis` (or barts). The first character is a semantic category (bar is for 'gates'). The idea of a 'speakable' PL does appeal to me. I've contemplated doing something similar a few times, though I've never gotten much past fanciful contemplation. For the environment I'm describing in the other thread, I imagine voice control might become part of it. I also imagine this would be part of the personal language between a user and the environment, built up through a mix of machine learning and human learning - meeting half-way. But I think a speakable PL also needs to operate at a level a human can grok - i.e. higher artifact manipulations, raising menus, calling tools to hand, refining gestures. There's no way anyone's going to sit there and rattle off assembly, and even when we do use words they'll need to be somewhat imprecise, allowing partial search for contextually relevant semantics. I find it interesting that Yarvin's view has remained pretty stable over the last four years: http://moronlab.blogspot.com/2010/01/urbit-functional-programming-from.html Regarding 'jets', I'd be more interested if there were a way to easily guide the machine to build new ones. As is, I'd hate to depend on them. Regards, Dave
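The glyph-naming convention described above can be sketched in a few lines. This is a toy illustration using a small subset of Hoon's glyph names; the `GLYPH_NAMES` table and the `rune_name` helper are my own, not part of Hoon itself:

```python
# A few of Hoon's one-syllable glyph names, per the convention above.
GLYPH_NAMES = {
    "|": "bar", "=": "tis", "%": "cen", ":": "col",
    "-": "hep", ".": "dot", "^": "ket", "~": "sig",
}

def rune_name(rune: str) -> str:
    """Compose a digraph's spoken name by concatenating its glyph names."""
    return "".join(GLYPH_NAMES[g] for g in rune)

assert rune_name("|=") == "bartis"   # the 'gate' rune discussed in the text
assert rune_name("%-") == "cenhep"
```

The point of the scheme is that every rune gets a short, pronounceable, compositional name, which is what makes the language 'speakable' at all.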
Re: [fonc] Personal Programming Environment as Extension of Self
John Carlson yottz...@gmail.com writes: I encourage you to leverage HTML and JavaScript to the extent you need to, but beware of more understandable protocols happening at the same level or above. Sometimes giving up expressive power can be better in the short run to gain market share. That is, the best product doesn't always win. Obligatory http://en.wikipedia.org/wiki/Worse_is_better ;) Cheers, Chris
[fonc] History of AR/VR Programming Environments? [was Re: Personal Programming Env...]
I would also be interested in a history for this subject. I've read a few papers on the subject of VR programming. Well, I remember the act of reading them, but I can't recall their subjects or authors or being very impressed with them in PL terms. Does anyone else have links? On Wed, Sep 25, 2013 at 2:43 AM, Jb Labrune labr...@media.mit.edu wrote: oh! and since i post on fonc today, i would like to say that i'm very intrigued by the notion of AR programming (meaning programming in an actual VR/AR/MR environment) discussed in the recent mesh of emails. I would love to see references or historical notes on who/what/where was done on this topic. I mean, did Ivan Sutherland use his HMD system to program the specification (EDM) code of his own hardware? Did the supercockpit VRD (virtual retinal display) system have a multimodal situational awareness (SA) real-time (RT) integrated development environment (IDE) to program directly with gaze and neuronal activity? :)))
Re: [fonc] Personal Programming Environment as Extension of Self
I've been kicking around a model that may be useful here, vis à vis naming and the difficulties it implies. In short, a language may have a single global namespace that is a collision-resistant hash function. Values below, say, 256 bits are referred to as themselves; those above are referred to by the 256-bit digest of their value. Identities are also hashes, across the 'initial' value of the identity and some metadata recording the 'what where when' of that identity. An identity has a pointer to the current state/value of the identity, which is, of course, a hash of the value or the value itself depending on size. We'd also want a complete history of all values the identity has ever had, for convenience, which might easily attain git levels of complexity. Code always and only refers to these hashes, so there is never ambiguity as to which value is which. Symbols are pointer cells in the classic Lisp fashion, but the canonical 'symbol' is a hash and the text string associated with it is for user convenience. I've envisioned this as Lispy for my own convenience, though a concatenative language has much to recommend it. On Wed, Sep 25, 2013 at 3:04 AM, Eugen Leitl eu...@leitl.org wrote: On Wed, Sep 25, 2013 at 11:43:44AM +0200, Jb Labrune wrote: as a friend of some designers who think in space colors, it always strikes me that many (not all of course!) of my programmer friends think like a Turing machine, in 1D, acting as if their code is a long vector, some kind of snake which unlike the ouroboros does not eat its own tail... Today's dominant programming model still assumes human-generated and human-readable code. There are obvious cases where this is not working: GA-generated blobs for 3D-integration hardware, for instance. People are really lousy at dealing with massive parallelism and nondeterminism, yet this is not optional, at least according to the known physics of this universe.
So, let's say you have an Avogadro's number of cells in a hardware CA crystal, with an edge-of-chaos rule. Granted, you can write the transformation rule down on the back of a napkin, but what about the state residing in the volume of said crystal? The state is not really compressible. You could, of course, write a seed that grows into something which does something interesting on a somewhat larger napkin, but there's no way a human could derive that seed, or even understand how the resulting thing works. Programmers of the future are more like gardeners and farmers than architects. Programmers of the far future deal with APIs that are persons, or are themselves integral parts of the API, and no longer people.
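Putman's proposed namespace can be sketched roughly as follows. This is a toy model: the 256-bit threshold and the initial-value-plus-metadata construction follow his description, but the class and function names are my own invention:

```python
import hashlib
import time

def name(value: bytes) -> bytes:
    """Global namespace: values at or below 256 bits name themselves;
    larger values are named by their 256-bit SHA-256 digest."""
    if len(value) * 8 <= 256:
        return value
    return hashlib.sha256(value).digest()

class Identity:
    """An identity: a hash over its initial value plus 'what where when'
    metadata, holding a pointer to its current value's name and a history."""
    def __init__(self, initial: bytes, where: str = "local"):
        meta = f"{where}:{time.time()}".encode()
        self.id = hashlib.sha256(name(initial) + meta).digest()
        self.current = name(initial)
        self.history = [self.current]

    def update(self, value: bytes):
        self.current = name(value)
        self.history.append(self.current)
```

A short value like `b"hi"` is its own name, while a kilobyte blob is referred to by its 32-byte digest; either way, code referring to the name is unambiguous about which value it means.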
Re: [fonc] History of AR/VR Programming Environments? [was Re: Personal Programming Env...]
As I said below, this is no longer part of my 'recall' memory. It's been many years since I looked at the existing research on the subject, and I've lost most of my old links. A few related things I did find: http://homes.cs.washington.edu/~landay/pubs/publication-list.htm ftp://ftp.cc.gatech.edu/pub/people/blair/dissertation.pdf Landay I had looked up regarding non-speech voice control (apparently, it's 150% faster), and I recall some of the experiments being similar to programming. I've never actually read Blair's paper, just 'saved it for later' and then forgot about it. It looks fascinating. VRML sucks. X3D sucks only marginally less. If your interest is representation of structure, I suggest abandoning any fixed-form meshes and focusing on procedural generation. Procedurally generated scenegraphs - where 'nodes' can track rough size, occlusion, and rough brightness/color properties (to minimize pop-in) - can be vastly more efficient, reactive, and interactive, and can have finer 'level of detail' steps. (Voxels are also interactive, but have a relatively high memory overhead, and they're ugly.) Most importantly, PG content can also be 'adaptive' - i.e. pieces of art that partially cooperate with their context to fit themselves in. If I ever get back to this subject in earnest, I'll certainly be pursuing a few hypotheses that I haven't found opportunity to test: http://awelonblue.wordpress.com/2012/09/07/stateless-stable-arts-for-game-development/ http://awelonblue.wordpress.com/2012/07/18/unlimited-detail-for-large-animated-worlds/ But even if those don't work out, the procedural generation communities have a lot of useful things to say on the subject of VR. I haven't paid attention to VWF. If you haven't done so, you should look into Croquet and OpenCobalt. Best, Dave On Wed, Sep 25, 2013 at 10:30 AM, danm d...@zen3d.com wrote: Hi David, Moving this outside the FONC universe, although your response might also be of interest to other FONCers.
Can you share with me your findings on VR programming? I'm aware of VRML and X3D (and its related tech) as well as VWF (Virtual Worlds Framework), but I'm always interested in expanding my horizons, since this topic is near and dear to my heart. Thanks. cheers, danm
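Barbour's point about procedurally generated scenegraph nodes tracking rough size and brightness can be sketched as a tiny level-of-detail scheme. This is a minimal toy, not any particular engine's API; the node fields, the angular-size cutoff, and the proxy representation are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PGNode:
    """A procedurally generated scenegraph node. Cheap summary properties
    are available without expanding children, so a distant node can be
    drawn as a proxy (minimizing pop-in) instead of generating geometry."""
    radius: float                          # rough bounding size
    brightness: float                      # rough brightness/color summary
    expand: Callable[[], List["PGNode"]]   # generates children on demand

    def render(self, distance: float, detail_cutoff: float = 0.05):
        # Crude LOD test: apparent (angular) size below the cutoff means
        # the node renders as a proxy without expanding its subtree.
        if self.radius / max(distance, 1e-6) < detail_cutoff:
            return [("proxy", self.radius, self.brightness)]
        out = []
        for child in self.expand():
            out += child.render(distance)
        return out
```

Because `expand` is a function rather than stored geometry, the finer detail steps exist only when a viewer gets close enough to need them, which is where the efficiency over fixed meshes comes from.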
Re: [fonc] Personal Programming Environment as Extension of Self
What you first suggest is naming for compression and caching. I think that's an okay performance hack (it's one I've contemplated before), but I wouldn't call it naming. Names generally need to bind values that are maintained independently or cannot be known at the local place or time. I think that what you call identity, I might call naming. It is not clear to me what you hope to gain from the global namespace, or by hashing the identities (e.g. what do you gain relative to full URLs?). Maybe if you're pursuing a DHT or Chord-like system, identity might be a great way to avoid depending on centralized domain name services. But we also need to be careful about any values we share through such models, due to security concerns and the overheads involved. I would tend to imagine only physical devices should be represented in this manner. Any system that requires keeping a complete history for large, automatically maintained objects has already doomed itself. We can handle it for human-managed code - but only because humans are slow, our input is low bandwidth, and the artifacts we build tend to naturally stabilize. None of those apply to machine-managed objects. Exponential decay of history (http://awelonblue.wordpress.com/2013/01/24/exponential-decay-of-history-improved/) provides a better alternative for keeping a long-running history (for both humans and devices). Anyhow, can you explain what your global namespace offers?
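The general shape of an exponentially decaying history can be sketched as a multi-level log that keeps recent entries at full resolution and thins older ones. This is a from-scratch toy, not the scheme from the linked post; the `per_level` capacity and the keep-one/drop-one promotion policy are my own assumptions:

```python
class DecayingHistory:
    """A bounded history whose snapshots thin out exponentially with age:
    the newest entries are kept at full resolution, and each older 'level'
    holds entries at half the density of the one before it."""
    def __init__(self, per_level=4):
        self.per_level = per_level
        self.levels = [[]]  # levels[0] = newest, full resolution

    def append(self, entry):
        self.levels[0].append(entry)
        i = 0
        while len(self.levels[i]) > self.per_level:
            # Level overflow: take the two oldest entries,
            # keep one (promoted up a level) and discard the other.
            kept = self.levels[i].pop(0)
            self.levels[i].pop(0)  # dropped (a real system might merge)
            if i + 1 == len(self.levels):
                self.levels.append([])
            self.levels[i + 1].append(kept)
            i += 1

    def snapshot_count(self):
        return sum(len(level) for level in self.levels)
```

After N appends this retains only O(log N) snapshots (roughly `per_level` entries per level), which is why it stays viable for long-running machine-managed state where a complete history would not.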
Re: [fonc] History of AR/VR Programming Environments? [was Re: Personal Programming Env...]
Thanks for the pointers. X3D is basically an XML encoding of VRML, which is basically a declarative scene graph description. To the extent that you can procedurally generate this, it's all the same thing. VWF can be considered the successor to Croquet and OpenCobalt (albeit in HTML5/WebGL form). David Smith was/is a key contributor to all of these systems. (Of course, we know that there were other contributors to Croquet as well :) cheers, danm
Re: [fonc] Personal Programming Environment as Extension of Self
Well, since we're talking about a concatenative bytecode, I'll try to speak Forthfully. Normally when we define a word in a stack language, we make up an ASCII symbol and say this symbol refers to all these other symbols, in this definite order. Well and good, with two potential problems: we have to make up a symbol, and that symbol might conflict with someone else's symbol. Name clashes are an obvious problem. The fact that we must make up a symbol is less obviously a problem, except that the vast majority of our referents should be generated by a computer. A computer-generated symbol may as well be a hash, at which point a user-generated symbol may as well be a hash also, in a special case where the data hashed includes an ASCII handle for user convenience. This is fine for immutable values, but for identities (referents to a series of immutable values, essentially), we need slightly more than this: a master hash, taken from the first value the identity refers to, the time of creation, and perhaps other useful information. This master hash then points to the various values the identity refers to, as they change. There are a few things that are nice about this approach, all of which derive from the fact that identical values have identical names and that relatively complex relationships between identities and values may be established and modified programmatically. As an example, if I define a foo function which is identical to someone else's bar function, they should have the same name (hash) despite having different handles. With a little work, we should be able to retrieve all the contexts where a value appears, as well as all the handles and other metadata associated with that value in those contexts.
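The claim that identical definitions get the same name regardless of handle can be demonstrated directly. A toy sketch: `name_of`, `define`, and the dictionary layout are my own illustration of the scheme, with SHA-256 standing in for the collision-resistant hash:

```python
import hashlib

def name_of(definition: bytes) -> str:
    """Canonical name: the SHA-256 digest of the word's definition."""
    return hashlib.sha256(definition).hexdigest()

dictionary = {}

def define(handle: str, definition: bytes) -> str:
    """Register a word. The handle is mere metadata for user convenience;
    the definition's hash is the real name."""
    h = name_of(definition)
    entry = dictionary.setdefault(h, {"definition": definition, "handles": set()})
    entry["handles"].add(handle)
    return h

square = b"dup *"              # a concatenative definition
foo = define("foo", square)    # one user's handle for it
bar = define("bar", square)    # another user's handle for the same code
assert foo == bar              # identical values, identical names
```

Looking up `dictionary[foo]["handles"]` then yields both `"foo"` and `"bar"`, which is exactly the "retrieve all the handles associated with a value" property described above.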
Re: [fonc] Personal Programming Environment as Extension of Self
What we gain relative to URLs is that a hash is not arbitrary. If two programs are examining the same piece of data, say a sound file, it would be nice if they came to the same, independent conclusion as to what to call it. Saving total state at all times is not necessary, but there are times when it may be convenient. If I were to enter 3 characters a second into a computer for 40 years, assuming a byte per character, I'd have generated ~3.8 GB of information, which would fit in memory on my laptop. I'd say that user input, at least, is well worth saving.
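For what it's worth, the back-of-envelope figure above checks out in decimal gigabytes, though it comes to about 3.5 GiB in binary units:

```python
# 3 characters/second, one byte per character, sustained for 40 years:
seconds_per_year = 60 * 60 * 24 * 365.25
total_bytes = 3 * seconds_per_year * 40
print(total_bytes / 1e9)    # ≈ 3.79 (decimal GB)
print(total_bytes / 2**30)  # ≈ 3.53 (binary GiB)
```

Either way, the conclusion stands: a lifetime of keystrokes fits comfortably in the RAM of a 2013 laptop.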
Re: [fonc] Personal Programming Environment as Extension of Self
If we're just naming values, I'd like to avoid the complexity and just share the value directly. Rather than having a foo function vs. a bar function, we'll just have a block of anonymous code. If we have a large sound file that gets a lot of references, perhaps in that case explicitly using a content-distribution and caching model would be appropriate, though it might be better to borrow from Tahoe-LAFS for security reasons. For identity, I prefer to formally treat uniqueness as a semantic feature, not a syntactic one. Uniqueness can be formalized using substructural types, i.e. we need an uncopyable (affine-typed) source of unique values. I envision a uniqueness source being used for: 1) creating unique sealer/unsealer pairs; 2) creating initially 'exclusive' bindings to external state; 3) creating GUID-like values that afford equality testing. In a sense, these are three different responsibilities for identity. Each involves different types. It seems what you're calling 'identity' corresponds to item 2. If I assume those responsibilities are handled, along with the elimination of local variable and parameter names through tacit programming, the remaining uses of 'names' I'm likely to encounter are: names for dynamic scope, config, or implicit params; names for associative lookup in shared spaces; and names as human shorthand for values or actions. It is this last item that I think most directly corresponds to what Sean and Matt call names, though there might also be a bit of 'independent maintenance' (external state via the programming environment) mixed in. Regarding shorthand, I'm quite interested in alternative designs, such as binding human names to values based on pattern-matching (so when you write 'foo' I might read 'bar'), but Sean's against this due to out-of-band communication concerns. To address those concerns, use of an extended dictionary that tracks different origins for words seems reasonable. Regarding your 'foo' vs. 'bar' equivalence argument, I believe hashing is not associative. Ultimately, `foo bar baz` might have the same expansion-to-bytecode as `nitwit blubber oddment tweak` due to different factorings, but I think it will have a different hash, unless you completely expand and rebuild the 'deep' hashes each time. Of course, we might want to do that anyway, i.e. for optimization across words. If I were to enter 3 characters a second into a computer for 40 years, assuming a byte per character, I'd have generated ~3.8 GiB of information, which would fit in memory on my laptop. I'd say that user input at least is well worth saving. Huh, I think you underestimate how much data you generate, and how much that will grow with different input devices. Entering characters on a keyboard is minor compared to the info-dump caused by a LEAP Motion. The mouse is cheap when it's sitting still, but can model spatial-temporal patterns. If you add information from your cell phone, you've got GPS, accelerometers, temperature, touch, voice. If you get some AR setup, you'll have six-axis motion for your head, GPS, voice, and gestures. It adds up. But it's still small compared to what devices can input if we kept a stream of microphone input or camera visual data. I think any history will inevitably be lossy. But I agree that it would be convenient to keep high-fidelity data available for a while, and preferably extract the most interesting operations.
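The observation that hashing is not associative is easy to demonstrate. A toy sketch using SHA-256, with two hypothetical "factorings" of the same expanded bytecode:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# The same expanded bytecode, b"dup*", hashed two ways:
a, b_ = b"dup", b"*"
flat = h(a + b_)            # hash of the full expansion
nested = h(h(a) + h(b_))    # hash built from the parts' hashes
assert flat != nested       # different factorings yield different names
```

So two words that expand to identical bytecode but are factored through different sub-words get different hashes, unless, as Barbour notes, the system fully expands every definition and rebuilds the 'deep' hash from the flat bytecode each time.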