Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
On Mon, Apr 15, 2013 at 08:27:08AM -0700, David Barbour wrote:

> On Mon, Apr 15, 2013 at 6:46 AM, Eugen Leitl eu...@leitl.org wrote:
>
> > Few ns are effective eternities in terms of modern gate delays. I presume the conversation was about synchronization, which should be avoided in general unless absolutely necessary, and not done directly in hardware.
>
> Synchronization always has bounded error tolerances - which may differ by

Synchronization at multiple scales is essential to the functioning of the mammalian brain, yet the fundamental elements are clockless and operate asynchronously. Mixed positive/negative feedback loops can synchronize fine across large substrates, though they take a bit of time to drift into synchrony.

A synchronization requirement creates serial sections of code, and verification in a spatially distributed assembly scales badly with the number of instances to be synchronized. This is why coherent caches and kilocores don't mix. In many cases you can deal with small inconsistencies and nondeterministic results, provided they're good enough. If you keep a record, you can make the result deterministic post facto, by reaching back in time.

> many orders of magnitude, based on application. Synchronized audio-video, for example, generally has a tolerance of about 10 milliseconds - large enough to accomplish it in software. But really good AV software tries to push it below 1 ms. Synchronization for modern CPUs has extremely tight tolerances (just like everything else about modern CPUs). But you should not only think about CPUs or hardware when you think 'synchronization'.

I'm most assuredly not thinking about CPUs or 'software', but about fundamental limits of computation, where the light cones are quite literally true: a light cone is how an adjacent system learns about a state change.
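The "deterministic post facto" idea above can be sketched in a few lines: each node tolerates inconsistency at run time but logs every event with a local timestamp, and one deterministic global order is reconstructed afterwards by merging the logs. This is only an illustrative sketch; the function names and the tie-breaking rule (timestamp, then node id) are invented for this example, not taken from any system in the thread:

```python
import heapq

def record(log, timestamp, node_id, event):
    """Append a locally timestamped event to one node's log."""
    log.append((timestamp, node_id, event))

def replay(*logs):
    """Merge per-node logs into one deterministic global order.

    Each log is already sorted by local timestamp; ties across nodes are
    broken by node id, so every replay of the same logs yields the same
    sequence ("reaching back in time").
    """
    return list(heapq.merge(*logs))

# Three nodes record events concurrently, each against its own local clock.
a, b, c = [], [], []
record(a, 1.0, "a", "x=1")
record(b, 1.5, "b", "y=2")
record(a, 2.0, "a", "x=3")
record(c, 1.5, "c", "z=4")

history = replay(a, b, c)
```

Any node can run `replay` over the same logs at any later time and obtain the identical history, even though the live execution was nondeterministic.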
If you want to maximize the operation rate - such as the frequency of refreshes across a volume occupied by autonomous computation nodes or cells - then you have to keep those who need to know downwind of your light cone. As there's no way to rearrange atoms on that time scale on demand, you just have to live with a fixed geometry and rearrange your living objects instead.

> You say 'synchronization should be avoided unless absolutely necessary'. I disagree; a blanket statement like that is too extreme. Sometimes synchronized is more efficient even if it is not 'absolutely' necessary - it reduces the need to keep state, which has its own expense. It Depends.

The only way to keep state is to copy it into adjacent computation or storage elements. In a sense, IBM's racetrack memory directly implements a shift register or wrap-around FIFO - and even one alignable orthogonally to the semiconductor face, so it doesn't crowd out your adjacent units of computation (as long as you're still in flatland; in 3d you're out of options).

> In any case, the conversation wasn't even about synchronization (which means to CAUSE to be synchronized). It was simply about 'synchronized' - whether things can happen at the same time or rate (which often has natural causes). And synchronization is never about clocks. It's the reverse, really.

Interestingly, it is provably impossible to synchronize oscillators in a frame-dragging spacetime. Luckily, in the case of the Earth the effect is negligible.

___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
On Sun, Apr 14, 2013 at 03:03:12PM -0700, David Barbour wrote:

> And I've seen Grace Hopper's video on nanoseconds before. If you carry a piece of wire of the right length, it isn't difficult to say where light carrying information will be after a few nanoseconds. :D

Few ns are effective eternities in terms of modern gate delays. I presume the conversation was about synchronization, which should be avoided in general unless absolutely necessary, and not done directly in hardware. Ditto clocks, as distributed systems of oscillators would tend to sync up due to local coupling. No need for a centralized, global clock. Purely asynchronous CPUs show clocks are not necessary, and even for clocked systems there are ways to save on power and distribution delays:

http://spectrum.ieee.org/semiconductors/processors/powersaving-clock-scheme-in-new-pcs
http://www.extremetech.com/computing/119507-amd-to-use-resonant-clock-mesh-to-push-trinity-above-4ghz
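The claim that locally coupled oscillators drift into synchrony without any global clock can be illustrated with a minimal Kuramoto-style simulation. This is a hypothetical sketch: the chain topology, coupling constant, time step, and iteration count are arbitrary choices made for the example, not a model of any real clock mesh:

```python
import math
import random

def step(phases, coupling=1.0, dt=0.05, freq=1.0):
    """One Euler step: each oscillator is nudged only by its chain neighbours."""
    n = len(phases)
    out = []
    for i, p in enumerate(phases):
        pull = 0.0
        if i > 0:
            pull += math.sin(phases[i - 1] - p)
        if i < n - 1:
            pull += math.sin(phases[i + 1] - p)
        out.append(p + (freq + coupling * pull) * dt)
    return out

def coherence(phases):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchronized."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(1)
phases = [random.uniform(0, 2 * math.pi) for _ in range(16)]
r_start = coherence(phases)       # random phases: low coherence
for _ in range(4000):
    phases = step(phases)
r_end = coherence(phases)         # local coupling has pulled them into sync
```

Note that no oscillator ever sees a global state; each one only compares itself to its immediate neighbours, yet the whole chain ends up phase-locked, which is Eugen's point about distributed oscillators above.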
Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
On Mon, Apr 15, 2013 at 6:46 AM, Eugen Leitl eu...@leitl.org wrote:

> Few ns are effective eternities in terms of modern gate delays. I presume the conversation was about synchronization, which should be avoided in general unless absolutely necessary, and not done directly in hardware.

Synchronization always has bounded error tolerances - which may differ by many orders of magnitude, based on application. Synchronized audio-video, for example, generally has a tolerance of about 10 milliseconds - large enough to accomplish it in software. But really good AV software tries to push it below 1 ms. Synchronization for modern CPUs has extremely tight tolerances (just like everything else about modern CPUs). But you should not only think about CPUs or hardware when you think 'synchronization'.

You say 'synchronization should be avoided unless absolutely necessary'. I disagree; a blanket statement like that is too extreme. Sometimes synchronized is more efficient even if it is not 'absolutely' necessary - it reduces the need to keep state, which has its own expense. It Depends.

In any case, the conversation wasn't even about synchronization (which means to CAUSE to be synchronized). It was simply about 'synchronized' - whether things can happen at the same time or rate (which often has natural causes). And synchronization is never about clocks. It's the reverse, really.
Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
David Barbour dmbarb...@gmail.com writes:

> On Sun, Apr 14, 2013 at 1:23 PM, Pascal J. Bourguignon p...@informatimago.com wrote:
>
> > David Barbour dmbarb...@gmail.com writes:
> >
> > > On Apr 14, 2013 9:46 AM, Tristan Slominski tristan.slomin...@gmail.com wrote:
> > >
> > > > A mechanic is a poor example because frame of reference is almost irrelevant in the Newtonian view of physics.
> > >
> > > The vast majority of information processing technologies allow you to place, with fair precision, every bit in the aether at any given instant. The so-called "Newtonian" view will serve more precisely and accurately than dubious metaphors to light cones.
> >
> > What are you talking about???
>
> I don't know how to answer that without repeating myself, and in this case it's a written conversation. Do you have a more specific question?
>
> Hmm. At a guess, I'll provide an answer that might or might not be to the real question you intended: the air-quotes around "Newtonian" are there because (if we step back in context a bit) Tristan is claiming that any knowledge of synchronization is somehow 'privileged' - despite the fact that nearly all our technology relies on this knowledge, that it's readily available at a glance, and that it does not depend on anything Newtonian.
>
> And I've seen Grace Hopper's video on nanoseconds before. If you carry a piece of wire of the right length, it isn't difficult to say where light carrying information will be after a few nanoseconds. :D

I think that one place where light cone considerations are involved is with caches in multi-processor systems. If all processors could have instantaneous knowledge of what the views of the other processors are about memory, there wouldn't be any cache coherence problem. But light speed - information transmission speed - is not infinite, hence the appearance of light cones or light-cone-like phenomena.

-- 
__Pascal Bourguignon__
http://www.informatimago.com/
A bad day in () is better than a good day in {}.
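Pascal's point can be made concrete with a toy model in which a write becomes visible to other processors only after a fixed propagation delay; inside that window, views of memory genuinely differ. The class, method names, and delay value below are invented purely for illustration and do not model any real coherence protocol:

```python
class DelayedMemory:
    """Shared memory where writes propagate with a fixed, bounded delay."""

    def __init__(self, delay):
        self.delay = delay       # steps before a write is visible remotely
        self.committed = {}      # values visible to other processors
        self.in_flight = []      # (visible_at, key, value) still propagating
        self.now = 0

    def write(self, key, value):
        """Issue a write; it leaves the writer's light cone only gradually."""
        self.in_flight.append((self.now + self.delay, key, value))

    def read(self, key):
        """Read as seen by a *remote* processor: only committed values."""
        return self.committed.get(key)

    def tick(self):
        """Advance time one step, committing writes whose delay has elapsed."""
        self.now += 1
        still = []
        for visible_at, key, value in self.in_flight:
            if visible_at <= self.now:
                self.committed[key] = value
            else:
                still.append((visible_at, key, value))
        self.in_flight = still

mem = DelayedMemory(delay=2)
mem.write("x", 1)
stale = mem.read("x")        # other processors still see nothing
mem.tick()
mem.tick()
fresh = mem.read("x")        # after the bounded delay, the write is visible
```

The "cache coherence problem" is exactly the gap between `stale` and `fresh`: during the propagation window, different observers hold different, equally legitimate views.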
Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
On Mon, Apr 15, 2013 at 3:10 PM, Pascal J. Bourguignon p...@informatimago.com wrote:

> I think that one place where light cone considerations are involved is with caches in multi-processor systems. If all processors could have instantaneous knowledge of what the views of the other processors are about memory, there wouldn't be any cache coherence problem. But light speed, or information transmission speed, is not infinite, hence the appearance of light cones or light-cone-like phenomena.

Many people seem to jump from one extreme to the other - from instantaneous transfer to unbounded delay - without seriously considering the useful middle: predictable, bounded delay. The middle has many models (including cellular automata) and is capable of supporting synchronous/real-time distributed systems. It's also where you'll find light cones... and many interesting, efficient synchronization patterns.

Interestingly, cache coherence is not a problem if your programming model *doesn't* assume instantaneous transfer, because you'd end up explicitly modeling the delays and thus managing the distinct views in a formal manner - using distinct locations in memory, and thus distinct cache lines. (I believe this contributes to the success of modeling multi-processor systems as distributed systems.)

Regards,
Dave
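The "useful middle" of predictable, bounded delay - and the light cones it produces - shows up directly in a one-dimensional cellular automaton, where information can move at most one cell per step. A minimal sketch (the update rule, a simple OR with neighbours, is chosen only for illustration):

```python
def step(cells):
    """One automaton step: each cell ORs with its immediate neighbours."""
    n = len(cells)
    return [
        cells[i]
        | (cells[i - 1] if i > 0 else 0)
        | (cells[i + 1] if i < n - 1 else 0)
        for i in range(n)
    ]

def light_cone(n, origin, t):
    """Indices reachable after t steps from a single lit cell at `origin`."""
    cells = [0] * n
    cells[origin] = 1
    for _ in range(t):
        cells = step(cells)
    return [i for i, v in enumerate(cells) if v]

# After 3 steps, a change at cell 10 can have reached only cells 7..13:
frontier = light_cone(21, 10, 3)
```

The delay is not merely bounded but exactly predictable: after `t` steps the influenced region is precisely the cells within distance `t`, which is the discrete analogue of a light cone.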
[fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
> > > I believe our world is 'synchronous' in the sense of things happening at the same time in different places...
> >
> > It seems to me that you are describing a privileged frame of reference.
>
> How is it privileged? Would you consider your car mechanic to have a 'privileged' frame of reference on our universe because he can look down at your vehicle's engine and recognize when components are in or out of synch? Is it not obviously the case that, even while out of synch, the different components are still doing things at the same time? Is there any practical or scientific merit for your claim? I believe there is abundant scientific and practical merit to models and technologies involving multiple entities or components moving and acting at the same time.

A mechanic is a poor example because frame of reference is almost irrelevant in the Newtonian view of physics. Obvious things in the Newtonian view become very wrong in the Einsteinian take on physics once we get to extremely large masses or extremely fast speeds. In my opinion, the pattern of information distribution in actor systems via messages resembles the Einsteinian view much more closely than the Newtonian one. When an actor sends messages, there is an information light cone that spreads from that actor to whatever actors it will reach. The Newtonian view is not helpful in this environment.

Within an actor system, after a creation event, an actor is limited to knowing the world through the messages it receives. This seems to me to be purely empirical knowledge (i.e., coming only from sensory experience). This goes back to what you highlighted about my point of view: "That only matters to people who want as close to the Universe as possible." So yes, you're right, I agree. I would probably remove "only" from that statement, but otherwise I accept your assertion.
On Sat, Apr 13, 2013 at 1:29 PM, David Barbour dmbarb...@gmail.com wrote:

> On Sat, Apr 13, 2013 at 9:01 AM, Tristan Slominski tristan.slomin...@gmail.com wrote:
>
> > I think we don't know whether time exists in the first place.
>
> That only matters to people who want as close to the Universe as possible. To the rare scientist who is not also a philosopher, it only matters whether time is effective for describing and predicting behavior about the universe, and the same is true for notions of particles, waves, energy, entropy, etc.
>
> > > I believe our world is 'synchronous' in the sense of things happening at the same time in different places...
> >
> > It seems to me that you are describing a privileged frame of reference.
>
> How is it privileged? Would you consider your car mechanic to have a 'privileged' frame of reference on our universe because he can look down at your vehicle's engine and recognize when components are in or out of synch? Is it not obviously the case that, even while out of synch, the different components are still doing things at the same time? Is there any practical or scientific merit for your claim? I believe there is abundant scientific and practical merit to models and technologies involving multiple entities or components moving and acting at the same time.
>
> > I've built a system that does what you mention is difficult above. It incorporates autopoietic and allopoietic properties, enables object capability security, and has hints of antifragility, all guided by the actor model of computation.
>
> Impressive. But with Turing-complete models, the ability to build a system is not a good measure of distance. How much discipline (best practices, boilerplate, self-constraint) and foresight (or up-front design) would it take to develop and use your system directly from a pure actors model?
>
> > I don't want programming to be easier than physics.
>
> Why? First, this implies that physics is somehow difficult, and that there ought to be a better way. Physics is difficult. More precisely: setting up physical systems to compute a value or accomplish a task is very difficult. Measurements are noisy. There are many non-obvious interactions (e.g. heat, vibration, covert channels). There are severe spatial constraints, locality constraints, energy constraints. It is very easy for things to 'go wrong'. Programming should be easier than physics so it can handle higher levels of complexity. I'm not suggesting that programming should violate physics, but programs shouldn't be subject to the same noise and overhead. If we had to think about adding fans and radiators to our actor configurations to keep them cool, we'd hardly get anything done.
>
> I hope you aren't so hypocritical as to claim that 'programming shouldn't be easier than physics' in one breath and then preach 'use actors' in another. Actors are already an enormous simplification of physics; they even simplify away the medium of communication. Whatever happened to the pursuit of Maxwell's equations for Computer Science? Simple is not the same as easy. Simple is also not the same as
Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)
On Sun, Apr 14, 2013 at 1:23 PM, Pascal J. Bourguignon p...@informatimago.com wrote:

> David Barbour dmbarb...@gmail.com writes:
>
> > On Apr 14, 2013 9:46 AM, Tristan Slominski tristan.slomin...@gmail.com wrote:
> >
> > > A mechanic is a poor example because frame of reference is almost irrelevant in the Newtonian view of physics.
> >
> > The vast majority of information processing technologies allow you to place, with fair precision, every bit in the aether at any given instant. The so-called "Newtonian" view will serve more precisely and accurately than dubious metaphors to light cones.
>
> What are you talking about???

I don't know how to answer that without repeating myself, and in this case it's a written conversation. Do you have a more specific question?

Hmm. At a guess, I'll provide an answer that might or might not be to the real question you intended: the air-quotes around "Newtonian" are there because (if we step back in context a bit) Tristan is claiming that any knowledge of synchronization is somehow 'privileged' - despite the fact that nearly all our technology relies on this knowledge, that it's readily available at a glance, and that it does not depend on anything Newtonian.

And I've seen Grace Hopper's video on nanoseconds before. If you carry a piece of wire of the right length, it isn't difficult to say where light carrying information will be after a few nanoseconds. :D