On Mon, Apr 15, 2013 at 08:27:08AM -0700, David Barbour wrote:
> On Mon, Apr 15, 2013 at 6:46 AM, Eugen Leitl <[email protected]> wrote:
>
> > Few ns are effective eternities in terms of modern gate delays.
> > I presume the conversation was about synchronization, which
> > should be avoided in general unless absolutely necessary, and
> > not done directly in hardware.
>
> Synchronization always has bounded error tolerances - which may differ by
Synchronization at multiple scales is essential to the functioning of the
mammalian brain, yet its fundamental elements are clockless and operate
asynchronously. Mixed positive/negative feedback loops synchronize well
across large substrates, though they take some time to drift into synchrony
(a coupled-oscillator sketch of this is appended at the end of this message).

A synchronization requirement creates serial sections of code, and
verification in a spatially distributed assembly scales badly with the number
of instances to be synchronized. This is why coherent caches and kilocores
don't mix. In many cases you can tolerate small inconsistencies and
nondeterministic results, provided they're good enough. If you keep a record,
you can make the outcome deterministic post facto, by reaching back in time
(see the replay sketch below).

> many orders of magnitude, based on application. Synchronized audio-video,
> for example, generally has a tolerance of about 10 milliseconds - large
> enough to accomplish it in software. But really good AV software tries to
> push it below 1ms. Synchronization for modern CPUs has extremely tight
> tolerances (just like everything else about modern CPUs). But you should
> not only think about CPUs or hardware when you think 'synchronization'.

I'm most assuredly not thinking about CPUs or 'software', but about the
fundamental limits of computation, where light cones are quite literal: they
govern how an adjacent system learns about a state change. If you want to
maximize the operation rate, such as the frequency of refreshes across a
volume occupied by autonomous computation nodes or cells, then you have to
keep those who need to know downwind of your light cone (the sketch below
puts rough numbers on that bound). As there's no way to rearrange atoms on
that time scale on demand, you just have to live with a fixed geometry and
rearrange your living objects instead.

> You say 'synchronization should be avoided unless absolutely necessary'. I
> disagree; a blanket statement like that is too extreme. Sometimes
> synchronized is more efficient even if it is not 'absolutely' necessary -
> it reduces need to keep state, which has its own expense.

It depends. The only way to keep state is to copy it into adjacent
computation or storage elements. In a sense, IBM's racetrack memory directly
implements a shift register or wrap-around FIFO (sketched below), and even
one that can be aligned orthogonally to the semiconductor face, so it doesn't
crowd out your adjacent units of computation (as long as you're still in
flatland; in 3D you're out of options).

> In any case, the conversation wasn't even about synchronization (which
> means "to CAUSE to be synchronized"). It was simply about 'synchronized' -
> whether things can happen at the same time or rate (which often has natural
> causes).
>
> And synchronization is never about clocks. It's the reverse, really.

Interestingly, it is provably impossible to synchronize oscillators in a
frame-dragging spacetime context. Luckily, in the case of the Earth the
effect is negligible.
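To put the "drift into synchrony" remark above on a concrete footing, here is
a minimal Python sketch using the Kuramoto mean-field model as a stand-in for
the mixed feedback loops; the model choice, the coupling strength K and the
frequency spread are illustrative assumptions on my part. The order parameter
r runs from 0 (incoherent) to 1 (phase-locked):

import cmath, math, random

random.seed(0)
N, K, dt, steps = 100, 2.0, 0.01, 2001
freqs = [random.gauss(0.0, 0.5) for _ in range(N)]       # natural frequencies
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

def order_parameter(ph):
    """r = |mean of exp(i*theta)|: 0 = incoherent, 1 = fully phase-locked."""
    return abs(sum(cmath.exp(1j * p) for p in ph) / len(ph))

for step in range(steps):
    mean_field = sum(cmath.exp(1j * p) for p in phases) / N
    r, psi = abs(mean_field), cmath.phase(mean_field)
    # Each oscillator is clockless and local; it only feels a pull toward the
    # current mean phase, scaled by coupling K and coherence r.
    phases = [p + dt * (w + K * r * math.sin(psi - p))
              for p, w in zip(phases, freqs)]
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f}   r = {order_parameter(phases):.3f}")

With K well above the critical coupling, r climbs toward 1 over a few
simulated seconds rather than instantly; lower K and the population never
locks at all.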
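The "deterministic post facto" idea can likewise be reduced to a
logging-and-replay sketch: let nodes run asynchronously and accept whatever
arrival order they see, but stamp every event at its source, and merge the
logs by stamp afterwards to reconstruct one agreed history. The Event/replay
names and the Lamport-style integer stamp are my illustrative choices:

from dataclasses import dataclass
import random

@dataclass(frozen=True)
class Event:
    logical_time: int    # stamp assigned at the source, e.g. a Lamport clock
    node_id: int         # ties broken by node id so the order is total
    payload: str

def replay(per_node_logs):
    """Merge per-node logs into one deterministic total order, after the fact."""
    merged = [e for log in per_node_logs for e in log]
    return sorted(merged, key=lambda e: (e.logical_time, e.node_id))

# Four nodes, each recording three events; arrival order is nondeterministic
# (shuffled), but the replayed history is always the same.
logs = [[Event(t, n, f"node{n}/step{t}") for t in range(3)] for n in range(4)]
random.shuffle(logs)
print([e.payload for e in replay(logs)])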
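On the light-cone point, the bound is easy to put numbers on: a region of
diameter d cannot be refreshed coherently faster than roughly (signal
speed)/d, because the far side has not yet heard about the change. The
geometries and the 0.5c on-chip signal speed below are illustrative
assumptions:

C = 299_792_458.0                  # speed of light in vacuum, m/s

def max_coherent_rate_hz(diameter_m, signal_speed_fraction=1.0):
    """Upper bound: one signal traversal of the region per globally visible update."""
    return (C * signal_speed_fraction) / diameter_m

for label, d, v in [("1 cm die (on-chip wires, ~0.5c)", 0.01, 0.5),
                    ("30 cm board (optical, ~c)", 0.30, 1.0),
                    ("100 m machine hall (optical)", 100.0, 1.0)]:
    print(f"{label:35s} <= {max_coherent_rate_hz(d, v) / 1e9:8.3f} GHz")

Anything faster than those rates has to give up on global agreement and
settle for the downwind, local view described above.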
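And for the racetrack aside, here is the access pattern it implies, reduced
to a software analogy: a fixed ring of cells shifted past a single read/write
port, i.e. a wrap-around FIFO or shift register. The class and method names
are mine, and the analogy deliberately ignores the device physics:

class RacetrackLoop:
    """Software analogy of a closed racetrack: the data moves, the cells don't."""
    def __init__(self, length):
        self.cells = [None] * length   # fixed geometry, fixed port position
        self.head = 0                  # index currently under the read/write port

    def shift(self):
        # Analog of pushing domain walls one position along the track.
        self.head = (self.head + 1) % len(self.cells)

    def write(self, value):
        self.cells[self.head] = value

    def read(self):
        return self.cells[self.head]

loop = RacetrackLoop(8)
for i in range(8):                     # fill the loop through the single port
    loop.write(i)
    loop.shift()

readback = []
for _ in range(8):                     # one full revolution reads it all back
    readback.append(loop.read())
    loop.shift()
print(readback)                        # [0, 1, 2, 3, 4, 5, 6, 7]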
