Re: [fonc] automation

2010-07-14 Thread Ryan Mitchley
Julian Leviston wrote:

 This is essentially what I refer to when I talk about the Planck size of 
 algorithms. You can't get any simpler than a certain size, and therefore not 
 only is it incredibly understandable, it simply won't break.

Say we have a Maximum Length Sequence (MLS) constructed using a shift register
of length N and a series of XOR gates. The MLS cycles through 2^N - 1
states. Imagine, now, that the states are interpreted as byte code in
some language.
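
To make this concrete, here is a minimal sketch of such a register in Python (my own illustration, not from the original discussion): a 4-bit Fibonacci LFSR with taps (4, 3), corresponding to the primitive polynomial x^4 + x^3 + 1, which cycles through all 2^4 - 1 = 15 nonzero states.

def lfsr_states(seed=0b0001, taps=(4, 3), n=4):
    """Yield successive states of an n-bit Fibonacci LFSR until it cycles."""
    state = seed
    for _ in range(1 << n):            # safety bound: at most 2^n steps
        yield state
        fb = 0
        for t in taps:                 # XOR the tapped bits together
            fb ^= (state >> (n - t)) & 1
        state = ((state >> 1) | (fb << (n - 1))) & ((1 << n) - 1)
        if state == seed:              # back at the start: full cycle seen
            return

print(len(list(lfsr_states())))        # 15, i.e. 2^4 - 1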

As an inverse problem, it may be possible to find a shift register
factorisation for a given algorithm implemented in byte code. I would
argue that the reduced representation (the register length N plus the
encoding of the XOR gates) would be very small, but not understandable.

This is, of course, analogous to symbol representation and compression
in information theory: a very information-dense (compressed)
communication becomes indistinguishable from noise.
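
A quick way to see this with stock Python (just an illustration; the sample data and helper are arbitrary): compare the byte-level entropy of some redundant text with that of its zlib-compressed form. The compressed stream sits much closer to the 8 bits/byte of uniform noise.

import math
import zlib
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte (max 8)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = b" ".join(str(i).encode() for i in range(20000))
packed = zlib.compress(text, 9)

print(byte_entropy(text))    # low: only digits and spaces, highly redundant
print(byte_entropy(packed))  # much closer to 8 bits/byte, i.e. noise-like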

How do you determine that a very dense program is, in fact, understandable?


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] automation

2010-07-14 Thread BGB
I think it is mostly because the internet is composed of well-defined / 
agreed-upon protocols and data formats.

Each part is largely decoupled from the others: it sends and accepts data, and 
it responds to whatever is happening.

Often, the protocols are very much layered, with most layers not concerning 
themselves either with what is above them (such as application data or 
app-specific protocols) or with what is below (such as hardware-specific 
protocols, signaling over the wire, ...).

The hardware may be concerned with wire signaling and low-level data 
transmission details, but it need not care what sort of data it is sending or 
receiving (it only cares that the bytes go over the wires to the other end).
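
As a concrete sketch (my own toy example in Python, not anything from the thread): an echo server that moves bytes without inspecting them, while the client hands bytes to the socket layer and never touches TCP retransmission, IP routing, or wire signaling.

import socket
import threading

def echo_once(srv):
    # accept one connection and echo its bytes back, without caring
    # what the bytes mean
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))             # let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

# the client just hands bytes to the socket layer; everything below
# that layer is handled without its involvement
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"any payload at all")
print(cli.recv(1024))                  # b'any payload at all'
cli.close()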

Then, at the higher levels, we see app-specific data, files, ... being 
transmitted.

Moving up further, the chain of files may in turn be part of a larger 
operation, such as requesting a web page, with some being static files and 
others dynamically generated in response to a particular request. The server 
need not care why these items are being requested, and the client need not 
care how they are produced.
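
For instance (a minimal Python illustration, with example.com standing in for any host): the client states what it wants, and whether the reply came from a static file or was generated on the fly is invisible at this level.

from urllib.request import urlopen

# the client says *what* it wants; *how* the server produces the
# reply (static file vs. generated) is invisible here
with urlopen("http://example.com/") as resp:
    body = resp.read()
    print(resp.status, len(body))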

So, each part is largely abstracted from the others.


Notably, the vast majority of this takes place without any app-specific or 
code-level interaction (RPC/DCOM/CORBA/SOAP/...), and AFAICT these technologies 
have usually hurt more than they have helped.

Often, IME, shared files and protocols seem to be a better way to move 
data.


Also maybe relevant:
- the use of open-ended, non-clashing namespaces (IP addresses, DNS host 
names, ...) and symbolic reference, rather than hard-linking (say, having to 
plug the client directly into the server with some long cable); see the 
sketch after this list;
- the use of packet switching and routing;
- that pretty much everyone interfaces with it, and that an entity improving 
its connectivity to the various networks often improves the internet as a 
whole (although I guess many companies dislike having part of their bandwidth 
used up by random internet traffic that happens to be moving between whatever 
networks they are connected to, as well as people making an issue of which 
borders and countries data may pass through to get from one place to 
another, ...);
- ...
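
The symbolic-reference point in a one-liner (my own Python sketch, with example.com as a placeholder): the name is bound to an address only at the moment of use, so nothing in the client is hard-wired to a particular machine.

import socket

# the name is resolved at the moment of use, not compiled in
print(socket.gethostbyname("example.com"))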


Of course, I guess this is more of a "how" question than a "why" question.


or such...


  - Original Message - 
  From: Alan Kay 
  To: Fundamentals of New Computing 
  Cc: PiLuD 
  Sent: Wednesday, July 14, 2010 9:39 AM
  Subject: Re: [fonc] automation


  Consider why the Internet works 

  Cheers,

  Alan
