On 2010-03-04, at 18:12, Alejandro Garcia wrote:

Now given those rules:
If in system A I set one of the nodes to TRUE, I don't know the state of the whole system.
This system is harder to know: it has 16 possible states (2^4).


If in system B I set the bottom node to TRUE, then it turns out all the nodes have a value of TRUE.
If I set that node to FALSE, then all the nodes have a value of FALSE,
so in essence it has 2 possible states.

And that is it. It is easier to know what the whole of system B is going to be at any given time
than it is to know system A. Therefore simpler.



Correct.
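To make this concrete, here is a minimal Common Lisp sketch. (The propagation rules are my assumptions, since I don't have your figures in front of me: I model system A as four independent nodes, and system B as four nodes that all copy the bottom node.)

    ;; Enumerate the possible states of two 4-node boolean systems.
    ;; ASSUMED rules: in A the nodes are independent; in B every
    ;; node copies the value of the bottom node.

    (defun states-a ()
      ;; Every combination of four booleans is a valid state: 2^4 = 16.
      (loop for i from 0 below 16
            collect (loop for bit from 0 below 4
                          collect (logbitp bit i))))

    (defun states-b ()
      ;; All nodes equal the bottom node, so only 2 states exist.
      '((t t t t) (nil nil nil nil)))

    (length (states-a))  ; => 16
    (length (states-b))  ; => 2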

However, this depends on the "processor".

We haven't identified an "absolute" complexity.

The complexity of a system is a function of that system (the program) and the underlying system (the processor).

When we give the asymptotic complexities of algorithms (time or space), we give them in units relative to the underlying processor. For example, we count the number of comparisons, or the number of additions, assuming a processor in which these operations are taken as the basis (in the vector space sense).
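For instance, here is a sketch that measures an insertion sort in units of comparisons (the choice of algorithm is just an arbitrary example of mine):

    ;; Count the comparisons performed by an insertion sort, taking
    ;; the comparison as the base operation of the "processor".
    (defun insertion-sort (vector)
      (let ((comparisons 0)
            (v (copy-seq vector)))
        (loop for i from 1 below (length v)
              do (let ((key (aref v i))
                       (j (1- i)))
                   (loop while (and (>= j 0)
                                    (progn (incf comparisons)
                                           (> (aref v j) key)))
                         do (setf (aref v (1+ j)) (aref v j))
                            (decf j))
                   (setf (aref v (1+ j)) key)))
        (values v comparisons)))

    (insertion-sort #(5 2 4 6 1 3))
    ;; => #(1 2 3 4 5 6), 12   ; cost in comparisons; O(n^2) worst case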

Another example is when you compile a program for two processors, one "complex" and one "simple" (CISC vs. RISC). The binary generated for the CISC processor will be simpler than the binary generated for the RISC processor. However, the RISC processor is simpler than the CISC processor, in terms of their own underlying processor: the silicon circuits.
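Here is a toy version of that tradeoff (the instruction sets are invented for illustration, not real CISC or RISC encodings): the same computation, a*x + b, run on a richer machine and on a poorer one.

    ;; Registers are a plist; programs are lists of
    ;; (OPCODE DEST SRC1 SRC2 [SRC3]) instructions.
    (defun run (program regs opcodes)
      (dolist (insn program (getf regs 'r0))
        (destructuring-bind (op dst s1 s2 &optional s3) insn
          (setf (getf regs dst)
                (apply (getf opcodes op)
                       (getf regs s1) (getf regs s2)
                       (when s3 (list (getf regs s3))))))))

    ;; "CISC": a richer processor (more opcodes) runs a shorter program.
    (defparameter *cisc-opcodes*
      (list 'add #'+ 'mul #'* 'muladd (lambda (a x b) (+ (* a x) b))))
    (run '((muladd r0 a x b)) (list 'a 2 'x 10 'b 3) *cisc-opcodes*)
    ;; => 23, with one instruction

    ;; "RISC": a simpler processor (fewer opcodes) runs a longer program.
    (defparameter *risc-opcodes* (list 'add #'+ 'mul #'*))
    (run '((mul r0 a x) (add r0 r0 b)) (list 'a 2 'x 10 'b 3) *risc-opcodes*)
    ;; => 23, with two instructions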

At the other end, when a customer gives you a very simple specification: "make a program to compute the pay of each employee of my company", he relies on a very complex processor: the human brain of the analyst (or even the brains of a whole software development organization). The work of the programmers consists of expanding the specification by rendering the complexity of the problem explicit, so that eventually a processor simple enough to be implemented in silicon is able to work on it.


So the problem is that when you give your two systems, A and B, you don't identify which processor you're considering.

If you consider as processor a machine that must determine the state of an instance of the automaton described by graph A or B, then indeed, system A + that processor is more complex, because system A has more states.

But if you consider as processor a machine that must reproduce the graphs A or B, then graph B, which contains more information, is obviously more complex to reproduce, and will require a more complex processor.



Once you consider the systems composed of a "program" along with a "processor", perhaps we may now elaborate an absolute complexity measure for them. (I would use Kolmogorov complexity, or, as an approximation, take the compressed size of the descriptions of both parts together.)
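A crude sketch of that approximation (Kolmogorov complexity itself is uncomputable, and here a naive run-length encoder stands in for a real compressor):

    ;; Print the descriptions of the parts together, then measure the
    ;; length of a naive run-length encoding as a stand-in for K(x).
    (defun rle-length (string)
      (let ((runs 0))
        (loop for i from 0 below (length string)
              when (or (zerop i)
                       (char/= (char string i) (char string (1- i))))
                do (incf runs))
        (* 2 runs)))   ; each run costs one character + one count

    (defun description-complexity (&rest parts)
      (rle-length (format nil "~{~S ~}" parts)))

    ;; E.g., the explicit state tables of the two systems above:
    (description-complexity (states-a))  ; larger
    (description-complexity (states-b))  ; smaller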




Now, an interesting fact is that the Lisp processor is itself described in one page of Lisp code, so that it doesn't add any essential complexity to any Lisp program (well, that's true of Scheme and LISP; Common Lisp is slightly more complex).
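For reference, a condensed sketch of that one-page evaluator (McCarthy style, minimal: quote, if, lambda, and application; no closures or error checking):

    (defun lisp-eval (expr env)
      (cond ((member expr '(t nil)) expr)
            ((symbolp expr) (cdr (assoc expr env)))
            ((atom expr) expr)                    ; self-evaluating
            (t (case (first expr)
                 (quote  (second expr))
                 (if     (if (lisp-eval (second expr) env)
                             (lisp-eval (third expr) env)
                             (lisp-eval (fourth expr) env)))
                 (lambda expr)                    ; lambdas denote themselves
                 (otherwise
                  (lisp-apply (lisp-eval (first expr) env)
                              (mapcar (lambda (a) (lisp-eval a env))
                                      (rest expr))
                              env))))))

    (defun lisp-apply (fn args env)
      (if (functionp fn)                          ; host primitive
          (apply fn args)
          ;; fn = (lambda (params...) body)
          (lisp-eval (third fn) (pairlis (second fn) args env))))

    (lisp-eval '((lambda (x) (if x (* x x) 0)) 7)
               (list (cons '* #'*)))
    ;; => 49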

To compare hw.c with hw.lisp, you would have to add the sources of a C interpreter, or a C compiler along with the sources of a virtual machine for its target, etc.


There's also a caveat: if you consider a very powerful processor (a human brain), you can either over-complexify or under-complexify a "program" (a specification). But a good human processor will be able to extract or expand the complexity of the specification, producing a program-plus-processor system closer to the real complexity of the problem (if only for economic reasons).


--
__Pascal Bourguignon__
http://www.informatimago.com/





_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
