Re: [Fis] FW: The Information Flow

2012-11-19 Thread John Collier

Dear folks,
Overall, I agree with Gordana. I have one, perhaps very large, correction
though. I will go through this bit by bit (no pun intended). I have been
planning to jump into this discussion, but a visit here (UFBA) by
Stuart Newman has kept me busy. He gave lots of examples of
non-computable processes in development, showing that developmental
units can be retained through changes in both function and genes. Gerd
Müller had told me this almost ten years ago, but Stuart has some
remarkable new cases, some of which have implications for evolution
and phylogeny. 

At 12:54 PM 2012/11/19, Bruno Marchal wrote:

Dear Joe,
On 19 Nov 2012, at 12:26, Joe wrote:

Dear Bruno, Gordana and All,
What I am resisting is any form of numerical-computational
Joe, if you have an idea of an effective procedure that is not captured
by the Turing machine / recursive function model, and you can make clear
why it isn't, then I will listen to that complaint. Turing did, in fact,
give an idea of how this might work in unpublished papers of his. He was
also the originator of reaction-diffusion processes (I would not call
them mechanisms, but they are commonly so called in the literature, since
it has become fashionable to call processes that are not mechanical in
the classical sense mechanisms). This sort of process is fundamental to
Stuart's arguments about the non-reducibility of differences in
developmental units to either genetic differences or to functional
differences. Note, in passing, the role of differences here. If
information is a difference that makes a difference, then we are talking
about types of information here. Actual systems realizing Turing's
mathematics were only created some time later, with the Brusselator (an
autocatalytic reaction scheme devised by Prigogine, but it is more) in
the material form of the Belousov–Zhabotinsky reaction (BZ reaction)
from the 1950s. Such processes are very common in nature, for example in
forming the stripes on a zebra, or regenerating the correct end of a
severed hydra.
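Turing's reaction-diffusion idea is easy to sketch numerically. The following is an illustrative one-dimensional Gray-Scott model -- a standard textbook stand-in, not the BZ chemistry itself -- with conventional parameter values; all names and numbers here are my own choices, not from the discussion:

```python
import numpy as np

# Minimal 1-D Gray-Scott reaction-diffusion sketch. Two species u and v
# diffuse at different rates and react (u + 2v -> 3v); u is fed in at rate F
# and v is removed at rate F + k. Patterns emerge from a small perturbation.
def gray_scott(n=200, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.060, dt=1.0):
    u = np.ones(n)
    v = np.zeros(n)
    # Seed a small perturbation in the middle of the domain.
    u[n // 2 - 5 : n // 2 + 5] = 0.50
    v[n // 2 - 5 : n // 2 + 5] = 0.25
    for _ in range(steps):
        # Discrete Laplacian with periodic boundaries.
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v
        u += dt * (Du * lap_u - uvv + F * (1 - u))
        v += dt * (Dv * lap_v + uvv - (F + k) * v)
    return u, v

u, v = gray_scott()
```

Note that the dynamics only make sense with the feed and removal terms present: switch off F and k and the structure decays, which is the dissipative point made below.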
I have some remarks about computations to make next, which agree in
general with Gordana's remarks, but add another element that I think is
missing.

Computationalism would be totalitarian if there were a computable
universal program for the total computable functions, and only them. But
it is easy to prove that such a total universal machine cannot exist.
This sort of machine is often called an 'oracle'. Greg Chaitin, who
originated algorithmic information theory independently of Kolmogorov,
constructed a number omega (my email programme won't take the Greek
capital letter). If we knew that number, then we could tell whether any
program halts. We could also give the first n digits of the output of
any program if we knew the first n digits of omega (assuming the same
programming language). The digits of omega, though, are provably random,
which means that there is no program shorter than n that can compute the
first n digits of the output for every program. So there are no oracles.
We can get this from the unsolvability of the halting problem, but
Chaitin's approach shows what an oracle would have to look like.
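The unsolvability of the halting problem rests on the classic diagonal argument, which a minimal Python sketch can make vivid (this is not Chaitin's construction; the names `halts` and `contrarian` are hypothetical):

```python
# Sketch of the diagonal argument behind the unsolvability of the halting
# problem. 'halts' is the hypothetical oracle; assuming it is a total
# computable function leads to a contradiction, so it cannot exist.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) would halt."""
    raise NotImplementedError("provably not computable")

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    return "halted"   # oracle says it loops, so halt immediately

# contrarian(contrarian) halts iff halts(contrarian, contrarian) is False,
# contradicting the oracle's answer either way.
```

Knowing enough initial digits of omega would let one implement `halts` for all programs up to a given size, which is exactly why those digits must be incompressible.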

The price of having a universal machine is that it will be a partial
machine, undefined on many arguments. Such machines can then be shown to
be able to defeat all reductionist or totalitarian theories about their
own behavior. That is why I say that computationalism is a vaccine
against totalitarianism or reductionism.
This is where I part company. There are two notions of computation. The
first and most widely used is that of an algorithm (specifically a Knuth
algorithm). It is equivalent, for all intents and purposes, to a program
that halts. There are also programs that don't halt, which compute
partial functions: there is no algorithm that gives a value for every
input. However, a program that could compute such a function (an oracle)
would be able to compute the first n values, for any n. It just can't
compute them all. For example, the three-body problem is known to have
no general closed-form solution. However, with a computer powerful
enough, and some epsilon of error, we could compute the later state of
the system to within epsilon for any arbitrary finite time t.
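The point about finite-time approximation can be made concrete: a fixed-step integrator bounds the error for any finite t, and shrinking the step tightens the bound. A minimal sketch, with arbitrary toy masses, positions, and units of my own choosing:

```python
import numpy as np

G = 1.0  # gravitational constant in toy units

def accelerations(pos, masses):
    """Pairwise Newtonian gravitational accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def integrate(pos, vel, masses, dt=1e-3, steps=1000):
    # Velocity-Verlet integration: for a fixed final time t = steps * dt,
    # halving dt shrinks the global error, giving "within epsilon" accuracy.
    acc = accelerations(pos, masses)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * acc * dt ** 2
        new_acc = accelerations(pos, masses)
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return pos, vel

masses = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
vel0 = np.zeros((3, 2))
pos, vel = integrate(pos0, vel0, masses)
```

Since the pairwise forces are equal and opposite, total momentum is conserved to rounding error, which is a quick sanity check on any such integrator.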
What we can't do is solve reaction-diffusion systems. Why is that? It is
because dissipation is an essential part of their dynamics -- they need
an energy input to work (this is true of all dissipative systems in the
Prigogine sense -- I say the Prigogine sense since there are systems
that self-organize through dissipation that are not dissipative systems:
they are only self-reorganizing, not spontaneously self-organizing --
some people confuse the two, or are careless about distinguishing them,
such as Stuart Kauffman). In any case, to reach a steady state in finite
time (e.g., the BZ reaction oscillations) they do the noncomputable in
finite time. Harmonics in the Solar System, such as the 1-1

Re: [Fis] FW: The Information Flow

2012-11-19 Thread Robert Ulanowicz
Quoting John Collier

As I have tried to argue above, to avoid reductionism in reality, as
opposed to in logic and mathematics, I think we need the additional
condition of dissipation (what I call non-Hamiltonian mechanics
elsewhere -- the usual condition of conservation breaks down due to the
loss of free energy from the system).


Your point underscores my earlier one. Dissipation is emblematic of  
entropic processes -- which make ours an open world. There's no  
wishing that away!

