RE: Algorithmic Revolution?

2002-11-29 Thread Marchal Bruno
Colin Hales wrote

 ...
Not really TOE stuff, so I’ll desist for now. I remain ever hopeful that one
day I’ll be able to understand Bruno…. :-)


Ah! Thanks for that optimistic proposition :-)
Let us forget the AUDA, which indeed requires some familiarity with
mathematical logic.
But the UDA? It would help me to understand at which point you have
a problem. For example, I understand where Hal Finney stops, although
I still do not understand why. I have a pretty clear idea of where and
why Stephen King disagrees. This can help me to improve the
presentation. You could also help yourself through the formulation of
precise questions. Perhaps you did and I missed it? (*)

Bruno

(*) My computer crashed badly some weeks ago and I use the university
mailing system, which is not so stable. Apologies for funny spellings,
RE:RE: additions in replies, lack of signature, etc.




Re: Algorithmic Revolution?

2002-11-28 Thread Russell Standish
Colin Hales wrote:
 Here is another possible confusion: ‘emergence’ as a descriptive artefact vs
 ‘emergence’ as real layered behaviour in a real system. The wording
 initially looks as if you think emergence is not real. The emergence is real
 (whatever we consider real is!). Example: There are at least 6 fundamental
 layers of emergence from quantum froth to mind. The agreed view appears to
 be that any formal mathematics of each layer stops at each layer whereas an
 algorithmic approach generates/spans the layers, which are delineated by an
 appropriately sensitised observer. Both styles of description seem
 appropriate and able to coexist provided their character is understood.

Remember my insistence on the models being good (by whatever
criterion good is). Emergence in a poor model is most definitely not
real. But we expect good models to be capturing something about
reality, so emergence in these models is in some sense a real
phenomenon. 

However, all I really insist upon is that two observers discussing a
phenomenon agree that a particular model (or language as it were) is
good. Then they can agree upon the emergence in that case. For the
billions of people who believe in God, presumably God is equally a
real emergent phenomenon. 

 
 Not really TOE stuff, so I’ll desist for now. I remain ever hopeful that one
 day I’ll be able to understand Bruno…. :-)
 

Same here!

Cheers



A/Prof Russell Standish                   Director
High Performance Computing Support Unit,  Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                          Fax 9385 6965, 0425 253119
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





RE: Algorithmic Revolution?

2002-11-27 Thread Colin Hales
Russell Standish wrote:
 Colin Hales wrote:
 Hi Folks,
 I have chewed this thread with great interest.

 Our main gripe is the issue of emergent behaviour and the mathematical
 treatment thereof? Yes? This is the area in which Wolfram claims to have
 made progress. (I am still wading my way through his tome).

***Isn’t the 'algorithmic revolution' really a final acceptance that there
are behaviours in numbers that are simply inaccessible to closed form
mathematical formulae? - That closed-form mathematics cannot traverse the
complete landscape of the solution space in all contexts?


If this were the case, the 'algorithmic revolution' is at least 200 years
old, as people have known at least this long that most integrals cannot be
written in closed form.

Of course, from a practical point of view, it was so expensive to solve
mathematical problems numerically that hardly anyone bothered until the
advent of the electronic computer. Since then, of course, computational
science has taken off like a rocket, and keeps the likes of me employed.
But this is hardly new news, or philosophically interesting.
   Cheers

We have a small confusion here (probably caused by my own choice of words).
I’m not talking about the numerical solution to a given mathematical
formula. Agreed: Mundane++. Memories of laboriously iterating on my
calculator come to mind! :-)
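(For concreteness, a minimal sketch of that mundane case, in Python: numerically
approximating an integral that has no elementary closed form, such as the
integral of exp(-x^2). The integrand, interval, and step count are arbitrary
choices for the example.)

import math

def trapezoid(f, a, b, n=1000):
    # Approximate the integral of f over [a, b] with the trapezoid rule.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# exp(-x^2) has no elementary antiderivative, but is easy to integrate numerically.
approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0)
print(approx)                                   # about 0.7468
print(math.sqrt(math.pi) / 2 * math.erf(1.0))   # cross-check via the erf special function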

What I’m saying (as you seem to agree) is that for some aspects of the
universe (including those aspects we identify as emergent phenomena) there
are no formulae possible in the first place. I think I find the answer to my
original question in the last 2 paragraphs of section 2 of your paper,
http://life.csu.edu.au/ci/vol09/standi09/, as well as in the paragraph of your
‘emergence’ thread where I think you have ‘nailed it’:

“As to mathematics predicting emergent phenomena, I believe that the
answer is categorically no. Emergent phenomena are a result of a modelling
process - eg what a brain does, not an analytic process. Mathematics can be
used to describe the emergent phenomenon after it is discovered, but I don't
think the discovery process can really be called mathematics.”

Here is another possible confusion: ‘emergence’ as a descriptive artefact vs
‘emergence’ as real layered behaviour in a real system. The wording
initially looks as if you think emergence is not real. The emergence is real
(whatever we consider real is!). Example: There are at least 6 fundamental
layers of emergence from quantum froth to mind. The agreed view appears to
be that any formal mathematics of each layer stops at each layer whereas an
algorithmic approach generates/spans the layers, which are delineated by an
appropriately sensitised observer. Both styles of description seem
appropriate and able to coexist provided their character is understood.

I think we do have a ‘revolution’, and it is one that will force us to use a
Wolframesque rule-based numerical method, combining description and
prediction, to deal with emergence whether we like it or not, because it’s the
only technique that can traverse the layers, and there are a multitude of
problems where we need to do exactly that. It seems more than a passing fad.
I can foresee industry based on the derivation of elaborate CAs to fulfil
useful requirements that cannot be achieved otherwise. I can see scientific
method, and Cog. Sci. in particular, undergoing a transformation of a sort as
a result.

Not really TOE stuff, so I’ll desist for now. I remain ever hopeful that one
day I’ll be able to understand Bruno…. :-)

Cheers,

Colin Hales





RE: Algorithmic Revolution?

2002-11-24 Thread Colin Hales
Hi Folks,

I have chewed this thread with great interest.

Our main gripe is the issue of emergent behaviour and the mathematical
treatment thereof? Yes? This is the area in which Wolfram claims to have
made progress. (I am still wading my way through his tome).

***Isn’t the 'algorithmic revolution' really a final acceptance that there
are behaviours in numbers that are simply inaccessible to closed form
mathematical formulae?  - That closed-form mathematics cannot traverse the
complete landscape of the solution space in all contexts?

My own approach has been to regard emergence as the repositioning of the
observer of a system such that the mathematical descriptions you have been
using fall over/cease to be relevant. The idea that the math can seamlessly
transcend an observer’s scope is, I concluded, simply meaningless as the
math is defined by the observer’s scope. The prejudices of our position as
observers are therefore automatically destined to be embedded in our
descriptors of things.

If this is the case then one cannot overlook the use of computers or the AIT
approach when one needs to study, understand and replicate real-world phenomena
(in particular, MIND) that transcend the boundaries of emergence.

Will the historians look back on our obsession with closed form math and see
it as the machinations of mathematical youth? Para *** above is the clincher
and I have been unable to distil a definitive stance from all the writings.
Clues anyone?

regards,

Colin Hales
* somewhat perplexed *





Re: Algorithmic Revolution?

2002-11-24 Thread Tim May
My caveat before commenting: I'm an opinionated person, but I really 
don't have any particular theory of everything to share with you. No 
dreams theory, no soap bubble theory, no 18-dimensional cellular 
automaton theory. I'm currently doing a lot of reading in logic, topos 
theory, quantum mechanics, and math in general. I'm more interested 
right now in really grasping why the Kochen-Specker theorem says what it says, 
for example. Some of you are way ahead of me. Anyway, these are some of 
my caveats before plunging in.

On Sunday, November 24, 2002, at 04:26  AM, Colin Hales wrote:

Hi Folks,

I have chewed this thread with great interest.

Our main gripe is the issue of emergent behaviour and the mathematical
treatment thereof? Yes? This is the area in which Wolfram claims to have
made progress. (I am still wading my way through his tome).

***Isn’t the 'algorithmic revolution' really a final acceptance that there
are behaviours in numbers that are simply inaccessible to closed form
mathematical formulae? - That closed-form mathematics cannot traverse the
complete landscape of the solution space in all contexts?

And I believe Charles Bennett said all this more clearly and 
convincingly with his logical depth arguments. (See, for example, his 
long essay in an excellent book called Fifty Years of the Turing 
Machine.)

The argument goes like this: suppose one is walking on a beach and 
finds a gold watch. The watch has many moving parts, many non-natural 
things like the crystal, the case, and much internal structure. (I'm 
deliberately mixing in parts of Dawkins' Blind Watchmaker, as the 
arguments are closely related.)

The watch shows evidence that a lot of processes have run for a very 
long time. Not computer processes, but processes of manufacturing the 
components, of fitting them together, of learning what doesn't work and 
what does work, and of a serious industrial infrastructure.

Or consider an e. coli organism. Something like 4 GB of genetic 
material, measured in bits and bytes (if I recall this correctly...it 
doesn't affect the argument if I am off by some factor).

The complexity of the e. coli genome is, by some measures, just this 
4 GB. But nearly all 4 GB strings (which is a very, very large number!) 
produce dead organisms. In the space of 4 GB strings, some relatively 
small patch of them produce functioning, reproducing organisms like 
e. coli.

Both the watch and the e. coli appear to have been the result of a lot 
of shuffling and processing of the apparent number of bits.

Charles Bennett calls this logical depth. This is closely related to 
algorithmic information theory, where the shortest description of a 
string (or other object) is essentially the program or process which 
produces the object. Bennett has placed more emphasis on the depth of 
a series of iterated processes, but the idea is basically the same. 
(And there may have been good syntheses of the ideas in recent years...)
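(A toy sketch of the AIT intuition, in Python: Kolmogorov complexity itself is
uncomputable, so off-the-shelf compression is used below as a crude stand-in
for "shortest description". A highly regular string compresses to a short
description, while a random string of the same length does not. This only
illustrates description length, not Bennett's depth, which concerns the
running time of the short program rather than its size.)

import random
import zlib

def description_length(s: bytes) -> int:
    # Compressed size in bytes: a computable upper bound on the length of
    # a description of s (a crude stand-in for Kolmogorov complexity).
    return len(zlib.compress(s, 9))

n = 100000
regular = b"ab" * (n // 2)                                      # highly structured string
random_bytes = bytes(random.getrandbits(8) for _ in range(n))   # incompressible junk

print(description_length(regular))       # a few hundred bytes
print(description_length(random_bytes))  # close to (slightly above) the full 100,000 bytes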

Another way to look at this, metaphorically, is in terms of compression 
of a spring. The evolutionary pressures and differential reproduction 
rates with e. coli, or with watches!, take a spring and put more 
energy into it...the energy to do things later. Even a fixed-length 
string, like 4 GB in e. coli, can be seen as being compressed in this 
sense. More and more logical depth is compressed into a string of fixed 
length. (Imagine a program in a competing robot, perhaps in one of 
those Battlebots arena shows, where the program is perhaps, by 
coincidence, limited to about 4 GB of Pentium 4 main memory. The 
program gets shuffled and changed, via either genetic algorithms (GA) 
or genetic programming (GP) or whatever. The same 4 GB program space 
(string, seen abstractly) gets more and more capable.)
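(A toy sketch of that shuffling, in Python: a fixed-length bit string evolved
by mutation and selection against an arbitrary scoring function. This is only
a minimal genetic-algorithm illustration of "more capability packed into the
same number of bits"; the string length, mutation rate, and fitness function
are invented for the example and have nothing to do with Bennett's or
Wolfram's actual constructions.)

import random

LENGTH = 64          # fixed "program" size in bits (invented for illustration)
POP = 50
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(bits):
    # Arbitrary stand-in for "capability": count of 1 bits (the OneMax problem).
    return sum(bits)

def mutate(bits):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for gen in range(GENERATIONS):
    # Keep the better half, then refill by mutating copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

print(fitness(population[0]), "of", LENGTH)   # approaches 64: same-sized string, more "capable"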

A cellular automaton can also have high logical depth. In fact, back 
when I read (some of ) Wolfram's earlier book on CAs, Cellular 
Automata and Complexity, I believe it is called, this is the viewpoint 
I was reading it from.

But the fact that cellular automata exhibit this kind of logical depth 
does not mean that gold watches and e. coli are proof that the universe 
is a cellular automaton!

Q.E.D.

Now, pace Zuse, Fredkin, Lloyd, and all the others, it may in fact be 
the case that a Theory of Everything somehow involves CA-like 
computations or interactions at perhaps the Planck scale.

But nothing in Wolfram's recent book is at all convincing to me that he 
has shown this in any meaningful sense. The phenomena he has been 
experimenting with are at least 25-30 orders of magnitude away from the 
Planck scale. Believing that snowflakes, crystals, and sea shells accrete 
material in CA-like ways, something I think physicists and biologists have 
been convinced of for a long time, does not mean the universe is in any 
meaningful sense itself a cellular automaton.

(And, pace Gleason's Theorem and the aforementioned Kochen-Specker 
Theorem, and the work of Bell, I am suspicious for other 

Re: Algorithmic Revolution?

2002-11-24 Thread Hal Finney
One more point with regard to Wolfram and our list's theme.  I think
that implicit in his conception of the underlying rules of the universe
is the assumption of some kind of all-universe model.  The reason is that
he does not expect our universe's program to be particularly special
or unique.  He thinks it will be relatively short, but given the kind
of computational structures that might be the basis for the universe,
the program is expected to be basically random.

Every computational system has a certain percentage of its programs that
can produce interesting-looking structure, and the expectation is that
the same thing will hold for whatever computational model turns out to
most simply express the universe's program.  Whatever our program is, it
will turn out to be one of the ones that produces structure.  But these
are almost certainly a tiny percentage of the whole, based on Wolfram's
results.  And the specifics of which ones will produce structure are
very difficult to define.  Making a tiny change in a program will often
produce a completely different output, at least with the computational
systems Wolfram investigates.  The ones which produce structure are
scattered very randomly throughout the space of all possible programs.

For example, among the nearest-neighbor, 2-state, 1-dimensional CAs that
Wolfram uses as his simple exemplars, there are 256 possible different
programs, which Wolfram numbers from 0 to 255.  As it turns out, only one
of them, program number 110, produces a certain kind of complex structure
that has particle-like behavior with very complex kinds of interactions.
Of all of them, this is the one which at least superficially would be
the most likely to be able to evolve life (although it's almost certainly
too simple to actually do so).
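(For readers who want to see this concretely, a minimal Python sketch of these
nearest-neighbor, 2-state, 1-dimensional CAs: it runs any of the 256 rules from
a single black cell and prints the space-time diagram. Try rule 110 against,
say, rule 250 to see the contrast between complex structure and a simple
regular pattern. The width, number of steps, and printing characters are
arbitrary choices for the example.)

def run_elementary_ca(rule, width=79, steps=40):
    # rule: integer 0..255 in Wolfram's numbering scheme. Bit k of the rule
    # number gives the new cell value for the neighborhood whose
    # (left, center, right) bits encode the value k.
    table = [(rule >> k) & 1 for k in range(8)]
    row = [0] * width
    row[width // 2] = 1            # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        # periodic (wrap-around) boundary conditions
        row = [table[(row[(i - 1) % width] << 2) |
                     (row[i] << 1) |
                     row[(i + 1) % width]]
               for i in range(width)]

run_elementary_ca(110)             # complex, particle-like structure
# run_elementary_ca(250)           # simple regular pattern, for comparison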

But let's suppose that we learned that we lived in a universe based on
rule 110.  Why would that be?  Why, of all the 256 possible programs,
would only rule 110 exist?  Well, there are two plausible answers.
One is that someone selected rule 110, but that raises all kinds
of insurmountable problems of its own.  The other is that all of the
programs were instantiated, and then of course we turned out to live in
rule 110 because it was the only one that could sustain life.

So Wolfram's approach leads very naturally to the assumption that all
possible programs are being run.  If it does turn out that our universe's
program is relatively simple but still densely packed and random among
a vast space of comparable programs, most of which would produce sterile
universes, then I think we would be forced to seriously consider that
all of the other programs were being run, too.

My one concern is that if Wolfram is right and our universe is a random
program from some set, and if there are much more than on the order
of 100 bits in the program, we will never be able to find the right
program.  If the nature of the program space is similar to what Wolfram's
explorations suggest, that most of the space is unstructured and there
is no way to identify the likely fruitful programs, there will be no
way for us to know if we are on the right track or not.  We won't have
the hope of finding a program that almost works and then successively
refining it to get closer and closer, because the true program will be
completely different from an almost-true program.

The situation will be something like the search for a cryptographic key,
where you can't really hope to get closer and closer until you get it.
You're stabbing totally in the dark until you fall upon the right one.
And if the search space is too large, you will never find the answer.

Hal Finney




Re: Algorithmic Revolution?

2002-11-24 Thread Russell Standish
Hal Finney wrote:
 
 My one concern is that if Wolfram is right and our universe is a random
 program from some set, and if there are much more than on the order
 of 100 bits in the program, we will never be able to find the right
 program.  If the nature of the program space is similar to what Wolfram's
 explorations suggest, that most of the space is unstructured and there
 is no way to identify the likely fruitful programs, there will be no
 way for us to know if we are on the right track or not.  We won't have
 the hope of finding a program that almost works and then successively
 refining it to get closer and closer, because the true program will be
 completely different from an almost-true program.
 
 The situation will be something like the search for a cryptographic key,
 where you can't really hope to get closer and closer until you get it.
 You're stabbing totally in the dark until you fall upon the right one.
 And if the search space is too large, you will never find the answer.
 
 Hal Finney
 

This is exactly the problem I have with the idea that we live within
one particular computational history, for all time, but can never know
which.

There will be constraints on the set of possible histories that can be
lived in, constraints which can be uncovered and linked to a theory of
conscious observation. All else is pure contingency. So other ideas
proposed on this list - that we can be identified with a whole sheaf
of computations that are continually diverging - seem far more appropriate.


A/Prof Russell Standish                   Director
High Performance Computing Support Unit,  Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                          Fax 9385 6965, 0425 253119
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Re: Algorithmic Revolution?

2002-11-24 Thread Hal Finney
I don't think I received the first of my two messages written today on
Wolfram, but it made it to the archive.  In case anyone missed it I'll
just point to it rather than re-sending.  It's available at
http://www.escribe.com/science/theory/m4156.html.

Hal Finney




Re: Algorithmic Revolution?

2002-11-22 Thread jamikes
George, beautiful.
Maybe I propose a line at the end:
With a LOT of ego attached

Best wishes
John Mikes

- Original Message - 
From: George Levy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, November 22, 2002 12:55 AM
Subject: Re: Algorithmic Revolution?


 When you look at the bottom of the well,
 all the way
 deep down,
 you see yourself staring right back at you.
 
 And right now you look like an algorithm.
 Oh well, there was a time when you looked like clockwork
 Maybe tomorrow you'll be a brain.
 And the day after tomorrow maybe a quantum device.
 
 The universe that we perceive is in our own image.
 It can only be in our own image.
 Bruno is right, the foundation of physics may well be psychology.
 It could even be neurology.
 
 George





Re: Algorithmic Revolution?

2002-11-21 Thread vznuri
RS wrote that, on one level, the algorithmic revolution
was epistemological. I objected to this partly. let me
quote the dictionary definition of epistemology:

epistemology -- the branch of philosophy that deals with
the nature and theory of knowledge.

now in Newton's time, science was seen as a branch of philosophy.
however in modern times, philosophy has become somewhat disconnected
from science and followed its own course. so to me, to label a genuine
scientific paradigm shift epistemological seems to downplay its
significance somewhat, as a little too abstract. the scientific revolution
is not merely about a different way of seeing the universe, but a different
way of interacting with it (experimental method, etc.).

this is exactly
the way in which I insist the algorithmic revolution be interpreted
as I outlined: not merely a shift in the way 
we view the world. (unfortunately 
paradigm shift terminology sometimes implies a merely conceptual,
subjective shift in view, partly due to Kuhn's perspective, but a
paradigm shift means much more than a mere psychological rearrangement.)


next, RS defines the clockwork metaphor in terms of the Newtonian
revolution. this is very reasonable and there is a high correlation.
however I would argue the clockwork paradigm is ongoing. the
clockwork universe involved multiple new ways of seeing the
world. one of them, indeed, was Newtonian mathematical laws
for physics, gravitation, etcetera. another was determinism,
a la the famous Laplacian quote re: atoms as billiard balls. 

however another was simply, universe as mechanistic. the clock is a 
machine. the clock metaphor proposes the universe runs like a kind
of automated machine subject to mathematical/physical laws. 

lets be very careful to define clockwork universe metaphor in terms
of the accurate history of its origination, not from our modern point
of view. note that in the Middle Ages, prior to
the Newtonian revolution, the previous paradigm for the concept
of force was something sometimes involving supernatural aspects.
the world was presumed to be set in motion by God and influenced
by various spirits, entities, etcetera in ways not fully conceivable.
this is what the clockwork metaphor replaced.

the universe as mechanistic theme from the clockwork metaphor
persists to this day. Einstein's relativistic theory involved the
consideration of clocks in moving frames. 
when physicists analyze particle dynamics,
or even search for a TOE as we are here, I would say the clockwork
metaphor is still alive. its still ticking, so to speak.. wink

again, let me contrast the algorithmic metaphor for the universe
with the clockwork one. even in Newton's time, the idea was
that the universe ran **like** a clock. it was a metaphor. but
the Zuse-Fredkin-Wolfram idea of the universe is that the
universe evolves not merely **as** a computation, but that it
**is** a computation. 

therefore, imho the algorithmic metaphor
is actually more than a metaphor, more than the clockwork model
was a metaphor.  its not merely a paradigm shift I would say, its
something more. its a new model, a new system, a new framework. 
its comparable to Newton's discovery
of the law of gravitation if the program can be successfully
carried out.

is the algorithmic idea incorrect? someday we will probably
notice that it has its deficiencies just as the clockwork idea
did, but we will not discard it entirely, just as we have
not discarded the clockwork universe idea.

so imho to say the clockwork metaphor for reality is wrong,
is (uh) wrong. imho its a simplistic/facile rejection of a 
still-legitimate paradigm.




Re: Algorithmic Revolution?

2002-11-20 Thread Russell Standish
There seems to be more heat than light in this discussion. There's
several things going on here:

1. A social revolution in the use of information technology (mobile
phones, internet and all that). This is beyond dispute, I believe, and
I didn't see anyone on this list disputing that.

2. A scientific revolution - use of computer simulations as a 3rd way
from theory and experiment. This is using computing technology, as
opposed to information technology, and is about 2 decades older than
the social revolution. Again, there is little dispute in this.

3. An epistemological revolution - a paradigm shift that sees reality
cast in terms of a computational or algorithmic metaphor. This is how
Tim May interpreted algorithmic revolution, as did I.

4. A mathematical revolution - algorithmic information theory has been
explosive since it was founded in the mid-60s. It has profound
consequences to the roots of mathematics. 

No. 3 is the type of thesis promoted by Wolfram, and goes back at
least to Konrad Zuse in the 1960s. It is worth treating with a
considerable grain of salt - it is a paradigm, as potentially wrong
as the clockwork model in the late 19th century.

No. 4 - I think the jury is still out. Practical applications of AIT
are still rather meagre compared with more traditional areas of
mathematics. But I do think it is a fascinating area of study.

Cheers


A/Prof Russell Standish                   Director
High Performance Computing Support Unit,  Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                          Fax 9385 6965, 0425 253119
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Re: Algorithmic Revolution?

2002-11-20 Thread Russell Standish
[EMAIL PROTECTED] wrote:
 
 
 RS reformulates/reduces the term algorithmic revolution as:
 
 1. A social revolution..
 2. A scientific revolution..
 3. An epistemological revolution..
 4. A mathematical revolution..
 
 all true. however, wolfram-fredkin-zuse et al are not merely proposing a 
 mere epistemological revolution as you state with (3). they're
 saying, the next state of the universe _really_is_ a 
 computation, that we really are (and all reality is)
 built out of cells in a very large 3D or 4D cellular automaton. its
 not merely a metaphor. in this sense it probably cannot be seen on
 the same level as the clockwork mechanism for the universe, or
 the universe-as-energy from the thermodynamic/industrial/steam 
 engine perspective.

Where did the 'mere' come from? Epistemological as in a revolution of
our understanding of the world.

Zoom back 150 years, and you will find that people believed that the
universe really did follow Newton's equations of motion exactly, and
that all you needed to know everything about the universe at all times
was the positions and velocities of all constituent particles _at one
moment in time_. This is what is described by the clockwork metaphor.

The Wolfram-Fredkin-Zuse thesis that the universe is a Turing machine
is described metaphorically as a computer universe - just as real
computers are only metaphors for Turing machines.

I'm afraid I don't appreciate the difference here. The clockwork
universe was shown to be wrong with Quantum Mechanics. My gut feeling
is that the computer universe will also be shown to be wrong.

 
 this is a physical hypothesis about the universe. so far it
 is not yet testable or falsifiable. but I would argue there
 is very good circumstantial evidence.
 
 RS says (3) is potentially as wrong as the clockwork model
 of the universe. but, I would argue the clockwork model
 is not really wrong, only that it was a steppingstone that
 is now obsolete or incomplete relative to new data. it was an
 outstanding metaphor for reality  is arguably still a very strong
 element of all modern scientific thought.
 
 with 4, RS says this refers to algorithmic information theory
 and the jury is still out on it.
 technically this is the name for the field that is 
 involved with compressibility, i.e. chaitin-kolmogorov ideas
 (is this what RS meant?). which
 is mostly seen as a specialized subfield of computational 
 complexity theory. this is a strange reduction from my point of
 view  is definitely not the mathematical revolution associated
 with the algorithmic revolution I referred to earlier.
 
 

Yes - the usual name for it is algorithmic information theory, and
Greg Chaitin was probably the most prolific contributor. Ming Li has
demonstrated some very interesting applications for the theory in
solving mathematical problems otherwise unsolvable. I wouldn't be
surprised if AIT turns out to be as important as differential
equations or Hilbert space theory, for example. As I said though, it
is still too early to say.

Cheers


A/Prof Russell Standish                   Director
High Performance Computing Support Unit,  Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                          Fax 9385 6965, 0425 253119
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Re: Algorithmic Revolution?

2002-11-20 Thread H J Ruhl
At 11/21/02, you wrote:


The clockwork
universe was shown to be wrong with Quantum Mechanics. My gut feeling
is that the computer universe will also be shown to be wrong.


In my view there are two types of universes.  Type 1 universes have internal
rules of state succession that are like computers - UDs and the like could
generate them.  Type 2 universes have rules with a degree of "do not care" in
determining the valid next state, and this "do not care" component is a
channel to an external random oracle.  However, I attempt to show in my
approach that Type 1 universes are also subject to the external random oracle,
but the channel differs.

This means that all universes have open logic systems and can have 
exploding cows and other white rabbit events.

I believe ours to be a Type 2 universe with a rule set that allows few 
rabbit events of a macro nature and many of a micro nature.

I suspect that in some aspects we agree.  I also think that Bruno's UDA, to
the extent I think I understand it, is a candidate for one of a Type 1
universe's channels to the external random oracle.  I believe Juergen's
work to be a candidate for the rule set of some Type 1 universes.

However, I see no bias towards one type or the other in terms of quantity
or in terms of any other system-wide measure.

Hal   




Re: Algorithmic Revolution?

2002-11-19 Thread vznuri
hi all. re the term algorithmic revolution here are a few
more ideas along this thread Id like to point out.

TCM wrote
My belief is that basic mathematics is much more important than 
computer use, in terms of understanding the cosmos and the nature of 
reality.

ok, fair disclosure: I have a BS in software engineering, have been writing
code since age ~10, and it affects my worldview bigtime. or, one could
say, I really know how to pick a winning horse, haha.. seriously,
I recognized and planned my life around the algorithmic revolution
from a young age.

at an early point I realized that software is like 
animated mathematics.

this is a very, very deep and cosmic way of looking at algorithmics. it
captures some of the revolutionary flavor. we can suggest that
mathematics has previously attempted to grasp the concept of
change, via calculus, differential eqns etc. 

but something is fundamentally new about simulation. it captures
worlds that cannot be expressed via mathematical generalities. there
are no equations we can write down that describe the outcome of,
say, a climate simulation-- its all locally defined and then globally
simulated, and the outcome is emergent.  what are the differential
equations that describe the game of life?

imho algorithmics captures the extraordinary, currently very poorly
understood property of emergence. just as
in the game of life there are thousands of glider types, none of
which one would expect/anticipate from the simple rules.
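(since the game of life keeps coming up as the standard example, here is a
minimal sketch of its rules in Python, stepping a single glider on a small
wrap-around grid. the grid size and number of steps are arbitrary; the point
is only that the glider's diagonal motion is stated nowhere in the rules -- it
emerges from them.)

def step(live, width, height):
    # One Game of Life update on a toroidal grid: a cell is alive next step
    # if it has 3 live neighbours, or 2 live neighbours and is already alive.
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = ((x + dx) % width, (y + dy) % height)
                    counts[key] = counts.get(key, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def show(live, width, height):
    for y in range(height):
        print("".join("O" if (x, y) in live else "." for x in range(width)))
    print()

# A single glider; under the purely local rules it travels diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
W, H = 12, 12
for _ in range(8):
    show(glider, W, H)
    glider = step(glider, W, H)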

we can argue that algorithmics is a fundamentally new way to
look at mathematics. and one could argue, all mathematics up until
now has been transformed. at this point, it seems much more correct
to classify mathematics as a subbranch of algorithmics than vice
versa. I believe much mathematics of the future will be taught
from the algorithmic point of view instead.

imho, the invention and harnessing of the algorithm is roughly as significant
in human intellectual development as Pythagoras's original
realization about how mathematics modelled nature. its easily on that
order of magnitude as far as a milestone in human thought, possibly
surpassing it. it seems to me, fundamentally, algorithmics entails
and surpasses mathematics as a new simultaneously conceptual and physical
tool for analyzing the universe
and its variegated phenomena.

so think. we've basically got several millennia of mathematical
thought, dating all the way back to the Babylonians (who played
with perfect triangles, fractions etc), and quite well
developed in Greece 2000 years ago, reaching heights of sophistication
with calculus, or the abstraction in the 20th century.
Im saying to some degree, all that
is childs play compared to the new universe of algorithmics.


re: TCMs questions about some of my points.


1st, I believe that we will eventually get the math for a TOE
that matches accelerator/particle physics 
so perfectly that it will be considered
redundant or wasteful to do the expensive supercollider experiments, because
the accelerators will never find anything that does not match the 
comprehensive theory.

that is, after all, one of the big 
reasons to look for a TOE.  but I agree, until that point, 
physicists are not going to give up the big science..

a crazy thought? perhaps. but lets look at atomic weaponry testing--
thats essentially whats happened. the US has been simulating atomic
weapons testing for many years now with powerful supercomputers. and
obviously the results are considered ***extremely*** accurate.  it
can indeed be done on some level.


2nd-- alas, I wish I could cite a reference. but software is
used extremely heavily in particle physics experiments to 
automatically analyze particles and classify them and find
anomalous events. its basically
AI-like software, extremely sophisticated. it can look at
very complicated particle tracks and collisions and name
all the particle tracks based on analyzing the big picture.

this used to be done by humans, by hand, and (as I understand it)
the discovery of many particles from the last decade or so could
not have been done without this highly sophisticated sorting
software that can run through millions of events very quickly.

so there is a hidden story behind massive particle accelerators.
the software infrastructure for them is all invisible and
mostly unknown to the public, but its a vast edifice at
the core of the analysis, and has gone through revolutionary
changes in a short amount of time, mirroring the algorithmic
revolution elsewhere.

how much is that software worth?? I cant really estimate, but
I wouldnt be surprised if a significant percent of supercollider
budgets was spent on developing it.

if anyone knows references on software used in particle physics
analysis, I would really like to know myself.

a nice reference on the culture behind accelerators is
Beamtimes and Lifetimes by Sharon Traweek.