On 10/9/2012 12:28 PM, Bruno Marchal wrote:
On 09 Oct 2012, at 13:22, Stephen P. King wrote:
On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Question: Why has little if any thought been given in AGI to
self-modeling and some capacity to track the model of self under
the evolutionary transformations?
It's probably because AIs have not needed to operate in
environments where they need a self-model. They are not members of
a social community. Some simpler systems, like Mars Rovers, have
limited self-models (where am I, what's my battery charge,...) that
they need to perform their functions, but they don't have general
self-models.
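To make the contrast concrete, here is a minimal sketch of what such a "limited self-model" might look like. The names (RoverSelfModel, battery_charge, can_attempt) are invented for illustration and do not come from any real rover software: the point is only that the model tracks just the internal state the system's tasks require.

```python
# Hypothetical sketch of a limited self-model (all names invented):
# it answers "where am I" and "what's my battery charge" and nothing more.
from dataclasses import dataclass

@dataclass
class RoverSelfModel:
    """Tracks only the internal state needed for the rover's tasks."""
    position: tuple        # (x, y) estimate of "where am I"
    battery_charge: float  # fraction of full charge, 0.0 to 1.0

    def can_attempt(self, task_cost: float) -> bool:
        # A task is feasible only if the model predicts enough charge
        # remains afterwards to stay above a safety margin of 0.2.
        return self.battery_charge - task_cost > 0.2

rover = RoverSelfModel(position=(0, 0), battery_charge=0.9)
print(rover.can_attempt(0.5))  # True:  0.9 - 0.5 = 0.4 > 0.2
print(rover.can_attempt(0.8))  # False: 0.9 - 0.8 = 0.1 <= 0.2
```

Nothing here models the rover's own reasoning, which is exactly what a "general" self-model would add.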
Could the efficiency of the computation be subject to modeling?
My thinking is that if an AI could rewire itself for some task to
solve that task more efficiently...
Betting on self-consistency, and variants of that idea, shortens
proofs and speeds up computations, sometimes in the "wrong direction".
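One standard way to read this (my gloss, not a quote from the thread) is via the Gödel speed-up phenomenon: a machine that "bets" on the consistency statement Con(T) of its own theory T can prove some sentences with vastly shorter proofs than T alone allows, roughly:

```latex
% Speed-up sketch (standard result, added here for context):
% for suitable sentences \varphi and any fixed recursive bound f,
\min\{\,|p| : p \text{ proves } \varphi \text{ in } T\,\} > f(n),
\qquad
\min\{\,|q| : q \text{ proves } \varphi \text{ in } T+\mathrm{Con}(T)\,\} \le n.
```

The bet pays off as a dramatic shortening of proofs; but if T were in fact inconsistent, the same bet accelerates the machine in the "wrong direction".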
Could you elaborate a bit on the betting mechanism, so that it is
clearer how the shortening of proofs and the speed-up of computations obtain?
On almost all inputs, universal machines (creative sets, by Myhill's
theorem, and in a sense of Post) have the alluring property to be
This is a measure issue, no?
Of course the trick is in "on almost all inputs", which means all
except a finite number of exceptions, and this concerns more evolution
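For readers who have not met the terminology, here are the standard definitions being invoked (added for context, not part of the original post). A creative set, in Post's sense, is a computably enumerable set whose complement effectively resists enumeration; Myhill's theorem identifies the creative sets with the many-one complete ones, i.e. with the degree of the halting set:

```latex
% C is creative iff C is c.e. and there is a computable "productive"
% function p for its complement: for every c.e. set W_e,
W_e \subseteq \overline{C} \;\Rightarrow\; p(e) \in \overline{C} \setminus W_e.
% Canonical example: the halting set
K = \{\, e : \varphi_e(e)\!\downarrow \,\}.
% Myhill's theorem: C is creative \iff C \equiv_m K \text{ (many-one complete)}.
```

This is why "universal machine" and "creative set" can be used almost interchangeably in the remark above.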
Evolution is basically computation + the halting oracle, implemented
with physical time (which is itself based on computation +
self-reference + arithmetical truth).
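A toy way to see the "computation + halting oracle" picture (my illustration, not Marchal's): selection within bounded physical time acts like a timeout, weeding out candidates that never halt. The candidate "organisms" and the step budget below are invented for the example; a real oracle cannot be computed, and the timeout only approximates it.

```python
# Toy illustration: selection under a step budget approximates a
# halting oracle. All candidate "organisms" here are invented examples.

def run_with_budget(program, budget):
    """Step a generator-based 'program'; return its result, or None on timeout."""
    gen = program()
    result = None
    for _ in range(budget):
        try:
            result = next(gen)
        except StopIteration:
            return result  # halted within the budget
    return None  # treated as non-halting (the oracle approximation)

def halts_quickly():   # halts after a few steps
    for i in range(3):
        yield i

def diverges():        # never halts on its own
    while True:
        yield 0

population = [halts_quickly, diverges]
survivors = [p.__name__ for p in population
             if run_with_budget(p, budget=100) is not None]
print(survivors)  # ['halts_quickly']
```

The finite budget is where "physical time" enters: it turns an uncomputable yes/no question into a fallible but usable selection pressure.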
So you are equating selection by fitness in a local environment
with a halting oracle?
You received this message because you are subscribed to the Google Groups
"Everything List" group.