On 4/16/2019 11:08 PM, Telmo Menezes wrote:
On Wed, Apr 17, 2019, at 05:03, 'Brent Meeker' via Everything List wrote:
On 4/16/2019 6:10 AM, Telmo Menezes wrote:
On Tue, Apr 16, 2019, at 03:44, 'Brent Meeker' via Everything List
wrote:
You seem to make self-reference into something esoteric. Every
Mars Rover knows where it is, the state of its batteries, its
instruments, its communications link, what time it is, what its
mission plan is.
I don't agree that the Mars Rover checking its own battery levels
is an example of what is meant by self-reference in this type of
discussion. The entity "Mars Rover" exists in your mind and mine,
but there is no "Mars Rover mind" where it also exists. The entity
"Telmo" exists in your mind and mine, and I happen to be an entity
"Telmo" in whose mind the entity "Telmo" also exists. This is real
self-reference.
Or, allow me to invent a programming language where something like
this could be made more explicit. Let's say that, in this language,
you can define a program P like this:

program P:
    x = 1
    if x == 1:
        print('My variable x is holding the value 1')
The above is the weak form of self-reference that you allude to. It
would be like me measuring my arm and noting the result. Oh, my arm
is x cm long. But let me show what could be meant instead by real
self-reference:

program P:
    if length(P) > 1000:
        print('I am a complicated program')
    else:
        print('I am a simple program')
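For what it's worth, that stronger form can be roughly approximated in actual Python. This is my own illustrative sketch, not a real language feature: since Python has no built-in length(P), the program's text is handed to the program explicitly as a string, so it can test a property of itself the way length(P) does above.

```python
# Illustrative sketch only: the program's own text is bound to a variable
# (program_text) so the program can inspect itself, mirroring length(P).
program_text = """
if len(program_text) > 1000:
    msg = 'I am a complicated program'
else:
    msg = 'I am a simple program'
print(msg)
"""

# Run the program, giving it access to its own source text.
namespace = {'program_text': program_text}
exec(program_text, namespace)  # prints 'I am a simple program'
```

Of course this is only a partial approximation: the exec wrapper itself is not part of the text the program sees, so the self-reference stops one level short of a true quine.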
Do you accept there is a fundamental difference here?
I take your point. But I think the difference is only one of
degree. In my example the Rover knows where it is, lat and long and
topography. That entails having a model of the world, admittedly
simple, in which the Rover is represented by itself.
I would also say that I think far too much importance is attached to
self-reference. It's just a part of intelligence to run
"simulations" in trying to foresee the consequences of potential
actions. The simulation must generally include the actor at some
level. It's not some mysterious property raising up a ghost in the
machine.
With self-reference also comes self-modification. The self-replicators
of nature that slowly adapt and complexify, the brain "rewiring
itself"... Things get both weird and generative. I suspect that it
goes to the core of what human intelligence is, and what computer
intelligence is not (yet). But if you say that self-reference has no
magic property that explains consciousness, I agree with you.
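To make that pairing concrete with a toy (again my own hypothetical sketch, in the same spirit as the pseudocode earlier in the thread): a program whose text contains an instruction to append to that same text, so each run produces a longer successor of itself.

```python
# Hypothetical toy: the program's own text includes a line that appends
# to that text, so every execution yields a modified successor program.
program_text = (
    "program_text = program_text + '# grown\\n'\n"
    "size = len(program_text)\n"
)

ns = {'program_text': program_text}
for generation in range(3):
    # Each pass executes the current version of the program, which
    # rewrites itself; the next pass then runs the rewritten version.
    exec(ns['program_text'], ns)
```

After three runs the text has grown by three '# grown' comment lines. Nothing deep, but it shows self-reference and self-modification living in the same loop: the program can only modify itself because it can refer to itself.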
On consciousness I have nothing interesting to say (no jokes about
ever having had, please :). I think that:
consciousness = existence
Existence entails self-referential machines, self-referential
evolutionary processes, the whole shebang. But not the other way around.
Can't really be an equality relation then. It's
existence=>self-reference and maybe consciousness. But I'm not sure
what "=>" symbolizes. Not logical entailment. Maybe nomological
entailment?
Brent
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.