On 26 Oct 2013, at 11:54, Craig Weinberg wrote:
On Saturday, October 26, 2013 5:18:14 AM UTC-4, Bruno Marchal wrote:
On 26 Oct 2013, at 10:41, Craig Weinberg wrote:
On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:
On 25 Oct 2013, at 19:33, meekerdb wrote:
On 10/25/2013 3:08 AM, Telmo Menezes wrote:
Now take the game of go: human beings can still easily beat
even the most powerful computer currently available. Go is much
more combinatorially explosive than chess, so it breaks the search-tree
approach. This is strong empirical evidence that Deep Blue
accomplished nothing in the field of AI -- it did accomplish
something remarkable in the field of computer engineering, or maybe
even computer science, but it completely side-stepped the
"intelligence" part. It cheated, in a sense.
When I studied AI many years ago it was already said that,
"Intelligence is whatever computers can't do yet."
I think Douglas Hofstadter said that, actually. Right on topic!
So when computers can win at Go, will they be intelligent then?
Computers are intelligent.
When they win at Go, and other things, they might begin to
believe that they are intelligent, and this means they begin to be
stupid. Their souls will fall, and they will get terrestrial hard lives,
like us. They will fight for social security, and defend their rights.
Couldn't there just be a routine that traps the error of believing
they are intelligent?
Not at all.
If you find such a routine, you will believe that you can't make that error,
Why not just write a routine which runs in a separate partition, so
that the UM doesn't even know it's running? It's just a humility
routine.
G* is a bit like that. But if you keep the thermostat separated, then
it is not part of the machine; if you link them in some way, then the
machine changes and becomes a new machine, and you will need a new
thermostat for her.
but that would be by itself the same error, or you lose your
universality.
Does every part of the universal machine have to be universal?
A priori no part of a (simple) universal machine will be universal.
Like no part of an adder is an adder.
Since you are a machine that understands that believing you are
intelligent is stupid, why do you still have to have a terrestrial
hard life?
Enlightened states can be close to that, so by altering your
consciousness, or perhaps just "dying", you might be able to
remember that being human is not your most common state, but that
can't be used directly on the terrestrial plane.
But since you got to the terrestrial plane by falling from grace,
how can grace ever be regained in the universe if even enlightenment
does not restore it?
Well, according to some theories, enlightenment restores it, for a period
of time (in the 3p description; the 1p here is harder to describe).
The hard part is when, and if, you come back to earth in that state,
because you regain the "reason" why you are not enlightened: you
recover the (perhaps bad) memories and experiences.
But I don't know why you say that enlightenment does not restore it,
at least locally.
There is something deep at play here: an inborn tension
between the biological and the theological. Biology is like cannabis:
it wants life to develop. Theology is like salvia: it does not care
much about life, only about the afterlife, parallel lives, others' lives, and
beyond. But the self-reference logic, even of the simple correct
machines, justifies the existence of many conflicts between all the
self-referential points of view.
We are not human beings having divine experiences from time to
time, but divine beings having human experiences from time to
time. (+/- Chardin).
I agree, although I would say that we are Absolute experiences being
qualified as human.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
Visit this group at http://groups.google.com/group/everything-list.