I am getting a bit tired of our discussion, so I will just address the main
points:


Jason Resch-2 wrote:
> 
>>
>>
>> Jason Resch-2 wrote:
>> >
>> >>
>> >> But let's say we mean "except for memory and unlimited accuracy".
>> >> This would mean that we are computers, but not that we are ONLY
>> >> computers.
>> >>
>> >>
>> > Is this like saying our brains are atoms, but we are more than atoms? 
>> I
>> > can agree with that, our minds transcend the simple description of
>> > interacting particles.
>> >
>> > But if atoms can serve as a platform for minds and consciousness, is
>> there
>> > a reason that computers cannot?
>> >
>> Not absolutely. Indeed, I believe mind is all there is, so necessarily
>> computers are an aspect of mind and are even conscious in a sense
>> already.
>>
> 
> Do you have a meta-theory which could explain why we have the conscious
> experiences that we do?
> 
> Saying that mind is all there is, while possibly valid, does not explain
> very much (without some meta-theory).
No, I don't even take it to be a theory. In this sense you might say it
doesn't explain anything on a theoretical level, but that is just because
reality doesn't work based on any theoretical concepts (though it can
obviously be described by them, and incorporates them).


Jason Resch-2 wrote:
> 
>>
>>
>> Jason Resch-2 wrote:
>> >
>> > Short of adopting some kind of dualism (such as
>> > http://en.wikipedia.org/wiki/Biological_naturalism , or the idea that
>> God
>> > has to put a soul into a computer to make it alive/conscious), I don't
>> see
>> > how atoms can serve as this platform but computers could not, since
>> > computers seem capable of emulating everything atoms do.
>> OK. We have a problem of level here. On some level, computers can emulate
>> everything atoms can do computationally, I'll admit that.  But that's
>> simply
>> the wrong level, since it is not about what something can do in the sense
>> of
>> transforming input/output.
>> It is about what something IS (or is like).
>>
> 
> Within the simulation, isn't a simulated atom like a real atom (in our
> reality)?
There is no unambiguous answer to this question IMO.

But it only matters that the simulated atom is not like the real atom with
respect to our reality - the former can't substitute for the latter with
respect to reality.


Jason Resch-2 wrote:
> 
>>
>>
>> Jason Resch-2 wrote:
>> >
>> >>
>> >> Jason Resch-2 wrote:
>> >> >
>> >> >> Jason Resch-2 wrote:
>> >> >> >
>> >> >> >> since this is all that is required for my argument.
>> >> >> >>
>> >> >> >> I (if I take myself to be human) can't be contained in that
>> >> definition
>> >> >> >> because a human is not a computer according to the everyday
>> >> >> >> definition.
>> >> >> >
>> >> >> > A human may be something a computer can perfectly emulate,
>> therefore
>> >> a
>> >> >> > human could exist with the definition of a computer.  Computers
>> are
>> >> >> > very powerful and flexible in what they can do.
>> >> >> That is an assumption that I don't buy into at all.
>> >> >>
>> >> >>
>> >> > Have you ever done any computer programming?  If you have, you might
>> >> > realize that the possibilities for programs go beyond your
>> >> imagination.
>> >> Yes, I studied computer science for one semester, so I have programmed
>> a
>> >> fair amount.
>> >> Again, you are misinterpreting me. Of course programs go beyond our
>> >> imagination. Can you imagine the Mandelbrot set without computing it
>> on
>> >> a
>> >> computer? It is very hard.
>> >> I never said that they can't.
>> >>
>> >> I just said that they lack some capability that we have. For example
>> they
>> >> can't fundamentally decide which programs to use and which not and
>> which
>> >> axioms to use (they can do this relatively, though). There is no
>> >> computational way of determining that.
>> >>
>> >
>> > There are experimental ways, which is how we determined which axioms to
>> > use.
>> Nope, since for the computer no experimental way exists if we haven't
>> determined a program first.
>>
>>
> You said computers fundamentally cannot choose which programs or axioms to
> use.
> 
> We could program a computer with a neural simulation of a human
> mathematician, and then the computer could have this capability.
That would just strengthen my point (note the words "we could program",
meaning "we choose the program").


Jason Resch-2 wrote:
> 
>>
>> Jason Resch-2 wrote:
>> >
>> >  If the computer program had a concept for desiring novelty/surprises,
>> it
>> > would surely find some axiomatic systems more interesting than others.
>> Sure. But it could be programmed not to have such a concept, and there
>> is
>> no way of determining whether to use it or not if we haven't already
>> programmed an algorithm for that (which again had the same problem).
>>
>> In effect you get an infinite regress:
>> How do we determine which program to use? ->use a program to determine it
>> But which? ->use a program to determine it
>> But which? ->use a program to determine it
>> ....
>>
>>
> Guess and check, with random variation, it worked for evolution.
But which guessing-and-checking program to use? ->use a more general
guessing-and-checking program to determine it
But which? ->use an even more general guessing-and-checking program to
determine it
etc....

You still never arrive at a program; in fact, the problem becomes more
difficult each time you ask the question, because the program would have to
be more general.
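The regress above can be sketched as a toy Python program (the function
names and the recursion cutoff are hypothetical illustrations, not anyone's
actual proposal): any computational rule for choosing a program is itself a
program, so the chooser needs a meta-chooser, and the recursion never
bottoms out.

```python
def choose_program(candidates, level=0):
    """Try to pick a program purely computationally.

    Any rule for choosing is itself a program, so choosing it requires a
    more general chooser one level up - the recursion has no computational
    base case.
    """
    if level > 5:  # artificial cutoff; without it this recurses forever
        return None  # no program was ever actually chosen
    # To rank the candidates we would need a chooser-of-choosers:
    meta_candidates = [f"chooser_level_{level + 1}"]
    chooser = choose_program(meta_candidates, level + 1)
    if chooser is None:  # the regress never terminated
        return None
    return candidates[0]

print(choose_program(["program_a", "program_b"]))  # prints: None
```

However deep the cutoff is set, the result is the same: the choice is never
grounded computationally, which is the point of the regress.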

 
Jason Resch-2 wrote:
> 
>> > You're crossing contexts and levels.  Certainly, a heart inside a
>> computer
>> > simulation of some reality isn't going to do you any good if you exist
>> on
>> > a
>> > different level, in a different reality.
>> So you are actually agreeing with me? - Since this is exactly the point I
>> am
>> trying to make.
>> Digital models exist on a different level than what they represent, and
>> it
>> doesn't matter how good/accurate they are because that doesn't bridge the
>> gap between model and reality.
>>
> 
> But what level something is implemented in does not restrict the
> intelligence of a process.
This may be our main disagreement.
It boils down to the question of whether we assume intelligence = (Turing)
computation.
We could embrace this definition, but I would rather not, since it doesn't
fit with my own conception of intelligence (which also encompasses
instantiation and interpretation).

But for the sake of discussion I can embrace this definition, and in that
case I agree with you. Then we might say that computers can become more
intelligent than humans (and maybe already are), because they manifest
computations more efficiently than humans do.

Jason Resch-2 wrote:
> 
>> Jason Resch-2 wrote:
>> >
>> >> And this seems to be empirically true because there is pretty much no
>> >> other
>> >> way to explain psi.
>> >>
>> >
>> > What do you mean by psi?
>> Telepathy, for example.
>>
>>
> Are you aware of any conclusive studies of psi?
That depends on what you count as conclusive. For hard-headed
skeptics no study will count as conclusive.

There are plenty of studies that show results *far* beyond chance,
though.
The so-called "anecdotal evidence" is also extremely strong.


Jason Resch-2 wrote:
> 
>>
>> Jason Resch-2 wrote:
>> >
>> >>
>> >>
>> >> Jason Resch-2 wrote:
>> >> >
>> >> >> I am not saying that nature is infinite in the way we picture it.
>> It
>> >> may
>> >> >> not
>> >> >> fit into these categories at all.
>> >> >>
>> >> >> Quantum mechanics includes true subjective randomness already, so
>> by
>> >> your
>> >> >> own standards nothing that physically exists can be emulated.
>> >> >>
>> >> >>
>> >> > The UD also contains subjective randomness, which is at the heart of
>> >> > Bruno's argument.
>> >> No, it doesn't even contain a subject.
>> >>
>> >> Bruno assumes COMP, which I don't buy at all.
>> >>
>> >>
>> > Okay.  What is your theory of mind?
>> I don't have any. Mind cannot be captured or even described at the
>> fundamental level at all.
>>
> 
> That doesn't seem like a very useful theory.  Does this theory tell
> you whether or not you should take an artificial brain if it was the only
> way to save your life?
Of course it is not a useful theory, since it is not a theory in the first
place.
To answer your question: No. There is no theoretical way of deciding that.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34348098.html
Sent from the Everything List mailing list archive at Nabble.com.
