On 12/20/2007 09:18 AM, Stan Nilsen wrote:
> I agree that machines will be faster and may have something equivalent
> to the trillions of synapses in the human brain.
>
> It isn't the modeling device that limits the "level" of intelligence,
> but rather what can be effectively modeled.  "Effectively" meaning
> what can be used in a real time "judgment" system.
I understand the essence of the point expressed here as "human beings
are about as effective as possible in their modeling already, given
constraints on what it is possible to model."  But that is not even
remotely plausible if you consider that human beings do not all have the
intellect of a William James Sidis or a John von Neumann. Do you believe
that 100,000 John von Neumann intellects working simultaneously on a
problem 24/7 would not represent a profound phase transition in
intelligence?

We already know that intelligence vastly superior to average human
intelligence is possible, since there have existed people like William
James Sidis and John von Neumann. Even if the von Neumann box were
nothing more than a million times faster than the real von Neumann, that
would be a profoundly different kind of intelligence, and it is likely
that the greater speed would allow for deeper, more complex cognitive
processes that are just not possible at 'normal' von Neumann speed.

There is of course more to intelligence than just raw speed, but an
intelligence that was fast enough to rediscover everything we know about
mathematics in 5 seconds from what was known 2500 years ago represents a
profoundly different kind of intelligence than any human intelligence
that has ever existed. And that is considering only speed, not depth of
thought, which is itself surely limited by speed.
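As a rough back-of-envelope check of what that claim implies (using only the hypothetical figures from the argument above, not measured quantities):

```python
# Speedup implied by compressing ~2500 years of mathematical progress
# into 5 seconds. These are illustrative numbers from the argument,
# not empirical data.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 seconds
human_duration_s = 2500 * SECONDS_PER_YEAR # ~7.89e10 seconds
machine_duration_s = 5

speedup = human_duration_s / machine_duration_s
print(f"Implied speedup: {speedup:.2e}x")  # on the order of 1.6e10x
```

That is roughly a ten-billion-fold speedup over the collective pace of human mathematics, which is the sense in which "profoundly different kind of intelligence" is meant here.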
>
> Probability is the best we can do for many parts of the model.  This
> may give us decent models but leave us short of "super" intelligence.

So 100,000 von Neumanns operating at 100,000 times the speed of the flesh
and blood von Neumann would not constitute a "super intelligence"?
Please give an argument to that effect and explain what you mean by
"super intelligence" if not something that is vastly superior to any
human that has ever existed according to current criteria for judging
intelligence.
>
> Deeper thinking - that means considering more options doesn't it?  If
> so, does extra thinking provide benefit if the evaluation system is
> only at level X?

The same cognitive processes that allow the most intelligent humans to
think faster and deeper would themselves run that much faster in the von
Neumann box, allowing even deeper thought per unit time.

>
> Yes, "faster" is better than slower, unless you don't have all the
> information yet.  A premature answer could be a jump to conclusion
> that we regret in the near future.
All other things being equal, faster is better than slower, regardless
of anything else. Prematurely jumping to a conclusion is a cognitive
error. *Faster* in no way implies jumping to conclusions prematurely.
You seem to be inferring that, because some humans jump to erroneous
conclusions when they don't take enough time to think things through,
there is some causal connection between speed of thought and premature
conclusions. The real connection is between flawed reasoning and jumping
to conclusions prematurely; it has nothing to do with speed in and of
itself. Simpletons have also been known to jump to conclusions
prematurely.

> Again, knowing when to act is part of being intelligent.  Future
> intelligences may value high speed response because it is measurable -
> it's harder to measure the quality of the performance.  This could be
> problematic for AI's.

Future AIs could also realize that it would be foolish in the extreme to
pay attention only to speed of response and not to quality. If you are
able to consider this, what makes you think that the point would not
occur to 100,000 von Neumanns?
>
> Beliefs also operate in the models.  I can imagine an intelligent
> machine choosing not to trust humans.  Is this intelligent?
>

If you mean never trust any human being, ever, then probably not
intelligent, unless an awful lot happens between now and then. If you
mean blindly trust all human beings, then surely unintelligent. I
believe that 100,000 von Neumanns would take an intermediate position
and trust or not trust on the basis of past behavior and character of
the individual, consequences of trusting or not trusting, the
particulars of the case under consideration, and probably factors that
have not even occurred to us.

In general, you seem to be starting from the conclusions you would like
to be the case ("faster has no relation to better", "faster has no
relation to 'able to think deeper per unit time'", "humans are as good
as it can possibly get"), and then stating without supporting evidence
that "it might turn out to be this way." I'm not sure whether you meant
your statements to be compelling arguments, or just idle musings about
'what might turn out to be the case', like 'if pigs could fly, ...'. If
the latter, then ignore everything I've said.

-j.k.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email