Samantha Atkins wrote:
On Dec 20, 2007, at 9:18 AM, Stan Nilsen wrote:
Ed,
I agree that machines will be faster and may have something equivalent
to the trillions of synapses in the human brain.
It isn't the modeling device that limits the "level" of intelligence,
but rather what can be effectively modeled. "Effectively" meaning
what can be used in a real time "judgment" system.
Probability is the best we can do for many parts of the model. This
may give us decent models but leave us short of "super" intelligence.
In what way? The limits of human probability computation to form
accurate opinions are rather well documented. Why wouldn't a mind that
could compute millions of times more quickly and with far greater
accuracy be able to form much more complex models that were far better
at predicting future events and explaining those aspects of reality which
are its inputs? Again, we need to get beyond the [likely religion-
instilled] notion that only "absolute knowledge" counts as real (or
"super") knowledge.
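The documented limits of human probability computation can be made concrete with the classic base-rate-neglect problem (the numbers below are the standard textbook ones, not anything from this thread): people routinely confuse P(disease | positive test) with P(positive test | disease), while the Bayesian calculation is a few lines of arithmetic.

```python
# Base-rate neglect: a test that is 99% accurate, for a condition with
# prevalence 1 in 1000. Most people guess P(disease | positive) is near
# 99%; Bayes' rule gives roughly 9%.
prevalence = 0.001
sensitivity = 0.99       # P(positive | disease)
false_positive = 0.01    # P(positive | healthy)

# Total probability of a positive result.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' rule: P(disease | positive).
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # -> 0.09
```

A machine that applies this rule quickly and consistently already avoids an error that is well documented in humans, which is Samantha's point about faster, more accurate probability computation.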
Allow me to address what I think the questions are (I'll paraphrase):
Q1. in what way are we going to be "short" of super intelligence?
resp: The simple answer is that the most intelligent of future
intelligences will not be able to make decisions that are clearly
superior to the best of human judgment. This is not to say that weather
forecasting won't improve as technology does, but that predictions and
decisions regarding the "hard" problems that fill reality will remain
hard and defy the intelligentsia's efforts to fully grasp them.
Q2. why wouldn't a mind with characteristics of ... be able to form more
complex models?
resp: By "more complex" I presume you mean having more "concepts" and
more "relevance" connections between concepts. If so, I submit that the
Wikipedia estimate of 1 to 5 quadrillion synapses in the human brain is
already major complexity, and if all those connections were properly
tuned, that would be awesome computing. Tuning seems to be the issue.
Q3 why wouldn't a mind with characteristics of ... be able to build
models that "are far better at predicting future events"?
resp: This is very closely related to the limits of intelligence, but
not the only factor contributing to intelligence. Predictable events
are easy in a few domains, but are they an abundant part of life?
Abundant enough to say that we will be able to make "super" predictions?
Billions of daily decisions are made, and any one of them could have a
butterfly effect.
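The butterfly-effect claim can be illustrated with a toy chaotic system (a standard textbook example, not anything from this discussion): the logistic map at r = 4, where two starting points differing by one part in a billion become completely decorrelated within a few dozen iterations, so no finite measurement precision yields "super" long-range prediction.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); fully chaotic at r = 4.
r = 4.0
x, y = 0.3, 0.3 + 1e-9   # initial conditions differ by one part in a billion

max_gap = 0.0
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

# The billionth-of-a-unit difference has been amplified by many orders
# of magnitude; the two trajectories are no longer close at all.
print(max_gap)
```

Errors grow roughly by a factor of two per step here, so even a millionfold improvement in measurement accuracy buys only about twenty extra steps of predictability.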
Q4 why wouldn't a mind... be far better able to explain "aspects of
reality"?
resp: May I propose a simple exercise? Consider yourself to be Bill
Gates in philanthropic mode (ready to give to the world). Make a few
decisions about how to do so, then explain why you chose the avenue you
took. If you didn't delegate this to a committee, would you be able to
explain how the checks you wrote were the best choices in "reality"?
Deeper thinking - that means considering more options, doesn't it? If
so, does extra thinking provide benefit if the evaluation system is
only at level X?
What does this mean? How would you separate "thinking" from the
"evaluation system"? What sort of "evaluation system" do you believe
can actually exist in reality that has characteristics different from
those you appear to consider woefully limited?
Q5 - what does it mean, or how do you separate thinking from an
evaluation system?
resp: A simple example in two statements:
1. Apple A is bigger than Apple B.
2. Apples are better than oranges.
Does it matter how much you know about apples and oranges? Will deep
thinking about the DNA of apples, the proteins of apples, the color of
apples, or the history of apples help to prove the second statement?
Will deep analysis of oranges prove anything?
Will fast and accurate recall of every related fact about apples and
oranges help in our proof of statement 2? Even if the second statement
had been "Apple A is better than Apple B," we would have had trouble
deciding whether the superior color of A outweighs the better taste of B.
This is what I mean by an evaluation system. Foolish example? If you
want a real-world version, substitute "economic prosperity is better
than CO2 pollution."
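The apples-and-oranges point can be sketched in a few lines (the attribute scores and weights below are invented purely for illustration): no amount of additional fact-gathering settles "better" until some value weighting is supplied, and different weightings reverse the verdict on identical facts.

```python
# Hypothetical attribute scores for two apples on a 0-10 scale
# (numbers invented for illustration).
apple_a = {"color": 9, "taste": 5}
apple_b = {"color": 4, "taste": 8}

def score(fruit, weights):
    """Total utility under a given value system (the 'evaluation system')."""
    return sum(weights[k] * v for k, v in fruit.items())

# An evaluator that prizes appearance prefers A...
looks_first = {"color": 0.8, "taste": 0.2}
# ...while one that prizes flavor prefers B. Same facts, opposite verdicts.
flavor_first = {"color": 0.2, "taste": 0.8}

print(score(apple_a, looks_first) > score(apple_b, looks_first))    # -> True
print(score(apple_a, flavor_first) < score(apple_b, flavor_first))  # -> True
```

Faster or deeper computation improves the `score` arithmetic, but it cannot tell us which `weights` dictionary is the right one; that choice sits outside the thinking machinery.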
Q6 - what sort of "evaluation system" can exist that has characteristics
differing from what I consider woefully limited.
resp: I'm not sure what gave the impression that I consider either
machine intelligence or human intelligence to be woefully limited. I
concede that machine intelligence will likely be as good as human
intelligence, and maybe better than the average human's. Is this super?
Was the "woefully limited" a reference to a personal opinion? Those
are not my words; I consider human intelligence a work of art, brilliant.
Yes, "faster" is better than slower, unless you don't have all the
information yet. A premature answer could be a jump to a conclusion
that we regret in the near future. Again, knowing when to act is
part of being intelligent. Future intelligences may value high-speed
response because it is measurable; it's harder to measure the quality
of the performance. This could be problematic for AIs.
Beliefs also operate in the models. I can imagine an intelligent
machine choosing not to trust humans. Is this intelligent?
If they have no more clarity than is exhibited here then yes, that is
probably an intelligent decision.
- samantha
Regarding the comment that there is no clarity here:
resp: Please forgive me. This is why I am working hard on a website
where I can distill and clarify, be it to the demise of my assertion or
in support of it. I suspect that a long-winded discussion on this list
is not appreciated. It is a little off-topic from achieving AGI.
stan
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&