Ben, I got your point, but it seems mine wasn't clear enough... It is:

*My theoretical view has a scale: it lies in a metric space, there are
measures for precedence, succession and distance, and one can always say
how well an AGI covers what, or how far an AGI is from a given human-level
goal, based on the model of human history.*

The other "theoretical" views rest on nothing but random facts and measures
which are not well justified as indicators of "human-levelness".
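To make the "scale in a metric space" idea concrete, here is one minimal way such a scale could be sketched. This is entirely my illustrative construction, not a definitive formalization: human development is modeled as milestones ordered by typical age (the precedence/succession measure), and a system's distance to a human-level goal is the fraction of the ordered path it has not yet covered. The milestone names and ages are toy assumptions.

```python
# Illustrative developmental "scale": milestones ordered by typical human
# age give precedence/succession; distance to a goal milestone is the
# fraction of the path to it that a system has NOT yet covered.
# Ages are rough assumptions, for illustration only.
milestones = [
    ("answers simple questions", 1.0),   # approximate age in years
    ("says 3-word sentences", 2.0),
    ("uses past tense", 3.0),
    ("plays a tune on a piano", 7.0),
    ("graduates from MIT", 22.0),
]

def distance_to_goal(covered, goal):
    """Fraction of the ordered path to `goal` not yet covered (0 = reached)."""
    goal_age = dict(milestones)[goal]
    path = [name for name, age in milestones if age <= goal_age]
    done = sum(1 for m in path if m in covered)
    return 1 - done / len(path)

# A system that only answers simple questions is still far from MIT:
d = distance_to_goal({"answers simple questions"}, "graduates from MIT")
print(f"distance to 'graduates from MIT': {d:.2f}")
```

The point of the sketch is only that, once the milestones and their ordering come from the human record, "how far is system X from goal G" becomes a computable quantity rather than a matter of consensus.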

Regarding Kurzweil's view, he obviously loves to impress exactly those
laypeople who don't have any clue about what he is talking about (see
below also, about the self-driving cars).

So, on what scale do Watson's achievements lie (besides the... laypeople's
and some AI researchers' applause)? What was its previous step, what next
step does it imply, and on what theoretical basis is that prediction made?
This theoretical basis should be intrinsic to Watson, not to the general
trend of rising computing-hardware performance.

Let me analyse some of the achievements as I saw them when I heard about
this "amazing" machine:

- Speech recognition, speech-to-text, done with a 2010-2011 supercomputer,
for one voice? Whoaaa, that is truly amazing! That has been a feature of
PCs, for dictation, commands etc., since Windows XP.
- Information retrieval and finding exactly matching text? Indexing, bags
of words, regexes, parsing, semantic similarity/synonymy... Incredible!
- Putting data into databases? OCR? Whoa... Inputting books into databases
(typing...), then segmenting and indexing them based on what types of
questions are expected and what types of answers?
- Speech synthesis? That is the worst part: an absurdly poor synthesis; a
student could develop a better one from scratch in a month.
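To put the "information retrieval" point in perspective: the core of keyword-based question answering is a decades-old technique. A minimal bag-of-words retrieval sketch, using only the standard library, with a toy three-fact "knowledge base" of my own invention (this is of course not Watson's actual pipeline, just the textbook baseline it builds on):

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters
    cleaned = ''.join(c if c.isalnum() else ' ' for c in text.lower())
    return cleaned.split()

def tf_vector(text):
    # Bag-of-words term-frequency vector
    return Counter(tokenize(text))

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "knowledge base" standing in for indexed encyclopedia text
facts = [
    "Sofia is the capital of Bulgaria.",
    "Deep Blue defeated Garry Kasparov at chess in 1997.",
    "Watson played Jeopardy on a cluster of POWER7 servers.",
]

def best_match(question):
    # Return the stored fact most similar to the question
    q = tf_vector(question)
    return max(facts, key=lambda f: cosine(q, tf_vector(f)))

print(best_match("What is the capital of Bulgaria?"))
```

A real system adds synonymy, parsing, answer-type detection and massive parallelism on top, but the retrieval skeleton is this old and this simple.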

And people say "whoaaa": the effect of not having a clue, of seeing
something on a top-rated TV show, from a high-ranked company, praised by
"experts" (like the "doctors" and "dentists" in TV advertisements).

Maybe I'm too harsh, but to me there is one true achievement: a
*demonstration of computing performance*. However, it is achieved by
running on a supercomputer, so it's hard to judge whether its design is
efficient.

http://www.eweek.com/c/a/IT-Infrastructure/IBMs-Watson-16-Things-You-May-Not-Know-About-Its-Hardware-Muscle-491431/
"...Watson contains 15 terabytes of RAM and 2,870 processor cores. It can
operate at 80 teraflops (80 trillion operations per second)."

> And certainly, many laypeople feel Watson is major progress toward AGI.
> Based on their own commonsense observation, not based on listening to any
> snobbish community of experts ;p ...

That's no better, and I don't fully agree about the experts: laypeople
believe that encyclopediae are intelligent and are amazed by answers to
random factual questions. As for the journalists, they used to call
computers "electronic brains" in the past; even recently Bulgarian
reporters called an IBM supercomputer installed in Sofia an "artificial
intelligence". :))

As for the snobbish community of experts, it is still there: the label of
the big company, the rating of the TV show, the credit given to the
reporters who exclaim "whoooa, incredible, ladies and gentlemen!"

"Jeopardy!" is a dumber game than "Who Wants to Be a Millionaire": it
requires pure recall of random facts. Still, people have considered those
TV games a display of "intelligence". Yes, superior human-level
intelligence goes with better memory and curiosity, which implies the
capability to know and hold more in memory.

However, people mistake information retrieval and fact-remembering for
human-level intelligence, because it seems "hard": at school they are
trained to study meaningless facts, or it's their own fault that they
don't get the big picture and remember things as bare facts.

*In this regard, by human-level measures, "Watson" is an idiot savant. An
autist, a "Rain Man"... No: much less than a "Rain Man".*

People are impressed by idiot savants' skills because, generally,
"human-level memory" is poor.

However, *the issue with the scales and measurements is that:*

*On a human scale, idiot savants are not expected to display progress out
of their condition, unlike a healthy 1-year-old baby that can answer only
a few simple questions. :)*

As for the true previous steps toward Watson (the ones I know of):
- Cat neocortex simulation
- Deep Blue
- Deep Thought
(...)
- Stretch

> Yet, you apparently don't agree that Watson constitutes major progress
> toward AGI, because Watson does not deal with a baby's cognitive business
> ;/ ...
> This is the point I was making. Based on your theoretical view, Watson
> seems not to be major incremental progress toward human-level AGI. Based
> on Ray Kurzweil's theoretical view, it does seem to be.... Many laypeople
> agree with him, many agree with you.... Even though most laypeople and
> you and Ray would all pretty much agree on what constitutes reaching the
> end goal of human-level AGI...

Many also agree that self-driving cars were an amazing, revolutionary new
invention, etc. (I've noticed Kurzweil on this too: his "unbelievable"
prediction of their construction, and how people thought it was science
fiction.)

In fact, it's not that this prediction is "amazing"; it's that people are
clueless and know nothing about the history, the state of the art and the
trends.

Of course, especially those who heard about it for the first time in 2009
saw it as alien technology, but there were such works, with real
self-driving cars driving across the USA, as early as the 1990s, if I'm
not mistaken, and it's one of the obviously solvable problems. It's clear
what has to be done, and there are clear real-time and precision
checkpoints: the car has to be at least this fast, to recognize those
signs, those paths, those risks of collision, those other cars, etc.

I hope at least some of the guys here are aware that at Stanford (the
winners of the 2005 DARPA Grand Challenge) they had autonomous
self-driving carts, which navigated using vision, including stereo vision,
back in the 1970s.
http://www.stanford.edu/~learnest/cart.htm

There were industrial robots back in the early 1970s which used computer
vision to operate, such as a Hitachi model with a PDP-6, a 320x240 5-bit
grayscale camera, 32 KWords x 16-bit RAM, and 4.5 microseconds per
addition. I've noted it in my old reading diary. :)) Slow, but
operational, 40 years back.
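As a rough, purely arithmetical comparison, using only the figures quoted in this thread (the 4.5 µs add time above and the 80 teraflops from the eWEEK article; the two numbers measure different kinds of operations, so this is an order-of-magnitude illustration, not a rigorous benchmark):

```python
# Order-of-magnitude comparison: 1970s vision-robot CPU vs. Watson's
# cluster, based solely on figures quoted in this thread.
pdp6_add_time_s = 4.5e-6               # 4.5 microseconds per addition
pdp6_ops_per_s = 1 / pdp6_add_time_s   # ~222,000 additions per second
watson_ops_per_s = 80e12               # 80 teraflops, per the eWEEK article

ratio = watson_ops_per_s / pdp6_ops_per_s
print(f"PDP-6: ~{pdp6_ops_per_s:,.0f} additions/s")
print(f"Watson's raw throughput is roughly {ratio:.1e} times higher")
```

So the hardware Watson runs on is on the order of a few hundred million times faster than what those 1970s systems had, which is exactly why raw performance demonstrations say little about design efficiency.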


.... Todor "Tosh" Arnaudov ....

.... Twenkid Research: http://research.twenkid.com

.... Author of the world's first university courses in AGI (2010, 2011):
http://artificial-mind.blogspot.com/2010/04/universal-artificial-intelligence.html

.... Todor Arnaudov's Research Blog:
http://artificial-mind.blogspot.com





> From: Ben Goertzel <[email protected]>
> To: [email protected]
> Subject: Re: [agi] Partial Validation and Incremental Evidence Does Not
> Have to be Theory-Circular
> Date: Tue, 1 Jan 2013 20:43:34 -0500
> Todor,
>
>> If a system can't deal even with a baby's cognitive business and it
>> requires some kind of a snobbish "community" to agree whether some work
>> is progress or not, then it's apparently not real progress.
>> Artificial "consensus" is not progress, it's politics, vanity fair, a
>> way to persuade other people who don't have a clue that this is
>> something "scientific" and that there's progress - some numbers now are
>> bigger than they were.
>
> Some experts think that Watson is major progress toward AGI. Ray
> Kurzweil, e.g., has said so...
> And certainly, many laypeople feel Watson is major progress toward AGI.
> Based on their own commonsense observation, not based on listening to any
> snobbish community of experts ;p ...
> Yet, you apparently don't agree that Watson constitutes major progress
> toward AGI, because Watson does not deal with a baby's cognitive business
> ;/ ...
> This is the point I was making. Based on your theoretical view, Watson
> seems not to be major incremental progress toward human-level AGI. Based
> on Ray Kurzweil's theoretical view, it does seem to be.... Many laypeople
> agree with him, many agree with you.... Even though most laypeople and
> you and Ray would all pretty much agree on what constitutes reaching the
> end goal of human-level AGI...
> -- Ben G



On Wed, Jan 2, 2013 at 2:07 AM, Todor Arnaudov <[email protected]> wrote:

> It's more than obvious:
>
> So far there's one only meaningful measure and example of "intelligence"
> and "general intelligence", that's what we want to create, an entity that's
> functionally comparable and better.
>
> There's one way that one can estimate how far a particular person (with
> their particular history/behavioral records, cognitive
> capacity/capabilities/talents/skills) is:
>
> - from graduating from MIT (one must specify what exactly is meant by
> half-way - half the age from 0, or half of the curriculum) or
> - from saying sentences with 3 words
> - from running or jumping
> - from using past tense (in particular language, with particular mistakes)
> - from playing a tune on a piano,
> - from asking a question about so-and-so
> - from giving a particular typical answer of a particular typical
> question, with her particular personal history/experience in particular
> environments and particular interactions with other people, particular
> vocabulary etc.
>
> It's by comparing to a model human who has gone through it or averaged
> model humans.
>
> That's the only meaningful and justified/natural way to measure how far
> another non-human system is from a human-level AGI, and the only
> "human-level".
>
> No additional justification and "consensus" are required, and nobody can
> question it - well, they can, but that would be like questioning the thing
> you call "intelligence" (that yet "nobody knows what it is") of you
> yourself or of your child.
>
> If a system can't deal even with a baby's cognitive business and it
> requires some kind of a snobbish "community" to agree whether some work
> is progress or not, then it's apparently not real progress.
>
> Artificial "consensus" is not progress, it's politics, vanity fair, a
> way to persuade other people who don't have a clue that this is something
> "scientific" and that there's progress - some numbers now are bigger than
> they were.
>
> As of real results and real progress - they will be obvious for a dog, the
> "numbers" will become infinitely bigger than what they were before.
>
>
>
>>
>> From: Ben Goertzel <[email protected]>
>> To: [email protected]
>> Subject: Re: [agi] Partial Validation and Incremental Evidence Does Not
>> Have to be Theory-Circular
>> Date: Tue, 1 Jan 2013 14:15:32 -0500
>>
>> Jim,
>> My point was as follows. Suppose one is trying to achieve a certain
>> complex goal, G.
>> Suppose that everyone in a certain community agrees on a test that would
>> validate "achievement of G."
>> Even so, the community may not be able to agree on a test to validate
>> "having proceeded 50% of the way to G." Given two partial achievements
>> of G, they may not agree on which constitutes greater progress.
>> If G is "race 1000 meters along a track", then probably everyone involved
>> can agree that once someone has raced 500 meters along that track, they are
>> halfway there...
>> But suppose G is "make a robot that can graduate from MIT."
>> Then, consider
>> -- one group has made a robot that can walk around the MIT campus, sit
>> in the desks there, and generally carry out the physical movements
>> required to go to MIT. It can also deal with some of the physical
>> aspects of social interaction -- staying out of the way of human
>> students in the hallway, looking at the professor when he's talking,
>> etc. But it doesn't understand what the professors are saying. (call
>> this A)
>> -- another group has made a program that can pass the exams for a number
>> of MIT classes, when fed the exams in a structured XML format (call this B)
>> How close is A to goal G? How close is B to goal G? Is group A almost
>> to the end-goal G, or have they just dealt with trivial, mechanical
>> parts of the problem? Is group B almost to the end-goal G, or have they
>> just dealt with some parts of the problem in such an artificial way that
>> it doesn't genuinely constitute progress toward G?
>> My point is, the assessment of the distance between A and G, and the
>> distance between B and G, depends on one's theory of AGI...
>> For those who think human-level AGI is mostly about embodied interaction
>> with the world, A is almost to G, and B doesn't really constitute much
>> progress toward G.
>> For those who think human-level AGI is mostly about rational, symbolic,
>> linguistic thinking, B is almost to G, and A doesn't really constitute much
>> progress...
>> This exemplifies what I mean when I say that measurement of incremental
>> progress toward human-level AGI is theory-dependent...
>> Even if different people agree on the end goal G, they may disagree on
>> which aspects of G are most difficult or most critical, and hence they
>> may differ radically in the amount of successfulness they attribute to
>> various partial achievements of the end goal (like A and B) ...
>> -- Ben G
>> On Tue, Jan 1, 2013 at 12:53 PM, Jim Bromer <[email protected]> wrote:
>> On Mon, Dec 31, 2012 at 11:18 AM, Ben Goertzel <[email protected]> wrote:
>> That's the problem with partial validation and incremental evidence --
>> its interpretation is highly theory-dependent...
>>
>> The interpretation is theory-dependent since you are validating or
>> examining evidence of the theory. That is trivially true. If you are
>> suggesting that you can only use the theory to test or examine the
>> evidence, then that is just plain wrong. Even using the theories and
>> methods that you used in the programming, by sometimes using critical
>> attacks against your own theories you can find flaws and weaknesses.
>> And you can use other kinds of theories to examine the nature of your
>> theories and programs.
>>
>> If you want me to comment on your comments less often, just let me know.
>>
>
> Jim Bromer
>
> --
.... Todor "Tosh" Arnaudov ....

.... Twenkid Research: http://research.twenkid.com

.... Self-Improving General Intelligence Conference:
http://artificial-mind.blogspot.com/2012/07/news-sigi-2012-1-first-sigi-agi.html

.... Todor Arnaudov's Research Blog:
http://artificial-mind.blogspot.com



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now