PM: "Everyone seems to be appropriating the definition of AGI for himself /
herself.
"This is what AGI means." rather than "This is what AGI means to me."

Partly yes, mainly no. There is enough agreement about what an AGI has to
do. It has to be

A) General vs specialist:
   i) extend any skill into new fields - walk round not just one familiar
factory, but endless new terrains (like humans) - i.e. deal with new
objects by producing new actions
   ii) independently acquire new skills - as humans do, with some but
incomplete guidance - which again means dealing with new objects by
producing new actions - e.g. switch from walking to skating, or from
playing tennis to hockey

B) Real world vs artificial world - operate in:
   i) real-world physical environments, such as fields, streets and
domestic rooms, as distinct from the present artificial environments,
e.g. factories and labs - and therefore deal with the novel
objects/obstacles that real-world environments throw up - as humans do -
by producing new actions
   ii) real-world mental environments - i.e. be capable of real-world
reasoning, as typified in science/technology/arts/history/everyday
reasoning, as distinct from classic logicomathematical reasoning - and
therefore deal with novel mental objects and produce new (mental)
actions - e.g. new explanations, comparisons, classifications
   iia) ideally, understand real-world language - be capable of absorbing
and employing new concepts by coming up with new references
   iib) attain real-world vision - be capable of perceiving new objects,
or old objects in radically new positions

C) Creative vs rational/formulaic reasoning - come up with new solutions
to problems, i.e. classic divergent thinking - e.g. how many new uses can
you devise for a brick, or how many new objects can you fit in a hole?

IOW, is there anything that distinguishes AGI that does NOT involve the
capacity to deal, both proactively and reactively, physically and
mentally, with NEW/NOVEL OBJECTS - which, in one form or another,
classifies as creativity?

That isn't my personal, whimsical interpretation - that, I suggest, is
simply irrefutable truth. If you or anyone else can produce an
example/application of AGI that is simply a "scaled-up" version of narrow
AI and does *not* require dealing with novelty, be my guest.

What is easy to produce is endless examples of AGI-ers trying to *avoid*
the challenge of creativity/novelty/AGI by claiming that chess or Watson
or some other obvious form of narrow AI deserves to be considered AGI.
And I can guarantee, for example, that your 1000 pages will have no
proof-of-concept explanation of how your project will deal with novelty -
and that OpenCog also still has no such explanation.


On 27 November 2013 20:30, Piaget Modeler <[email protected]> wrote:

> Everyone seems to be appropriating the definition of AGI for himself /
> herself.
> "This is what AGI means." rather than "This is what AGI means to me."
>
> Same goes for the definition of intelligence.
>
> Omoshiroi.
>
> ~PM
>
> ------------------------------
> Date: Wed, 27 Nov 2013 19:02:25 +0000
> Subject: Re: [agi] AI Complete
> From: [email protected]
> To: [email protected]
>
>
> P.S. I should add that AGI-ers will find my remarks difficult because
> there is no established culture of creativity, as opposed to rationality.
> As the recent thread on Computational Creativity showed, there are myriad
> definitions of creativity/intelligence/divergent vs convergent thinking.
>  So I need to lay out my thinking in a systematic way to make it clear.
> Nevertheless, the truth is as stark as I am saying - AGI-ers don't
> understand that AGI is about producing new courses of action, not old ones
> (or "new" iterations of old formulae) - and are doing totally the opposite
> of what they should be doing (which is why the field has always been totally
> stuck).
>
>
> On 27 November 2013 18:53, tintner michael <[email protected]> wrote:
>
> Steve:They programmed and trained on one corpus, and then competed on
> another corpus that none of the competitors had previously seen. Each
> corpus was carefully analyzed by hand before turning the computers loose on
> it, so scoring was simply a measure of how close the computers came to the
> hand analysis. No one came close to achieving a perfect score. This seems
> like a pretty good "measure" to me.
>
> Nope. Wrong kind and culture of intelligence. Rationality. Production of
> the old.
>
> The attempt is to find things that are variations on a given formula.
>
> AGI isn't about that. It is about creativity. Production of the new.
>
> Can the computer - well actually it will only be a robot - draw in - and
> recognize - an endless range of NEW styles of handwriting? (If that's the
> example).
>
> You can measure the old, you can't measure the new - and there are
> billions of examples existing in our culture. Nowhere is creativity
> measured, only crudely graded. How would you measure a NEW program that
> someone has just written in terms of value/intelligence? Or a new
> political/economic/business "program"? Silly question.
>
> Wherever you see people talk about measuring AGI/creativity - you will see
> people who simply don't *understand* AGI/creativity. Reread Deutsch - he is
> saying that AGI is creativity because it is still *news* within the AGI
> community.
>
>
>
>
> On 27 November 2013 18:30, Steve Richfield <[email protected]> wrote:
>
> Michael,
>
> The Hobbs paper was about one system that competed with other systems to
> analyze a corpus of real-world terrorist threat reports, complete with
> contorted and ambiguous statements, to determine particular things. The
> Hobbs system was SECOND best among the contestants.
>
> They programmed and trained on one corpus, and then competed on another
> corpus that none of the competitors had previously seen. Each corpus was
> carefully analyzed by hand before turning the computers loose on it, so
> scoring was simply a measure of how close the computers came to the hand
> analysis. No one came close to achieving a perfect score.
>
> This seems like a pretty good "measure" to me.
>
> Given the combination of this paper and the *60 Minutes* report about the
> NSA bugging the Internet backbone, it seems inconceivable (to me) that this
> posting isn't being analyzed by a similar system.
>
> Steve
> ===================
> On Wed, Nov 27, 2013 at 2:06 AM, tintner michael <[email protected]
> > wrote:
>
> Russ:PDF:
> http://louisville.edu/speed/computer/tr/UL_CECS_02_2011.pdf/at_download/file )
> is a good read on that question.
>
>  The paper concludes:
>
> "Progress in the field of artificial intelligence requires access to well
> defined problems of measurable
> complexity."
>
>  ...and all AGI problems, including language use and vision, are
> ILL-defined, creative and not measurable, as opposed to well-defined,
> rational and measurable. Think just of essays, papers and projects, which
> make up well over 50% of education as distinct from IQ, SAT, knowledge
> tests and the like - they cannot be measured, only graded.
>
> Creative/AGI intelligence is a whole different world and level of
> problemsolving/intelligence from rational/narrow AI intelligence.
> High-level as opposed to low-level intelligence.
>
> (At least this paper has a few glimmers of the breadth of human
> problemsolving rather than being purely mathematical/logical).
>
>
>
> On 27 November 2013 04:05, Russ Hurlbut <[email protected]> wrote:
>
> It is good practice to find truth in statements such as these before
> dismissing them. This often requires adopting one or more contexts.
>
> In this case, if one assumes a traditional definition of "AI-complete" by
> extending Hobbs statement to imply actually creating an artificial
> intelligence, then anything short of AI-Complete would fall under Hobbs's
> definition of "computer science." If one chooses to apply the dual process
> theory ( http://en.wikipedia.org/wiki/Dual_process_theory#Systems ), then
> one could argue that an Expert System would fit Hobbs's definition of fast,
> computer science. Conversely, the massively parallel unconscious processing
> that humans regularly perform (e.g. in speech, vision) would require
> enormous computing resources and considerable time - even more so using
> resources available twenty years ago.
>
> Does solving syntactic ambiguity really result in creating an artificial
> intelligence? Yampolskiy's paper AI-Complete, AI-Hard, or AI-Easy:
> Classification of Problems in Artificial Intelligence (PDF:
> http://louisville.edu/speed/computer/tr/UL_CECS_02_2011.pdf/at_download/file) 
> is a good read on that question.
>
>
> On Tue, Nov 26, 2013 at 3:19 PM, Piaget Modeler <[email protected]
> > wrote:
>
> Hobbs' statement:
>
>
> *Q:  What is the difference between computer science and artificial
> intelligence? *
> *A:  In computer science you write programs to do quickly what people do
> slowly. In artificial intelligence, it is just the opposite.*
>
> In AI we don't write programs to do slowly what people do quickly. In
> Expert Systems in particular, once it is known what people do
> symbolically, an expert system often does the symbol manipulations faster
> than a person. Also, Expert Systems can perform those symbol manipulations
> 24 x 7 x 365, thereby bringing consistency, accuracy, and endurance to the
> formerly human task.
>
> This statement is clearly false.
>
> ~PM
>     *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/18488709-8cf25195> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com>
>
>
>
>
>
>
>
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a six
> hour workday. That will easily create enough new jobs to bring back full
> employment.
>
>
>
>
>


