My perspective on grounding is partially summarized here

www.goertzel.org/papers/PostEmbodiedAI_June7.htm

-- Ben G

On Mon, Aug 4, 2008 at 6:33 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> My opinion on grounding is that it depends on the application. I have
> argued in http://cs.fit.edu/~mmahoney/compression/rationale.html that
> text compression is at least as hard as passing the Turing test.
> (Predicting text requires lots of real-world knowledge). This is
> (non-grounded) AI as defined by Turing. However, some may argue this is not
> AGI because a language model cannot see or control a robot.
>
> This is an engineering question. Is it easier to teach a system that the
> sky is blue using pure text based I/O or by adding vision and embodiment? If
> it is text based, is it easier to use statistics (e.g. "blue sky" returns
> more Google hits than "red sky"), or is it easier to explicitly encode the
> knowledge?
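A minimal sketch of the statistical option: compare phrase frequencies and let the more common phrase win. A real system would query a web-scale index such as Google; the toy corpus and counts below are purely illustrative.

```python
# Toy sketch of learning "the sky is blue" from text statistics alone:
# pick the color whose phrase occurs most often in a corpus. A real
# system would use web-scale counts; this corpus is illustrative.
from collections import Counter

corpus = (
    "the blue sky stretched over the sea . "
    "a clear blue sky greeted us . "
    "the red sky at night faded quickly ."
)

candidates = ["blue sky", "red sky", "green sky"]
counts = Counter({p: corpus.count(p) for p in candidates})

best, hits = counts.most_common(1)[0]
print(best, hits)  # "blue sky" wins, 2 occurrences to 1
```

Even this crude count gets the right answer, which is the point of the statistical argument: much real-world knowledge is recoverable from raw text frequencies without vision or embodiment.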
>
> An engineering problem starts with a specification, not a design. Do we
> want text based AI or do we want a robot? What is the problem we are trying
> to solve?
>
> AGI has the potential to replace nearly all human labor worldwide, which is
> valued at US $2 to $5 quadrillion over the next 30 years. (Currently $66
> trillion per year and increasing 5% annually). We should expect the cost of
> AGI to be of this magnitude. It is what we are willing to pay to get it now.
> Note this is a different problem than building artificial human brains. We
> do need to solve the language and vision problems in order for machines to
> do certain types of work that currently must be done by humans. This does
> not mean building robots that look or act like humans.
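The $2 to $5 quadrillion figure follows directly from the stated assumptions ($66 trillion per year, growing 5% annually, summed over 30 years); a quick check of the arithmetic:

```python
# Back-of-envelope check of the labor-value estimate: $66 trillion/year
# of human labor, growing 5% annually, summed over the next 30 years.
annual = 66e12          # current world labor value, USD/year
growth = 1.05           # 5% annual increase
years = 30

total = sum(annual * growth**t for t in range(years))
print(f"${total/1e15:.1f} quadrillion")  # about $4.4 quadrillion
```

The sum lands around $4.4 quadrillion, inside the quoted $2 to $5 quadrillion range.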
>
> I believe that the problem can be solved using faster hardware and
> otherwise mature technology, consisting of lots of narrow-AI specialists, a
> protocol that links them together (routing messages to the right experts),
> and an economy that rewards the most useful experts and routers. In
> http://www.mattmahoney.net/agi.html I proposed a P2P protocol called
> competitive message routing (CMR). In CMR, each peer understands a subset of
> natural language relevant to its domain of expertise and learns about
> experts in related topics in a hostile environment. Peers are administered
> by humans who have an incentive to gain the trust of other peers and provide
> useful services in exchange for not having their outgoing messages blocked.
> I believe the resulting network will be a powerful and useful intelligence
> that nobody would mistake for human. It would be intelligent in the sense
> that a calculator or Google is intelligent, but far more useful.
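CMR as proposed at the link above is a full P2P protocol; the core routing-plus-reward loop can be sketched in a single process. All names and the scoring rule below are invented for illustration, not taken from the actual protocol.

```python
# Minimal single-process sketch of competitive message routing: each
# expert advertises keywords, a router forwards a query to the expert
# whose vocabulary best matches it, and senders reward experts whose
# answers they accept. Names and scoring are illustrative only.

class Expert:
    def __init__(self, name, keywords, answer):
        self.name = name
        self.keywords = set(keywords)
        self.reputation = 0          # accumulated reward from senders
        self.answer = answer

    def score(self, query_words):
        # keyword overlap with the query; reputation breaks ties,
        # so useful experts attract more future traffic
        return (len(self.keywords & query_words), self.reputation)

def route(query, experts):
    words = set(query.lower().split())
    return max(experts, key=lambda e: e.score(words))

experts = [
    Expert("weather", ["sky", "rain", "blue"], "the sky is blue"),
    Expert("math", ["prime", "factor", "sum"], "42"),
]

best = route("what color is the sky", experts)
print(best.name, "->", best.answer)
best.reputation += 1                 # sender accepts; expert is rewarded
```

The reputation counter stands in for the economic incentive: experts that give accepted answers are preferred by routers, while useless ones are starved of messages.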
>
> An alternative approach is recursive self improvement (RSI). If humans can
> produce superhuman AGI, then so can those agents, launching a singularity. I
> don't believe that will happen, because I don't believe it is possible,
> under any reasonable definition of intelligence, for an agent to produce
> an agent more intelligent than itself. We currently lack
> software and mathematical models of RSI, even in very restricted or simple
> environments. There are no known problems that are provably hard to solve
> but easy to verify (e.g. factoring, NP-complete problems, or cryptographic
> puzzles) that an AI could use to test its children. Furthermore, humans and
> other animals do not recognize higher intelligence than themselves, for
> example, we can measure an IQ of 200 in children but not adults. However, I
> have no proof that RSI is impossible either.
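The solve/verify asymmetry described above can be illustrated with factoring: checking a claimed factorization is a single multiplication, while finding the factors is believed (though, as noted, not proven) to be hard. A toy version:

```python
# Illustration of "easy to verify, believed hard to solve": factoring.
# Verifying a claimed factorization is one multiplication; finding the
# factors by trial division takes time growing with the smallest factor.

def verify(n, p, q):
    """O(1) check that p and q are a nontrivial factorization of n."""
    return p * q == n and 1 < p and 1 < q

def factor(n):
    """Naive trial division -- stands in for the (believed) hard direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 10403            # 101 * 103, a small semiprime
p, q = factor(n)
assert verify(n, p, q)
print(p, q)          # 101 103
```

If factoring (or any such problem) were *provably* hard, a parent AI could pose ever-larger instances to its children and cheaply verify the answers; absent such a proof, there is no known reliable yardstick for the test.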
>
> Even without RSI, CMR is not without risks. Agents compete for resources
> and could still make modified copies of themselves. This is an evolutionary
> environment where fitness does not equal usefulness or friendliness. It
> could still produce a singularity in the sense of greatly accelerated
> evolution where humans are no longer the dominant intelligence. We would
> observe the CMR protocol quickly changing from natural language to something
> incomprehensible, and we would no longer know what our computers were doing.
> I don't have a good solution to this problem.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson


