Matt,

On 12/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- On Sat, 12/6/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
>
> > Internet AGIs are the technology of the future, and always will be. There
> > will NEVER EVER in a million years be a thinking Internet silicon
> > intelligence that will be able to solve substantial real-world problems
> > based only on what exists on the Internet. I think that my prior email was
> > pretty much a closed-form proof of that. However, there are MUCH simpler
> > methods that work TODAY, given the metadata that is presently missing from
> > the Internet.
>
> The internet has about 10^14 to 10^15 bits of knowledge as searchable text.
> AGI requires 10^17 to 10^18 bits.


This presumes that there isn't some sort of "agent" at work filtering a
particularly important type of information, such that even a googol of text
wouldn't bring us any closer. As I keep explaining, that agent is there and
working well, filtering the two things that I keep mentioning. Hence, you
are WRONG here.

> If we assume that the internet doubles every 1.5 to 2 years with Moore's
> Law, then we should have enough knowledge in 15-20 years.


Unfortunately, I won't double my own postings, and few others will double
their output. Sure, the Internet will continue to grow, but its growth
becomes linear once it is past its introductory phase, which we are, short
of an exponential growth in population, which operates on a scale of a
century or so. In short, Moore's law simply doesn't apply here, any more
than 9 women can make a baby in a month.
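For the record, your 15-20 year figure is just doubling arithmetic over your
own bit estimates; a sketch of that arithmetic (the bit counts are your
estimates, not mine) shows the figure follows, so it is the exponential
premise, not the arithmetic, that fails:

```python
import math

# Matt's estimates: the internet holds ~10^15 bits of searchable text,
# while AGI requires ~10^18 bits.
current_bits = 1e15
required_bits = 1e18

# Doublings needed to close a 1000x gap: log2(1000), about 10.
doublings = math.log2(required_bits / current_bits)

# At one doubling every 1.5 to 2 years, that is roughly 15 to 20 years.
print(doublings * 1.5, doublings * 2.0)
```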

> However, much of this new knowledge is video, so we also need to solve
> vision and speech along with language.


Which of course has been stymied by the lack of metadata - my point all
along.

> > While VERY interesting, your proposal appears to leave the following
> > important questions unanswered:
> >
> > 1.  How is it an AGI? I suppose this is a matter of definitions. It looks
> > to me more like a protocol.
>
> AGI means automating the economy so we don't have to work. It means not
> just solving the language and vision problems, but also training the
> equivalent of 10^10 humans to make money for us. After hardware costs come
> down, custom training for specialized roles will be the major expense. I
> proposed surveillance as the cheapest way for AGI to learn what we want. A
> cheaper alternative might be brain scanning, but we have not yet developed
> the technology. (It will be worth US$1 quadrillion if you can do it).
>
> Or another way to answer your question, AGI is a lot of dumb specialists
> plus an infrastructure to route messages to the right experts.
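A toy sketch of that "dumb specialists plus routing" idea - with made-up
specialist names and keyword lists, purely for illustration - could be as
simple as scoring each specialist by keyword overlap:

```python
# Illustrative specialists, each a "dumb expert" tagged with keywords.
SPECIALISTS = {
    "astronomy": {"planet", "jupiter", "orbit", "star"},
    "medicine": {"drug", "cure", "patient", "illness"},
    "finance": {"money", "price", "market", "cost"},
}

def route(message: str) -> str:
    """Forward the message to the specialist with the most keyword overlap."""
    words = set(message.lower().split())
    return max(SPECIALISTS, key=lambda name: len(SPECIALISTS[name] & words))

print(route("what is the largest planet"))  # astronomy
```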


I suspect that your definition here is unique. Perhaps others on this forum
would like to proclaim which of us is right/wrong. I thought that the
definition more or less included an intelligent *computer*.

> > 2.  As I explained earlier on this thread, all human-human languages have
> > severe semantic limitations, such that (applying this to your proposal),
> > only very rarely will there ever exist an answer that PRECISELY answers a
> > question, so some sort of "acceptable error" must go into the equation. In
> > the example you used in your paper, Jupiter is NOT the largest planet that
> > is known, as the astronomers have identified larger planets in other solar
> > systems. There may be a good solution to this, e.g. provide the 3 best
> > answers that are semantically disjoint.
>
> People communicate in natural language 100 to 1000 times faster than any
> artificial language, in spite of its supposed limitations. Remember that the
> limiting cost is transferring knowledge from human brains to AGI, 10^17 to
> 10^18 bits at 2 bits per second per person.
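For scale, your own numbers (including the 10^10 population figure you used
above) imply a surprisingly short transfer time - a rough sketch:

```python
# Matt's figures: ~10^18 bits must move from human brains to AGI,
# at 2 bits per second per person, across ~10^10 people.
bits_needed = 1e18
rate_per_person = 2.0   # bits per second
people = 1e10

seconds = bits_needed / (rate_per_person * people)
years = seconds / (365 * 24 * 3600)
print(years)  # roughly 1.6 years of continuous, nonstop communication
```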


Unfortunately, when societal or perceptual filters are involved, there will
remain HUGE holes in even an infinite body of data. Of course, our society
has its problems precisely because of those holes, so more data doesn't
necessarily get you any further.

> As for Jupiter, any question you ask is going to get more than one answer.
> This is not a new problem.
> http://www.google.com/search?q=what+is+the+largest+planet%3F
>
> In my proposal, peers compete for reputation and have a financial incentive
> to provide useful information to avoid being blocked or ignored in an
> economy where information has negative value.


Great! At least that way, I know that the things I see will be good
Christian content.

> This is why it is important for an AGI protocol to provide for secure
> authentication.
>
> > 3.  Your paper addresses question answering, which as I have explained
> > here in the past, is a much lower form of art than is problem solving, where
> > you simply state an unsatisfactory situation and let the computer figure out
> > why things are as they are and how to improve them.
>
> Problem solving pre-dates AGI by decades. We know how to solve problems in
> many narrow domains. The problem I address is finding the right experts.


Hmmm, an even higher form, but will it work? In my experience solving a few
cases of supposedly "incurable" illnesses, where I needed expert help, the
experts I needed and found had VERY narrow expertise. In one case, the cure
seemed obvious - but would it kill the patient? The ONLY apparent "expert"
I could find was a doctor who had lost his license by giving people the
same sort of stuff - and only killing one of them. In this game of "you bet
your life", I had to decide on the basis of THIS guy's words. If ever
anyone would get "filtered out" on the basis of poor credentials, this guy
was IT. Anyway, he said that my cure wouldn't work, but that the drug
wouldn't hurt the patient. The cure worked.

You seem to be making the SAME newbie AI error that others here keep
accusing me of making, namely, making over-broad claims and extrapolations
from a basically sound core concept. You have an easily testable concept
that could be implemented on a USENET group with clever client software,
or alternatively, with less ability to scale, on a web site with the
postings all tucked away in a giant knowledge base. In the beginning, you
could announce a very limited domain and crunch the data with string
searches just to test the user interfaces, etc.
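That "crunch the data with string searches" stage really is trivial to
stand up - a minimal sketch, with illustrative postings and no claim to
being your actual design:

```python
# A toy "giant knowledge base" queried by plain string matching,
# just enough to exercise a posting/answer loop. Postings are invented.
postings = [
    "Jupiter is the largest planet in our solar system.",
    "Larger planets have been found around other stars.",
    "Moore's law describes the doubling of transistor counts.",
]

def search(query: str) -> list[str]:
    """Return postings containing every word of the query (case-insensitive)."""
    words = query.lower().split()
    return [p for p in postings if all(w in p.lower() for w in words)]

print(search("largest planet"))
```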

I like your core concept, but your extrapolations are based on some really
flaky assumptions, as noted above. Success can only come by "facing your
demons", slaying the ones you can, and finding ways of living with the ones
you can't slay.

Steve



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
