Matt,

FINALLY, someone here is saying some of the same things that I have been
saying. Being in general agreement with your posting, I will make some
comments...

On 9/4/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- On Thu, 9/4/08, Valentina Poletti <[EMAIL PROTECTED]> wrote:
> >People like Ben argue that the concept/engineering aspect of intelligence
> >is independent of the type of environment. That is, given you understand
> >how to make it in a virtual environment, you can then transpose that
> >concept into a real environment more safely.


This is probably a good starting point, to avoid beating the world up during
the debugging process.

>
> >Some other people, on the other hand, believe intelligence is a property
> >of humans only.


Only people who haven't had a pet believe such things. I have seen too many
animals find clever solutions to problems.

> >So you have to simulate every detail about humans to get
> >that intelligence. I'd say that among the two approaches the first one
> >(Ben's) is safer and more realistic.
>
> The issue is not what is intelligence, but what do you want to create? In
> order for machines to do more work for us, they may need language and
> vision, which we associate with human intelligence.


Not necessarily, as even text-interfaced knowledge engines can handily
outperform humans in many complex problem-solving tasks. The still-open
question is: what would best do what we need done but can NOT presently do
(given computers, machinery, etc.)? So far, the talk here on this forum
has been about what we could do and how we might do it, rather than about
what we NEED done.

Right now, we NEED resources to work productively in the directions that we
have been discussing, yet the combined intelligence of those here on this
forum is apparently unable to solve even this seemingly trivial problem.
Perhaps something more than raw intelligence is needed?

> But building artificial humans is not necessarily useful. We already know
> how to create humans, and we are doing so at an unsustainable rate.
>
> I suggest that instead of the imitation game (Turing test) for AI, we
> should use a preference test. If you prefer to talk to a machine vs. a
> human, then the machine passes the test.


YES, exactly: what is it that our AGI can do that we need done but can NOT
presently do?

> Prediction is central to intelligence. If you can predict a text stream,
> then for any question Q and any answer A, you can compute the probability
> distribution P(A|Q) = P(QA)/P(Q). This passes the Turing test. More
> importantly, it allows you to output max_A P(QA), the most likely answer
> from a group of humans. This passes the preference test because a group is
> usually more accurate than any individual member. (It may fail a Turing test
> for giving too few wrong answers, a problem Turing was aware of in 1950 when
> he gave an example of a computer incorrectly answering an arithmetic
> problem).
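
To make that arithmetic concrete, here is a toy Python sketch (all the names
are mine, and log_p is only a placeholder for whatever trained predictor one
actually has; a uniform byte model stands in for it here):

    def log_p(text: str) -> float:
        # Placeholder model: uniform over bytes, 8 bits per character.
        # A real sequence model would return far better estimates.
        return -8.0 * len(text)

    def log_p_answer(question: str, answer: str) -> float:
        # log2 P(A|Q) = log2 P(QA) - log2 P(Q); working in log space
        # avoids floating-point underflow on long strings.
        return log_p(question + answer) - log_p(question)

    def best_answer(question: str, candidates: list[str]) -> str:
        # argmax_A P(QA); since P(Q) is fixed, this equals argmax_A P(A|Q).
        return max(candidates, key=lambda a: log_p_answer(question, a))

    print(best_answer("What is 2 + 2? ", ["4", "5", "twenty-two"]))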


Unfortunately, this also tests the ability to incorporate the very
misunderstandings that presently limit our thinking. We need to give credit
for compression algorithms that clean up our grammar, correct our
technical errors, etc., as this can probably be done in the process of
better "compressing" the text.

> Text compression is equivalent to AI because we have already solved the
> coding problem. Given P(x) for string x, we know how to optimally and
> efficiently code x in log_2(1/P(x)) bits (e.g. arithmetic coding). Text
> compression has an advantage over the Turing or preference tests in that
> incremental progress in modeling can be measured precisely and the test
> is repeatable and verifiable.
>
> If I want to test a text compressor, it is important to use real data
> (human generated text) rather than simulated data, i.e. text generated by a
> program. Otherwise, I know there is a concise code for the input data, which
> is the program that generated it. When you don't understand the source
> distribution (i.e. the human brain), the problem is much harder, and you
> have a legitimate test.


Wouldn't it be better to understand the problem domain while ignoring human
(mis)understandings? After all, if humans need an AGI to work in a difficult
domain, that domain is probably made even more difficult by incorporating
human misunderstandings.

Of course, humans state human problems, so it is important to be able to
communicate semantically, but it is also useful to separate the
communication from the problems themselves.

> I understand that Ben is developing AI for virtual worlds. This might
> produce interesting results, but I wouldn't call it AGI. The value of AGI is
> on the order of US $1 quadrillion. It is a global economic system running on
> a smarter internet. I believe that any attempt to develop AGI on a budget of
> $1 million or $1 billion or $1 trillion is just wishful thinking.


I think that a billion or so, divided up into small pieces to fund EVERY
disparate approach to see where the "low-hanging fruit" is, would go a LONG
way in guiding subsequent billions. I doubt that it would take a trillion to
succeed.

As for the value, the really BIG money is in replicating specific people
while collecting much of their fortunes for the service, and in the process
enhancing their operation. Given that most people would gladly sign an IOU
to continue living, this could actually be worth more than the entire
present value of the earth and all it contains. There simply is no
comparable investment opportunity. Sure, this is a tough nut to crack, but
AGI is certainly a strong step in the right direction. Hence, I see AGI as a
stepping stone and NOT as the final goal that so many people here take it to
be.

Steve Richfield


