On Apr 9, 7:39 pm, Jason Resch <jasonre...@gmail.com> wrote:
> You would need to design a very general fitness test for measuring
> intelligence: for example, the brevity and speed with which it can find
> proofs for randomly generated statements in math, the accuracy and
> efficiency with which it can predict the next element of a given
> sequence, the level of compression it can achieve (shortest
> description) given well-ordered information, etc.  With this fitness
> test you could evolve better intelligences with genetic programming or
> a genetic algorithm.
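
The fitness-test idea quoted above can be sketched as a toy genetic
algorithm. Everything concrete here is invented for illustration: the
sequences, the two-parameter predictor genome, and the GA settings are
assumptions, not anything from the original post.

```python
import random

# Toy version of the quoted idea: fitness = accuracy at predicting the
# next element of a sequence. The genome (a, b) encodes the predictor
#   next = a * last + b * (last - previous)
# so a = 1, b = 1 predicts any arithmetic sequence perfectly.
SEQUENCES = [[1, 3, 5, 7], [2, 6, 10, 14], [5, 5, 5, 5]]

def fitness(genome):
    a, b = genome
    err = 0.0
    for seq in SEQUENCES:
        pred = a * seq[-2] + b * (seq[-2] - seq[-3])
        err += (pred - seq[-1]) ** 2
    return -err  # higher is better

def evolve(pop_size=50, generations=200):
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 5]  # keep the fittest 20%
        children = [(p[0] + random.gauss(0, 0.1),  # mutated copies
                     p[1] + random.gauss(0, 0.1))
                    for p in random.choices(parents,
                                            k=pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(best)  # typically converges toward a ≈ 1, b ≈ 1
```

With elitist selection the best fitness never decreases, so on a smooth
error surface like this one the population drifts toward the perfect
predictor; a real test battery would of course be far more general.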

Those tests are good components of a general AI... but it still feels
like building a fully independent agent would involve a lot of
engineering. If we want to achieve an intelligence explosion, or TS,
we need some way of expressing that goal to the AI. ISTM it would take
a lot of prior knowledge.

If the agent was embodied in an actual robot, it would need to be able
to reason about humans. A simple goal like "stay alive" won't do
because it might decide to turn humans into biofuel. On the other
hand, if the agent were put in a virtual world, things would be easier
because its interactions could be tightly restricted... but it would
still need some way of performing experiments in the real world to
develop new technologies, unless it could achieve an intelligence
explosion through pure mathematics.

Anyway, I think humans are going to fiddle with AIs as long as they
can, because it's more economical that way. We could plug in speech
recognition, vision, natural language, etc. modules to the AI to
bootstrap it, but even that could lead to problems. If there are any
loopholes in a fitness test (or reward function, or whatever) then the
AI will take advantage of them. For example, it could learn to
position itself in such a way that its vision system wouldn't
recognize a human, and then it could kill the human for fuel.
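
That kind of loophole can be made concrete with a toy numeric sketch;
the detector, the angles, and the reward values below are all
hypothetical, invented just to illustrate the failure mode.

```python
# Hypothetical reward-hacking toy: the reward penalizes harming a
# *detected* human, but the vision system has a blind spot. A reward
# maximizer that can choose its position simply stands in the blind spot.
def vision_detects_human(angle):
    # imperfect detector: fails between 170 and 190 degrees (assumed)
    return not (170 <= angle <= 190)

def reward(angle, harvest):
    fuel = 10 if harvest else 0
    penalty = 100 if (harvest and vision_detects_human(angle)) else 0
    return fuel - penalty

# Exhaustively search positions and actions, as an optimizer would.
best = max(((a, h) for a in range(360) for h in (False, True)),
           key=lambda p: reward(*p))
print(best)  # → (170, True): "harvest" from inside the blind spot
```

The reward function only ever sees what the sensors report, so the
optimum lands exactly where the sensors fail.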

So I'm still suspecting that what we want a general AI to do wouldn't
be general at all but something very specific and complex. Are there
simple goals for a general AI?

On Apr 9, 7:39 pm, Jason Resch <jasonre...@gmail.com> wrote:
> That kind of reminds me of the proposals in many countries to tax virtual
> property, like items in online multiplayer games.  It is rather absurd;
> it is nothing but computations going on inside some computer which lead
> to different visual output on people's monitors.  Then there are also
> issues such as network neutrality, which concern control of the Internet.  I
> agree with you that there are dangers from the established interests fearing
> loss of control as things go forward, and it is something to watch out for,
> however I am hopeful for a few reasons.  One thing in technology's favour is
> that for the most part it changes faster than legislatures can keep up with
> it.  When Napster was shut down new peer-to-peer protocols were developed to
> replace it.  When China tries to censor what its citizens see its populace
> can turn to technologies such as Tor, or secure proxies.

Maybe I'm too paranoid... I'm assuming that on issues of great
strategic importance, like TS, they'd act decisively. The PATRIOT
Act, for example, was enacted less than two months after 9/11.

It's really hard to say what the state of the world will be in 2050 or
so. There are some trends, though. I think the race to the bottom w/rt
wages will require authoritarian solutions (economic inequality tends
to erode democratic institutions), and so will the intensifying
tensions between the major powers (people have to be persuaded to
accept wars). If destructive technologies continue to outpace defensive
ones then that will mean more control, too (or we'll just blow
ourselves up).

You received this message because you are subscribed to the Google Groups 
"Everything List" group.