On Mon, Aug 25, 2008 at 11:09 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> --- On Sun, 8/24/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
>> What is the point of building general intelligence if all it does is
>> take the future from us and waste it on whatever happens to act as
>> its goal?
>
> Indeed. Personally, I have no desire to build anything smarter
> than humans. That's a deal with the devil, so to speak, and one
> I believe most ordinary folks would be afraid to endorse, especially
> if they were made aware of the risks. The Singularity is not an
> inevitability, if we demand approaches that are safe in principle.
> And self-modifying approaches are not safe, assuming that
> they could work.

But what is safe, and how do we improve safety? That is a complex goal
for a complex environment, and naturally any solution to this goal is
going to be very intelligent. An arbitrary intelligence is not safe
(it is fatal, really), but whatever is safe is also intelligent.


> I'm all for limiting the intelligence of our creations before they ever get
> to the point that they can build their own or modify themselves. I'm against
> self-modifying approaches, largely because I don't believe it's possible
> to constrain their actions in the way Eliezer hopes. Iterative, recursive
> processes are generally emergent and unpredictable (the interesting ones,
> anyway). Not sure what kind of guarantees you could make for such systems
> in light of such emergent unpredictability.

There is no law that makes large computations less lawful than small
computations, if it is in the nature of computation to preserve
certain invariants. A computation that multiplies two huge numbers
isn't inherently more unpredictable than a computation that multiplies
two small numbers. If device A is worse than device B at carrying out
action X, device A is worse for the job, period. The fact that you
call device A more intelligent than B is irrelevant. Being a more
complicated computation is a consequence, not the cause, of being
*better* at carrying out the task. You don't build a *more
intelligent* machine, hope that it will be better, and then find out
that it's actually very good at being fatal. Instead, you build a
machine that will be better, and as a side effect it turns out to be
more intelligent, or more complicated.
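
To make the multiplication analogy concrete, here is a minimal Python
sketch (my own illustration, not something from this thread): the same
modular-arithmetic invariant holds whether the operands are one digit
or thousands of digits, so the sheer size of a computation adds no
unpredictability by itself.

    import random

    def invariant_holds(a, b, m=2**61 - 1):
        # The identity (a*b) % m == ((a%m) * (b%m)) % m holds for any
        # integers, no matter how large the multiplication is.
        return (a * b) % m == ((a % m) * (b % m)) % m

    # Small computation: two one-digit numbers.
    assert invariant_holds(3, 7)

    # Large computation: two numbers of roughly 3000 digits each.
    big_a = random.getrandbits(10_000)
    big_b = random.getrandbits(10_000)
    assert invariant_holds(big_a, big_b)

    print("The invariant is preserved at every scale.")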

Likewise, self-modification is not an end in itself, but a means to
implement the complexity and efficiency required for better
performance. The complexity that gets accumulated this way is not
accidental, and it doesn't make the AI less reliable, because it's
being implemented precisely for the purpose of making the AI better;
if it's expected to make it worse, it's not done. You have an
intuitive expectation that doing Z will make the AI uncontrollable,
which will lead to a bad outcome, and so you point out that a design
that suggests doing Z will turn out badly. But the answer is that the
AI itself will check whether Z is expected to lead to a good outcome
before making the decision to implement Z.
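
As an illustration of that last point, here is a minimal, hypothetical
sketch (the names Modification, expected_outcome, and the threshold
are my own, not anything proposed in this thread): a candidate
self-modification Z is applied only if it is expected to lead to a
good outcome under the system's own goal.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Modification:
        # A candidate self-modification Z, with a predictor for the
        # expected goodness of the outcome if Z is implemented.
        name: str
        expected_outcome: Callable[[], float]

    def consider(z: Modification, threshold: float = 0.0) -> bool:
        # Implement Z only if it is expected to lead to a good outcome;
        # otherwise the modification is simply not done.
        expected = z.expected_outcome()
        verdict = expected > threshold
        print(f"{z.name}: expected outcome {expected:+.2f}, "
              f"{'implementing' if verdict else 'rejected'}")
        return verdict

    # A change expected to make the AI better is accepted; a change
    # expected to make it worse (or uncontrollable) is never applied.
    consider(Modification("Z_better", lambda: 1.5))
    consider(Modification("Z_worse", lambda: -2.0))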


> I don't deny the possibility of disaster. But my stance is, if the only approach
> you have to mitigate disaster is being able to control the AI itself, well, the
> game is over before you even start it. It seems profoundly naive to me that
> anyone could, even in principle, guarantee a super-intelligent AI to "renormalize",
> in whatever sense that means. Then you have the difference between theory
> and practice... just forget it. Why would anyone want to gamble on that?
>

This remark makes my note, that the field of AI actually did
accomplish something over the last 50 years, not so minor after all.
Again you make an argument from ignorance: I do not know how to do it,
nobody knows how to do it, therefore it cannot be done. Argue from
knowledge, not from ignorance. If you know the path, follow it,
describe it. If you know that the path has a certain property, show
it. If you know that a class of algorithms doesn't find a path, say
that these algorithms won't give the answer. But if you are lost, if
your map is blank, don't assert that the territory is blank also, for
you don't know.


>> (answering to the article)
>>
>> Intelligence was created by a blind idiot evolutionary process that
>> has no foresight and no intelligence. Of course it can be designed.
>> Intelligence is all that evolution is, but immensely faster, better,
>> and more flexible.
>
> In certain domains, this is true (and AI has historically been about
> limiting research to those domains). But intelligence, as we know it,
> is limited in ways that evolution is not. Intelligence is limited to reasoning
> about causality, a causality we structure by modeling the world around us
> in such a way that we can predict it. Models, however, are not perfect.
> Evolution does not suffer from this limitation, because as you say, it has
> no intelligence. Whatever works, works.

Human intelligence is limited, and indeed this argument might be
valid. Chimps, for example, are somewhat intelligent, immensely
intelligent compared to evolution in fact, but they have no hope of
ever implementing intelligence in silicon; all their efforts are
restricted by their context, and they can't break out of it and
improve on it as we can.

Causal models are not perfect, you say. But perfection is causal:
physical laws are the most causal phenomenon there is. All the causal
rules that we employ in our approximate models of the environment are
not strictly causal; they have exceptions. Evolution has the advantage
of optimizing with the whole flow of the environment, but evolution
doesn't have any model of this environment; the counterpart of human
models is absent in evolution. What it has is a simple regularity in
the environment, natural selection. With all their imperfections,
human models of the environment are immensely more precise than this
regularity, which relies on natural repetition of context. Evolution
doesn't have a perfect model; it has an exceedingly simplistic model,
so simple in fact that it managed to *emerge* by chance. Humans, with
their admittedly limited intelligence, on the other hand, already
manage to create models far surpassing their own intelligence in
ability to model the environment (computer simulations and
mathematical models).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

