On Wed, Aug 27, 2014 at 8:16 PM, Terren Suydam <[email protected]>
wrote:

>
> On Wed, Aug 27, 2014 at 12:21 PM, Telmo Menezes <[email protected]>
> wrote:
>
>>
>> On Wed, Aug 27, 2014 at 1:53 PM, Terren Suydam <[email protected]>
>> wrote:
>>
>>>
>>> The space of possibilities quickly scales beyond the wildest imaginings
>>> of computing power. Chess AIs are already better than humans, because they
>>> more or less implement this approach, and it turns out you "only" need to
>>> compute a few hundred million positions per second to do that. Obviously
>>> that's a toy environment... the possibilities inherent in the real world
>>> may not even be enumerable according to some predefined ontology (i.e. the
>>> kind that would be required to specify a minimax-type AI).
>>>
>>
>> Ok, but of course minimax was also a toy example. Several algorithms that
>> already exist could be combined: deep learning, Bayesian belief networks,
>> genetic programming and so on. A clever combination of algorithms plus the
>> still ongoing exponential growth in available computational power could
>> soon unleash something impressive. Of course I am just challenging your
>> intuition, mostly because it's a fun topic :) Who knows who's right...
>>
>
> I think these are overlapping intuitions. On one hand, there is the idea
> that given enough computing/data resources, something can be created that -
> regardless of how limited its domain of operation - is still a threat in
> unexpected ways. On the other hand is the idea that AIs which pose real
> threats - threats we are not capable of stopping - require a quantum leap
> forward in cognitive flexibility, if you will.
>

Agreed.
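
As an aside, the brute-force search we both called a toy example is compact
enough to sketch. Below is a minimal, self-contained Python illustration (my
own toy, not anything from this thread), applied to a game small enough to
solve outright: two players alternately add 1, 2 or 3 to a running total,
and whoever reaches exactly 21 wins.

from functools import lru_cache

@lru_cache(maxsize=None)          # memoise shared subgames
def minimax(total, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if total == 21:
        # The player who just moved reached 21 and won the game.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2, 3) if total + m <= 21]
    scores = [minimax(total + m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(0, True))  # 1: the first player forces a win (total = 1 mod 4)

A chess engine is the same recursion with a vastly larger move set and a
static evaluation function in place of the exact win test, which is where
the hundreds of millions of positions per second come in.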


>
> Although my POV is aligned with the latter intuition, I actually agree
> with the former, but consider the kinds of threats involved to be bounded
> in ways we can, in principle, control, although in practice they could do
> damage so quickly that we couldn't prevent it.
>
> Perhaps my idea of intelligence is too limited. I am assuming that
> something capable of being a real threat will be able to generate its own
> ontologies, creatively model them in ways that build on and relate to
> existing ontologies, simulate and test those new models, and generate
> value judgments using these new models with respect to overarching utility
> function(s). This is suspiciously similar to human intelligence.
>

I wonder. What you describe seems like the way of thinking of a person
trained in the scientific method (a very recent discovery in human
history). Is this raw human intelligence? I suspect raw human intelligence
is more like a kludge. It is possible to create rickety structures of order
on top of that kludge, by a process we call "education".


> The difference is that as an *artificial* intelligence with a different
> embodiment and different algorithms, the models it would arrive at could
> well be strikingly different from how we see the world, with all the
> attendant problems that could pose for us, given its eventually superior
> computing power.
>

Ok.


>
>
>> Another interesting/scary scenario to think about is the possibility of a
>> self-mutating computer program proliferating under our noses until it's too
>> late (and exploiting the Internet to create a very powerful meta-computer
>> by stealing a few CPU cycles from everyone).
>>
>
> I think something like this could do a lot of damage very quickly, but by
> accident... in a similar way perhaps to the occasional meltdowns caused by
> the collective behaviors of microsecond market-making algorithms.
>

Another example is big societies designed by humans.


> I find it exceedingly unlikely that an AGI will spontaneously emerge from
> a self-mutating process like you describe. Again, if this kind of thing
> were likely, or at least not extremely unlikely, I think it would suggest
> that AGI is a lot simpler than it really is.
>

This is tricky. The Kolmogorov complexity of AGI could be relatively low --
maybe it can be expressed in 1000 lines of Lisp. But the set of programs
expressible in 1000 lines of Lisp includes some really crazy,
counter-intuitive stuff (e.g. the universal dovetailer). Genetic
programming has been shown to discover relatively short solutions that are
better than anything a human could come up with, precisely because they
are counter-intuitive.
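
That last point is easy to demonstrate in miniature. Below is a deliberately
tiny genetic-programming sketch in Python (my own construction, purely
illustrative): it evolves small expression trees toward the target
x^2 + x + 1 by random mutation and truncation selection, and the trees it
finds are often shaped in ways no human would write down.

import operator
import random

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree as nested tuples (op, left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate a tree at the point x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Lower is better: squared error against the target x^2 + x + 1."""
    return sum((evaluate(tree, x) - (x * x + x + 1)) ** 2
               for x in range(-5, 6))

def mutate(tree):
    """Self-mutation: replace a random subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth=2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness)
    survivors = population[:50]               # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))

Nothing in that loop knows what a polynomial is; it just keeps whatever
happens to score well, which is exactly why the solutions it finds owe
nothing to our intuitions.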


>
>
>>
>>
>>>
>>>
>>>>
>>>>
>>>>>  You're talking about an AI that arrives at novel solutions, which
>>>>> requires the ability to invent/simulate/act on new models in new domains
>>>>> (AGI).
>>>>>
>>>>
>>>> Evolutionary computation already achieves novelty and invention, to a
>>>> degree. I concur that it is still not AGI. But it could already be a
>>>> threat, given enough computational resources.
>>>>
>>>
>>> AGI is a threat because its utility function would necessarily be
>>> sufficiently "meta" that it could create novel sub-goals. We would not
>>> necessarily be able to control whether it chose a goal that was compatible
>>> with ours.
>>>
>>> It comes down to how the utility function is defined. For Google Car,
>>> the utility function probably tests actions along the lines of "get from A
>>> to B safely, as quickly as possible". If a Google Car is engineered with
>>> evolutionary methods to generate novel solutions (this would be overkill,
>>> but bear with me), the novelty generated is contained within the utility
>>> function. It might generate a novel route that conventional map algorithms
>>> wouldn't find, but it would be impossible for it to find a solution like
>>> "helicopter the car past this traffic jam".
>>>
>>
>> What prevents the car from transforming into a helicopter and flying is
>> not the utility function but the set of available actions. I have been
>> playing with evolutionary computation for some time now, and one thing I
>> have learned is not to trust my intuition about the real constraints
>> implied by such a set of actions.
>>
>
> I was actually talking about contracting a helicopter ride, which seems
> easier :-)  The set of actions available to an AI is limited by the way it
> models the world.  Without a capacity for intelligently expanding its world
> model, no AI is going to do anything outside of the domain it is defined
> in. Google Car won't ever think to contract a helicopter ride until either
> A) Google engineers program it to consider that as an option or B) Google
> engineers give the Car the ability to start modelling the world on its own
> terms.
>

Ok, but the Google engineers might be giving the system more freedom than
they assume.
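
To make that concrete, here is a toy of my own (nothing to do with Google's
actual software): a shortest-path planner whose action set is constrained
only by walls. The designers' assumption that the car stays off the lawn
exists nowhere in the model, so the planner cheerfully cuts across it.

from collections import deque

GRID = [
    "S.LLL.G",  # 'L': lawn the designers assumed the car would avoid
    ".#####.",  # '#': walls, the only constraint actually in the model
    ".......",  # the long way round, by road
]

def neighbours(pos):
    """The full action set: move one cell up, down, left or right."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) \
                and GRID[nr][nc] != '#':
            yield (nr, nc)

def shortest_path(start, goal):
    """Plain breadth-first search over the action set."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

path = shortest_path((0, 0), (0, 6))       # 'S' to 'G'
print(path)
print("crosses the lawn:", any(GRID[r][c] == 'L' for r, c in path))

The utility function ("get from A to B as quickly as possible") is innocent
here; the surprise lives entirely in what the action set turns out to
permit.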

Telmo.


> If B, then it could be a long time before the Car discovers what a
> helicopter is, what it's capable of, how it could procure one, etc.  The
> helicopter example is actually a bad one, because it's a solution you or I
> can easily conceive of, so it seems mundane or easy.
>
> Terren
>
