On Wed, Aug 27, 2014 at 6:31 AM, Telmo Menezes <[email protected]>
wrote:

>
>
> On Wed, Aug 27, 2014 at 1:14 AM, Terren Suydam <[email protected]>
> wrote:
>
>> Hi Telmo,
>>
>> I think if it were as simple as you make it seem, relative to what we
>> have today, we'd have engineered systems like that already.
>>
>
> It wasn't my intention to make it look simple. What I claim is that we
> already have a treasure trove of very interesting algorithms. None of them
> is AGI, but what they can do becomes more impressive with more computing
> power and access to data.
>

I agree that they can be made to do impressive things. Watson definitely
impressed me.

> Take Google Translate. It's far from perfect, but way ahead of anything we
> had a decade ago. As far as I can tell, this was achieved with algorithms
> that had been known for a long time, but that now can operate on the
> gigantic dataset and computer farm available to google.
>
> Imagine what a simple minimax search tree could do with immense computing
> power and data access.
>

The space of possibilities quickly scales beyond the wildest imaginings of
computing power. Chess AIs are already better than humans because they
more or less implement this approach, and it turns out you "only" need to
compute a few hundred million positions per second to do that. Obviously
that's a toy environment... the possibilities inherent in the real world
aren't even enumerable according to some predefined ontology (i.e., the
kind of ontology you would need in order to specify a minimax-type AI).
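
Just to make "this approach" concrete, here is a minimal minimax sketch
(purely illustrative; the game interface with legal_moves/apply/is_terminal/
evaluate is hypothetical, and real engines add alpha-beta pruning, move
ordering, and heavily tuned evaluation on top of this):

    def minimax(game, depth, maximizing):
        # Stop at a terminal position or at the search horizon and score it.
        if depth == 0 or game.is_terminal():
            return game.evaluate()
        if maximizing:
            best = float("-inf")
            for move in game.legal_moves():
                best = max(best, minimax(game.apply(move), depth - 1, False))
            return best
        best = float("inf")
        for move in game.legal_moves():
            best = min(best, minimax(game.apply(move), depth - 1, True))
        return best

The branching factor is what kills you: chess offers roughly 35 legal moves
per position and the tree still explodes exponentially, and the real world
doesn't even give you a fixed list of moves to branch over.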


>
>
>>  You're talking about an AI that arrives at novel solutions, which
>> requires the ability to invent/simulate/act on new models in new domains
>> (AGI).
>>
>
> Evolutionary computation already achieves novelty and invention, to a
> degree. I concur that it is still not AGI. But it could already be a
> threat, given enough computational resources.
>

AGI is a threat because its utility function would necessarily be
sufficiently "meta" that it could create novel sub-goals. We would not
necessarily be able to control whether it chose goals that were compatible
with ours.

It comes down to how the utility function is defined. For the Google Car, the
utility function probably tests actions along the lines of "get from A to B
safely, as quickly as possible". If a Google Car were engineered with
evolutionary methods to generate novel solutions (overkill, but bear with
me), the novelty it generates would still be contained within the utility
function. It might generate a novel route that conventional map algorithms
wouldn't find, but it would be impossible for it to find a solution like
"helicopter the car past this traffic jam".


>
>
>> I'm not saying this is impossible, in fact I see this as inevitable on a
>> longer timescale. I'm saying that I doubt that the military is committing
>> any significant resources into that kind of research when easier approaches
>> are much more likely to bear fruit... but I really have no idea what the
>> military is researching, so it's just a hunch.
>>
>
> Why does it matter if it's the military that does this? To a sufficiently
> advanced AI, we are just monkeys throwing rocks at each other. It will
> surely figure out a way to take control of our resources, including
> weaponry.
>
>

I think the thread started with a focus on killing machines. But your point
is taken.


>
>> What I would wager on is that the military is developing drones along the
>> same lines as what Google has achieved with its self-driving cars. Highly
>> competent, autonomous drones that excel in very specific environments. The
>> utility functions involved would be specified explicitly in terms of
>> "hard-coded" representations of stimuli. For AGI they would need to be
>> equipped to invent new models of the world, articulate those models with
>> respect to self and with respect to existing goal structures, simulate
>> them, and act on them. I think we are a long way from those kinds of AIs.
>> The only researcher I see making inroads towards that kind of AI is Steve
>> Grand.
>>
>
> But again, a reasonable fear is that a sufficiently powerful conventional
> AI is already a threat (due to increasing autonomy and data access + our
> possible inability to cover all the loopholes in utility functions).
>
>
The threats posed by AIs are contained within the scope of their
utility functions. As it turns out, the moment you widen the utility
function beyond a very narrow (and specifiable) domain, AI gets much, much
harder.

Terren


> Cheers
> Telmo.
>
>
