On Fri, Aug 29, 2014 at 6:23 AM, Telmo Menezes <[email protected]>
wrote:

>
>
>
> On Thu, Aug 28, 2014 at 10:05 PM, Terren Suydam <[email protected]>
> wrote:
>
>>
>> On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes <[email protected]>
>> wrote:
>>
>>>
>>>> Although my POV is aligned with the latter intuition, I actually agree
>>>> with the former, but I consider the kinds of threats involved to be
>>>> bounded in ways we can in principle control, though in practice it is
>>>> possible for them to do damage too quickly for us to prevent it.
>>>>
>>>> Perhaps my idea of intelligence is too limited. I am assuming that
>>>> something capable of being a real threat will be able to generate its own
>>>> ontologies, creatively model them in ways that build on and relate to
>>>> existing ontologies, simulate and test those new models, and so on, and
>>>> generate value judgments using these new models with respect to
>>>> overarching utility function(s). That is suspiciously similar to human
>>>> intelligence.
>>>>
>>>
>>> I wonder. What you describe seems like the way of thinking of a person
>>> trained in the scientific method (a very recent discovery in human
>>> history). Is this raw human intelligence? I suspect raw human intelligence
>>> is more like a kludge. It is possible to create rickety structures of order
>>> on top of that kludge, by a process we call "education".
>>>
>>>
>>
>> I don't mean to imply formal learning at all. I think this applies even
>> to any animal that dreams during sleep (say). Modeling the world is a very
>> basic function of the brain, even if both the process and the result are
>> kludges. With language and the ability to articulate models, humans can
>> get very good indeed at making them precise and building structures,
>> rickety or otherwise, upon the basic kludginess you're talking about.
>>
>>
>>>> I think something like this could do a lot of damage very quickly, but
>>>> by accident... perhaps in a way similar to the occasional meltdowns
>>>> caused by the collective behavior of microsecond market-making
>>>> algorithms.
>>>>
>>>
>>> Another example is big societies designed by humans.
>>>
>>
>> Big societies act much more slowly. But they are their own organisms; we
>> don't design them any more than our cells design us. We are not really
>> that good at seeing how they operate, for the same reason we find it hard
>> to perceive how a cloud changes through time.
>>
>>
>>>
>>>
>>>> I find it exceedingly unlikely that an AGI will spontaneously emerge
>>>> from a self-mutating process like you describe. Again, if this kind of
>>>> thing were likely, or at least not extremely unlikely, it would suggest
>>>> that AGI is a lot simpler than it really is.
>>>>
>>>
>>> This is tricky. The Kolmogorov complexity of AGI could be relatively low
>>> -- maybe it can be expressed in 1000 lines of Lisp. But the set of
>>> programs expressible in 1000 lines of Lisp includes some really crazy,
>>> counter-intuitive stuff (e.g. the universal dovetailer). Genetic
>>> programming has been shown to discover relatively short solutions that
>>> are better than anything a human could come up with, precisely because
>>> they are counter-intuitive.
>>>
>>
>> I suppose it is possible, and maybe my estimate of its likelihood is too
>> low. All the same, I would be rather shocked if AGI could be implemented
>> in 1000 lines of code. And no cheating - each line has to be less than 80
>> chars ;-)  Bonus points if you can do it in ArnoldC
>> <https://github.com/lhartikk/ArnoldC>.
>>
>
> ArnoldC is excellent! :)
> I raise you Piet:
> http://www.dangermouse.net/esoteric/piet.html
>
>
Wow, Piet is awesome, so creative - thanks for the link! However, it's
probably not the best choice for color-blind folks like me.
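
Re the universal dovetailer: for anyone who hasn't run into it, here's a
rough Python sketch of the dovetailing trick. It's my own toy illustration,
so caveat lector - simple counters stand in for the enumeration of programs
that a real universal dovetailer would step through on a universal machine.
At stage n it admits program n and then runs every admitted program one
more step, so each program gets unboundedly many steps even though
infinitely many are in play:

    from itertools import count, islice

    def program(n):
        # Stand-in for the n-th program: an endless stream of steps.
        for step in count():
            yield (n, step)

    def dovetail():
        running = []
        for n in count():                # stage n
            running.append(program(n))   # admit the n-th program
            for p in running:            # advance each admitted one step
                yield next(p)

    # First few interleaved steps:
    # (0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), ...
    for event in islice(dovetail(), 10):
        print(event)

Counter-intuitive in exactly the sense you mean: a handful of lines that,
in the limit, runs every program there is.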

Terren
