I mean, you'd think any mind that had just spent a hundred hours or a
subjective eternity poring over spreadsheets or proteins would be a
little spun out when it was brought into VR for a status update, just
for instance. The kind of minds we're envisioning would have to be
able to make these transitions with a grace we would find very
unsympathetic. Surely


On 10/5/08, Eric Burton <[EMAIL PROTECTED]> wrote:
>> Well, for the purpose of creating the first human-level AGI, it seems
>> important **to** wire in humanlike bias about space and time ... this will
>> greatly ease the task of teaching the system to use our language and
>> communicate with us effectively...
>
> The same thing occurred to me while browsing this thread. We really
> want an AGI that can function with human-like ease in a simulated
> Earth-like environment for reasons of training and communication. But
> by no means should this be the limit of its abilities! You'd like an
> intelligence that functions with native casualness while embodied in
> VR but can make smooth transitions into entirely different modalities
> without the time it spends there biasing its behavior in other ones.
>
> I mean, you want an AGI to be a master of all modes and able to
> switch between them seamlessly, neither of which is a particularly
> lauded property of human intelligence. It kind of goes to reinforce
> the notion that strong AI will likely be strongly non-human.
>
> On 10/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>>
>>> 3. I think it is extremely important that we give an AGI no bias about
>>> space and time like the one we seem to have. Our intuitive understanding
>>> of space and time is useful for our life on Earth, but it is completely
>>> wrong, as we know from the theory of relativity and quantum physics.
>>>
>>> -Matthias Heger
>>
>>
>>
>> Well, for the purpose of creating the first human-level AGI, it seems
>> important **to** wire in humanlike bias about space and time ... this will
>> greatly ease the task of teaching the system to use our language and
>> communicate with us effectively...
>>
>> But I agree that not **all** AGIs should have this inbuilt biasing ... for
>> instance an AGI hooked directly to quantum microworld sensors could become a
>> kind of "quantum mind" with a totally different intuition for the physical
>> world than we have...
>>
>> ben g
>>
>>
>>
>> -------------------------------------------
>> agi
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>

