Right. Most of them work off a variant of depth-first search, which usually 
leads to a combinatorial explosion, or lean on some heuristic that cuts down 
search time at some other cognitive expense...
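To make the explosion concrete, here is a minimal sketch (the branching factor and depth are illustrative assumptions, not taken from any particular system): an exhaustive depth-first search over a full tree with branching factor b touches on the order of b^d nodes at depth d, which is why unguided rule-based search blows up so quickly.

```python
def dfs_node_count(branching: int, depth: int) -> int:
    """Count the nodes a plain exhaustive DFS visits in a full tree."""
    if depth == 0:
        return 1  # a leaf: just the current node
    # this node, plus a full subtree under each of its children
    return 1 + branching * dfs_node_count(branching, depth - 1)

# Even modest numbers explode: branching 10, depth 6
print(dfs_node_count(10, 6))  # 1111111 nodes
```

Heuristics prune this tree, but each pruning rule is itself a bet about which branches matter, which is the "cognitive expense" above.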

Not to mention most of them run off human-made rules, rather than learning 
for themselves through subjective experience.

I highly doubt even Murray’s bizarre system subjectively learns. There are 
hand-coded concepts at the beginning of his trippy source code.

How can your system overcome this? How can it subjectively learn without human 
intervention?

Sent from ProtonMail Mobile

On Mon, Jun 11, 2018 at 1:51 AM, YKY via AGI <[email protected]> wrote:

> On Mon, Jun 11, 2018 at 2:39 PM, MP via AGI <[email protected]> wrote:
>
>> It seems you’re taking a deep learning approach to the classic cognitive 
>> architectures - like you’re reengineering ACT-R.
>>
>> What’s different? Why is this any better?
>
> Deep learning's learning algorithm (back-prop / gradient descent) is way 
> more efficient than classical logic-based learning (based on discrete 
> combinatorial search inside a humongous lattice).  Ben Goertzel first told 
> me that some years ago 😝
> [Artificial General Intelligence List](https://agi.topicbox.com/latest) / AGI 
> / see [discussions](https://agi.topicbox.com/groups/agi) + 
> [participants](https://agi.topicbox.com/groups/agi/members) + [delivery 
> options](https://agi.topicbox.com/groups) 
> [Permalink](https://agi.topicbox.com/groups/agi/T731509cdd81e3f5f-Mdf1b6da8d994f7e447d3e46a)
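The contrast YKY draws can be sketched in a few lines. This is only an illustration under assumed conditions (a one-parameter quadratic loss, a hand-picked learning rate and grid), not either party's actual system: gradient descent exploits local slope information and converges in a handful of steps, while a discrete lattice search has to evaluate candidates one by one, and its cost grows combinatorially with the number of parameters.

```python
def gradient_descent(lr: float = 0.1, steps: int = 100) -> float:
    """Minimize loss(w) = (w - 3)^2 by repeatedly stepping downhill."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2
        w -= lr * grad       # move against the gradient
    return w

def lattice_search(grid_points: int = 100) -> float:
    """Brute-force the same minimum over a discrete grid on [0, 10]."""
    candidates = [10 * i / (grid_points - 1) for i in range(grid_points)]
    # every candidate must be evaluated; no slope information is used
    return min(candidates, key=lambda w: (w - 3) ** 2)

print(gradient_descent())  # converges to ~3.0
print(lattice_search())    # best of 100 evaluated grid points
```

In one dimension the difference is mild; the point is that the grid's size multiplies across dimensions, whereas gradient descent's per-step cost stays linear in the number of parameters.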