>
>
> AGI's bottleneck must be in *learning*; anyone who focuses on something
> else is barking up the wrong tree...
>

Not just a bottleneck: it's the very definition of GI, the fitness /
objective function of intelligence. Specifically, unsupervised / value-free
learning, AKA pattern recognition. Supervision and reinforcement are simple
add-ons. Anything else is a distraction. "Problem solving" is meaningless:
there is no such thing as a "problem" in general, except as defined above.
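A minimal toy sketch of what value-free pattern recognition means here: grouping raw inputs by similarity alone, with no labels and no reward signal. (This is an illustrative example only, not CogAlg's actual algorithm; the function and threshold are made up for the illustration.)

```python
# Toy "value-free" learning: cluster 1-D inputs purely by similarity.
# No labels (supervision) and no rewards (reinforcement) are involved.

def cluster_by_similarity(values, max_gap):
    """Greedily group sorted values whose neighbors differ by <= max_gap."""
    clusters = []
    current = []
    for v in sorted(values):
        if current and v - current[-1] > max_gap:
            clusters.append(current)  # gap too large: close current cluster
            current = []
        current.append(v)
    if current:
        clusters.append(current)
    return clusters

print(cluster_by_similarity([1.0, 1.2, 5.0, 5.1, 9.9], max_gap=1.0))
# -> [[1.0, 1.2], [5.0, 5.1], [9.9]]
```

Supervision or reinforcement would then be add-ons: an external signal that re-weights which discovered patterns matter, not the discovery process itself.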


> Now think about this:  we already have the weapon (deep learning) which is
> capable of learning *arbitrary* function mappings.
>

Yeah, we have some random hammer, so every problem looks like a nail.


> We are facing a learning problem which we already know the formal
> definition of.
>

No, you don't. There is no constructive theory behind ANNs; they're just a
hack vaguely inspired by another hack: the human brain. Which is a misshapen
kludge, and this list makes that perfectly obvious. Sorry, can't help it.

Anyway, this is my alternative:  https://github.com/boris-kz/CogAlg


On Mon, Jun 11, 2018 at 3:41 AM YKY via AGI <[email protected]> wrote:

> On Mon, Jun 11, 2018 at 2:55 PM, MP via AGI <[email protected]> wrote:
>
>> Right. Most of them work off a variant of depth-first search, which
>> would usually lead to a combinatorial explosion, or some kind of heuristic
>> to cut down on search time at some other cognitive expense...
>>
>> Not to mention most of them run off human-made rules, rather than
>> learning them for themselves through subjective experience.
>>
>> I highly doubt even Murray’s bizarre system subjectively learns. There
>> are hand-coded concepts at the beginning of his trippy source code.
>>
>> How can your system overcome this? How can it subjectively learn without
>> human intervention?
>>
>
>
> AGI's bottleneck must be in *learning*; anyone who focuses on something
> else is barking up the wrong tree...
>
> Now think about this:  we already have the weapon (deep learning) which is
> capable of learning *arbitrary* function mappings.  We are facing a
> learning problem which we already know the formal definition of.  So we
> just need to apply that weapon to the problem.  How hard can that be?
>
> Well, it turns out it's very hard to understand the abstract (algebraic)
> structure of logic; that took me a long time to master, but now I have a
> pretty clear view of its structure.
>
> Inductive learning in logic is done via some kind of depth-first search in
> the space of logic formulas, as you described.  The neural network can also
> perform a search in the weight space, maximizing some objective functions.
> So the weight space must somehow *correspond* to the space of logic
> formulas.
>
> In my proposal (which has just freshly failed), I encoded the formulas as
> the output of the neural network.  That is an example, although I neglected
> the first-order-logic aspects.
>
> Does this answer your question?
>
> And thanks for asking, because that helps me to clarify my thinking as
> well... ☺
>
>
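The formula-space / weight-space correspondence described in the quoted message can be sketched minimally (my own illustration, not YKY's actual proposal): a single threshold unit realizes different Boolean formulas as its weights and bias change, so a search over weights is implicitly a search over a subset of formula space.

```python
# Hedged sketch: points in weight space encode points in formula space.
# One threshold unit computes different Boolean formulas depending on
# its (w1, w2, bias), so moving through weight space moves through
# (a subset of) logic-formula space.

def threshold_unit(w1, w2, bias):
    """Return a function computing int(w1*x1 + w2*x2 + bias > 0)."""
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + bias > 0)

AND = threshold_unit(1, 1, -1.5)  # fires only when both inputs are 1
OR  = threshold_unit(1, 1, -0.5)  # fires when either input is 1

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([AND(a, b) for a, b in inputs])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in inputs])   # [0, 1, 1, 1]
```

Gradient search over (w1, w2, bias) against an objective then plays the role that depth-first search plays in the space of explicit formulas.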

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T731509cdd81e3f5f-M85c87b4f6aeac8dd7343b99c
Delivery options: https://agi.topicbox.com/groups