I do not remember hearing of transfer learning before, although I have
thought about things like that. I am still a little skeptical about deep
learning because there has been such a lack of obvious cross-application.
As I understand it, transfer learning can be applied when a trained model
contains a number of significant features that are also significant to
another model or problem. This is something I would like to try, but I
don't really have time to do so.
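For what it's worth, the usual deep-learning recipe for this is to reuse a
trained network's features and only retrain the last layer on the new
problem. A minimal sketch, assuming PyTorch with a torchvision ResNet-18
pretrained on ImageNet and an illustrative 10-class target task (the model
choice and class count are my own assumptions, not something from this
discussion):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network already trained on ImageNet (the "source" problem).
    backbone = models.resnet18(pretrained=True)

    # Freeze the learned features so they are reused rather than relearned.
    for param in backbone.parameters():
        param.requires_grad = False

    # Swap in a new output layer sized for the target problem
    # (10 classes is an arbitrary, illustrative choice).
    num_target_classes = 10
    backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

    # Only the new layer's parameters are updated when training on the new data.
    optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01)
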
The problem with deep learning, as I see it, is that a DL model can be
trained, yet it has no insight into the very features that it is trained to
detect. (The same thing occurs with naïve students.) And even if transfer
learning were used, the result would not be a single integrated DL model
but numerous discrete DL models. That is OK, but I believe it is evidence
that AGI-like knowledge has to be a hybrid of network learning and discrete
learning. So I am really thinking of a discrete-network kind of AI where
'transfer learning' could take place more automatically. In the kind of
discrete network that I have in mind, 'transfer learning' would be an
integrated part of all learning. The program would be looking for features
in the data which could be found and used in numerous ways. The problem
with this idea is that there are so many different ways to group individual
data points (into potential groups and generalizations) that the system
would be overwhelmed at the start by a combinatorial explosion of possible
groupings or generalizations. This combinatorial explosion is the
complexity problem.
A neural network has one way to simplify the problem, but it is not the
only possible way to do so. I think this is a serious problem, and the
difficulty of using a trained DL model to recognize the individual features
that occur in the data containing whatever it is trained to detect is why
DL is not AGI. It is not even narrow AGI. It may become narrow AGI, but it
definitely is not there yet.
Jim Bromer


On Thu, Aug 8, 2019 at 12:46 PM Brett N Martensen <[email protected]>
wrote:

> Jim, You are right on the money!
> It's called transfer learning, and it comes from having generalization in
> a compositional hierarchy in which more complex things are compositions of
> simpler but more general things. And the lowest-level, simplest, yet most
> general things are the stimuli that come from sensors, which also makes it
> grounded.
> Brett

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T187cd9f14076b86f-Md7a72ea7b52e0eea42e4706f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
