I really cannot figure out what you are saying, Steve, and I don't know
how it could actually apply to me. "...there could be NO way to understand
and/or debug such a thing" sounds like it might have some relevant meaning,
but to interpret it I have to go back to "without this model," which refers
to "constraining learning to ONLY learn things that fit this model." So my
best guess is that you are saying a mathematical method cannot be learned
by a computer program because there is no way that it could be debugged,
but that does not really make much sense. A mathematical method is a way
to compute something. An abstraction - even a dynamic abstraction - is
typically going to refer to or work with a class of particulars or data
objects (which are subject to the dynamic abstraction as operands are
subject to a program step).
If my guess about what you are saying is in the ballpark, then I can put
it another way. Can a computer program learn a sub-program from the IO
data environment? Yes. Whenever a computer program adapts to the IO data
environment it is rearranging its program. This is true even for a word
processor. My Gmail program is different from yours - what I mean is that
it is -effectively- different from yours. It does not matter that the
'program' is 'separate' from the 'data'.
From this technical view, one can make a small shift and think about some
available sub-programs being rearranged in response to the user input. To
make the program robust, this rearrangement should not be able to
introduce a bug into the program. However, it is possible that such a
system could be refined via learning so that it could adapt to a greater
variety of IO events of the 'kind' that it had learned to respond to. It
could go through a process of refinement, according to some goal, for the
different events of the 'kind' it was reacting to. There is the
possibility that such a program could just go off on its own, rearranging
the sub-programming to fit the data and then fitting the data to the
rearrangement of the sub-programming - but that is just a complication,
not an insurmountable impossibility.
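To make this concrete, here is a minimal sketch of the kind of program I
mean - one that rearranges a fixed library of sub-programs in response to
IO events and refines the arrangement toward a goal. Every name in it
(the sub-programs, the goal, the event 'kinds') is invented purely for
illustration, not taken from any real system:

```python
# Hypothetical sketch: a program that adapts by rearranging its
# sub-programs per 'kind' of IO event, refining toward a goal.
# All names here are invented for illustration.
import random

# A small library of known-good sub-programs (pure functions on ints).
SUBPROGRAMS = {
    "double": lambda x: x * 2,
    "inc": lambda x: x + 1,
    "square": lambda x: x * x,
}

class AdaptiveProgram:
    def __init__(self, goal):
        self.goal = goal          # goal(input, output) -> score, higher is better
        self.arrangements = {}    # event 'kind' -> ordered list of sub-program names

    def respond(self, kind, value):
        """Run the current arrangement of sub-programs for this kind of event."""
        pipeline = self.arrangements.setdefault(kind, list(SUBPROGRAMS))
        return self._run(pipeline, value)

    def refine(self, kind, value):
        """Try a rearranged pipeline; keep it only if it scores better.
        The rearrangement cannot introduce a bug, because it only
        reorders known-good sub-programs."""
        current = self.arrangements.setdefault(kind, list(SUBPROGRAMS))
        candidate = random.sample(current, len(current))  # a permutation
        if self.goal(value, self._run(candidate, value)) > \
           self.goal(value, self._run(current, value)):
            self.arrangements[kind] = candidate

    def _run(self, pipeline, value):
        for name in pipeline:
            value = SUBPROGRAMS[name](value)
        return value
```

The point of the sketch is the robustness constraint: the 'learning' step
only reorders sub-programs that already work, so adaptation can change
behavior without being able to corrupt the program itself.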
I am not sure I know what you were saying, but it is funny: as I tried to
write this out I started thinking about feasible programs that could have
some of the characteristics I was describing. I mean that different
programs could be written to demonstrate what I am talking about with
simple examples, and as the examples got more sophisticated, variations
of the programming could be introduced to demonstrate the slightly higher
level of sophistication. What I am getting at is that these ideas could
be tested in highly controlled tests. If I were working for a software
company, I might propose writing some of these controlled tests in order
to better study the kinds of things that I am talking about.
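A first controlled test could be as small as the following, purely a
sketch: the sub-programs, the goal of "get close to 100", the event
values, and the trial count are all made up for illustration. It fixes
the random seed (so the test is controlled and reproducible), hill-climbs
over rearrangements of the sub-programs, and checks the two invariants
that matter - the adaptation never regresses, and the rearranged program
still runs on every event:

```python
# Hypothetical controlled test of sub-program rearrangement.
# All names and values are invented for illustration.
import random

SUBPROGRAMS = {"double": lambda x: x * 2,
               "inc": lambda x: x + 1,
               "square": lambda x: x * x}

def run(pipeline, value):
    """Apply the sub-programs in the pipeline's order."""
    for name in pipeline:
        value = SUBPROGRAMS[name](value)
    return value

def goal(output):
    return -abs(output - 100)   # arbitrary goal: get close to 100

def controlled_test(trials=300, seed=0):
    """Refine an arrangement of sub-programs on a fixed event set,
    keeping only strict improvements, and check it never breaks."""
    random.seed(seed)           # controlled: reproducible
    events = [3, 5, 9]
    pipeline = list(SUBPROGRAMS)
    best = sum(goal(run(pipeline, v)) for v in events)
    for _ in range(trials):
        candidate = random.sample(pipeline, len(pipeline))  # a permutation
        score = sum(goal(run(candidate, v)) for v in events)
        if score > best:        # keep only strict improvements
            pipeline, best = candidate, score
        # invariant: the rearranged program still runs on every event
        assert all(run(pipeline, v) is not None for v in events)
    return pipeline, best
```

As the examples got more sophisticated, the same harness could swap in
richer sub-program libraries and event streams while keeping the seed and
the invariants fixed, which is what makes the test controlled.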
Jim Bromer


On Wed, Jun 12, 2019 at 9:55 PM Steve Richfield <[email protected]>
wrote:

> Jim,
>
> It is (nearly?) impossible to "learn" in a way that preserves value (e.g.
> 50%), dimensionality (e.g. probability), and probably significance (e.g.
> +/-10%) without constraining learning to ONLY learn things that fit this
> model. Without this model, it is just numerology that can NEVER EVER be
> made to work - because there could be NO way to understand and/or debug
> such a thing, either through automation (deep learning) or manually (as I
> have tried).
>
> Steve
>
>
> On Wed, Jun 12, 2019, 6:42 PM Jim Bromer <[email protected]> wrote:
>
>> The 'formal' part of the system can be acquired through learning.
>> Jim Bromer
>>
>>
>> On Wed, Jun 12, 2019 at 9:14 PM <[email protected]> wrote:
>>
>>> Yeh Steve - maybe that helps it try novel situations better???
>>> newsflash from me ->  i think that formalizing the system manually ends
>>> up a shallower system than what needs to be there for a developing system.
>>>   because its cheating it to do things,  is it where the term "deep
>>> learning" comes from?
>>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T395236743964cb4b-M1474d9a9c8d132b6f9ac78ff
Delivery options: https://agi.topicbox.com/groups/agi/subscription
