That sounds interesting, please look it up if you can.

On Wed, 12 Sep 2018 at 15:26, Nanograte Knowledge Technologies via AGI <
[email protected]> wrote:

> Jim
> 
> Bootstrapping a computational platform with domain knowledge (seeding it
> with insights) was already done a few years ago by the former head of AI
> research in France. I need to find his blogs again, but apparently he had
> amazing results with regard to re-solving classical mathematical problems.
> 
> Our question is: would that constitute AGI?
> 
> I appreciate your comment that such an approach would not be considered
> radical at all. However, the claim you make immediately thereafter, that
> the approach would help one think about the problem in a different way, is
> refutable.
> 
> Thinking in terms of relationships suffers the same fate: not radical,
> and not thinking in a new or different way.
> 
> As such, we need to think as radically as we possibly can. We need to
> find a few radical approaches and see whether they could be focused on a
> few avenues of pragmatic research. May the best approach win.
> 
> For example, instead of relationships, think of free-will (random)
> associations. This is not a semantic ploy, but a radical departure in terms
> of AGI architecture.
> 
> Furthermore, instead of seeding, rather allow the computational platform
> to Find, Frame, Make and Share. This would mark another radical departure
> from current thinking (I did come across a similar approach recently).
> 
> Rob
> 
> ------------------------------
> *From:* Jim Bromer via AGI <[email protected]>
> *Sent:* Wednesday, 12 September 2018 2:25 PM
> *To:* [email protected]
> *Subject:* [agi] Growing Knowledge
> 
> The idea that an AGI program has to be able to 'grow' knowledge is not
> conceptually radical, but the idea that a program might be seeded with
> certain kinds of insights does make me think about the problem in a
> slightly different way. If a program is developed along principles meant
> to build on insights provided as it explores different kinds of subjects,
> I can see this theory as a transition: from programming discrete
> instructions that correspond to a particular sequence of computer
> operations, to programming with instructions that have the potential to
> grow relationships between the knowledge data. The kinds of relationships
> do not need to be absolutely pre-determined, because basic relationships
> and references to specific ideas can implicitly develop into more
> sophisticated relationships that would only need to be recognized. For
> example, the abstraction of generalization seems pretty fundamental to
> Old AI. However, I believe that just by using more basic relationships,
> which can refer to other specific ideas and to groups of ideas,
> relationships that effectively refer to a kind of abstraction may develop
> naturally, in primitive forms. It would then be necessary to 'teach' the
> AGI program to recognize and appreciate these abstractions so that it
> could use abstraction more explicitly.
> Jim Bromer
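Jim's notion above, that basic relations between knowledge items can implicitly grow into abstractions which only need to be recognized, might be sketched very loosely in code. The `KnowledgeStore` class and all names below are invented purely for illustration; nothing in the thread prescribes this design:

```python
# Illustrative sketch only (not from the thread): knowledge items linked
# by basic relations, with a crude recognizer that labels a primitive
# "generalization" whenever several items share the same relation target.
from collections import defaultdict


class KnowledgeStore:
    def __init__(self):
        # relations[item] holds (relation, target) pairs for that item
        self.relations = defaultdict(set)

    def relate(self, item, relation, target):
        self.relations[item].add((relation, target))

    def recognize_generalizations(self, min_members=2):
        # Group items by a shared (relation, target) pair; each group
        # is an implicitly grown, primitive abstraction.
        groups = defaultdict(set)
        for item, pairs in self.relations.items():
            for pair in pairs:
                groups[pair].add(item)
        return {pair: members for pair, members in groups.items()
                if len(members) >= min_members}


ks = KnowledgeStore()
ks.relate("sparrow", "can", "fly")
ks.relate("eagle", "can", "fly")
ks.relate("penguin", "can", "swim")
found = ks.recognize_generalizations()
print(sorted(found[("can", "fly")]))  # ['eagle', 'sparrow']
```

The point of the sketch is only that no "flying things" category was pre-determined; it emerges from the basic relations and is then recognized, much as the paragraph describes.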


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T032c6a46f393dbd9-Mb40e6e3e04e04b8a3b8069b9
Delivery options: https://agi.topicbox.com/groups/agi/subscription
