A conceptual structure is most straightforwardly expressed in OpenCog as a
hypergraph whose nodes/links may either be concrete ones from the
AtomSpace, or VariableNodes representing "slots" to be filled.
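The slot-filling idea can be sketched in plain Python. To be clear, this is a toy illustration of the matching semantics only, not the actual OpenCog AtomSpace API; the link types, node names, and helper functions are all invented for the example:

```python
# Toy illustration of a conceptual structure as a hypergraph pattern with
# variable "slots" -- NOT the real OpenCog API, just the idea in plain Python.

# Concrete links in a miniature "AtomSpace": (link_type, node, node)
atomspace = {
    ("Inheritance", "cat", "animal"),
    ("Inheritance", "dog", "animal"),
    ("Inheritance", "animal", "living_thing"),
}

def is_var(x):
    """A slot to be filled, written as '$name'."""
    return isinstance(x, str) and x.startswith("$")

def match_link(pattern, link, bindings):
    """Try to unify one pattern link with one concrete link."""
    b = dict(bindings)
    for p, c in zip(pattern, link):
        if is_var(p):
            if p in b and b[p] != c:
                return None        # slot already bound to something else
            b[p] = c
        elif p != c:
            return None            # concrete element mismatch
    return b

def match(pattern_links, space):
    """Find all variable bindings satisfying every link of the pattern."""
    results = [{}]
    for pat in pattern_links:
        results = [b2 for b in results for link in space
                   if (b2 := match_link(pat, link, b)) is not None]
    return results

# Pattern: $x inherits from $y, and $y inherits from living_thing
pattern = [("Inheritance", "$x", "$y"),
           ("Inheritance", "$y", "living_thing")]

for b in match(pattern, atomspace):
    print(b)
```

Matching here binds $x to "cat" and "dog" with $y as "animal" in both cases; the pattern with its slots left open is itself a first-class structure, which is the point of using VariableNodes.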

Learning these may be done via PLN inference, or via Pattern Mining for
frequent/surprising conceptual-structure patterns in the AtomSpace.
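The mining side can also be sketched in miniature (a toy in plain Python, not the real OpenCog pattern miner; all names are invented): abstract pairs of concrete links into templates with variable slots, then count how often each template recurs. High-support templates are candidate conceptual structures.

```python
# Toy sketch of frequent-pattern mining over a hypergraph: abstract concrete
# links into templates with variable slots and count how often each template
# recurs. NOT the real OpenCog pattern miner, just the core idea.
from collections import Counter
from itertools import combinations

atomspace = [
    ("Inheritance", "cat", "animal"),
    ("Inheritance", "dog", "animal"),
    ("Eats", "cat", "fish"),
    ("Eats", "dog", "meat"),
]

def abstract_pair(l1, l2):
    """Replace the nodes in a pair of links with slot names ($v0, $v1, ...),
    reusing the same slot for repeated nodes, yielding a pattern template."""
    names = {}
    def slot(node):
        if node not in names:
            names[node] = f"$v{len(names)}"
        return names[node]
    return ((l1[0], slot(l1[1]), slot(l1[2])),
            (l2[0], slot(l2[1]), slot(l2[2])))

# Count support for every two-link template found in the space
support = Counter(abstract_pair(a, b) for a, b in combinations(atomspace, 2))
for pattern, count in support.most_common(2):
    print(count, pattern)
```

On this tiny space the template "$v0 inherits from $v1 and $v0 eats $v2" gets support 2 (from cat and dog), which is the kind of recurring abstraction a miner would surface; surprisingness measures would then compare such counts against what independence predicts.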

This is important, but not something I need to discuss at the moment. It's
kind of a "solved problem" at the representation level, and we are working
on getting the reasoning/learning aspect of such structures to work
better.

-- Ben G

On Sat, Dec 27, 2014 at 1:46 PM, Jim Bromer via AGI <[email protected]> wrote:

> The two of you haven't actually said anything about the subject of the
> thread. You were able to respond to the personal part of my statement
> (which is probably the underlying reason why I make personal statements
> like that) but you did not discuss some of your knowledge about the
> subject.
>
> I didn't get it at first because, as I am trying to say, you did not
> actually say anything about the topic. However, because I have a little
> familiarity with PM's program, I finally made a guess about what he is
> talking about.
>
> I think you are probably confusing my notion of 'conceptual structure'
> with some kind of fundamental abstract structure of pre-programmed
> relations, like the GOFAI representations of conceptual relations where
> concepts were often values that filled 'slots' (in the pre-defined
> structure). So PM's conceptual structure would be the relations that were
> pre-defined by him. Right there is one difference between (what I believe
> is) his notion of conceptual structure and my own. I believe that concepts
> in AGI have to be learned so I question whether conceptual structures can
> be adequately predefined by PM or anyone else.
>
> So even though I still haven't been able to get anyone to discuss this
> topic with me, I have been able to read the tea leaves of their pretensions
> and learned something. If conceptual structure were something that could be
> represented with a few combinations of pre-defined abstractions, then the
> topic would be obvious and trivial, because it permeates GOFAI methodologies.
>
> This makes so much sense that I must have reached this conclusion before.
> If you are able to truly understand the notion of a conceptual structure
> (so that you could discuss it intelligently), then it would have to be a
> concept itself.
>
> Thank you. I guess it really is time for me to move on. I know how
> annoying I can be, but maybe you should ask yourselves whether your
> responses were anything other than tedious and trivial.
>
>
> Jim Bromer
>
> On Sat, Dec 27, 2014 at 12:18 AM, Piaget Modeler via AGI <[email protected]>
> wrote:
>
>> And some of us are onto other things.
>>
>> ~PM
>> --------------
>>
>> > Date: Sat, 27 Dec 2014 00:11:41 -0500
>> > Subject: [agi] Conceptual Structure?
>> > From: [email protected]
>> > To: [email protected]
>>
>> >
>> > No one in these groups has been interested in discussing conceptual
>> > structure with me. I think that is a bit odd. I suppose I should draw
>> > some conclusions from that, accept it and move on.
>> >
>> > Structure is more than correlation. You might 'discover' structure
>> > using correlation but only if your program was able to create theories
>> > about structure and apply them via some mechanism other than
>> > correlation. One possibility is that structure is conceptually
>> > abstract so a handful of relations would be adequate to handle the
>> > representation of an immense variety of structural relations. But if
>> > that is true, then that should make conceptual structure easy to apply
>> > and to study. And that should mean that conceptual structure is
>> > something that should generate a lot of discussion in AI / AGI groups
>> > like this.
>> >
>> > The only conclusion I can come to is that most of the people in this
>> > group are not actually working on viable projects, so they are
>> > preoccupied with more familiar mainstream discussions and discussions
>> > about outlier conjectures that could have a major impact on the
>> > feasibility of AGI if they were themselves feasible.
>> > Jim Bromer
>> >
>> >
>> > -------------------------------------------
>> > AGI
>> > Archives: https://www.listbox.com/member/archive/303/=now
>> > RSS Feed:
>> https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
>> > Modify Your Subscription: https://www.listbox.com/member/?&;
>> > Powered by Listbox: http://www.listbox.com
>>
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man." -- George Bernard Shaw


