I was asked whether the differences between my theories and the mainstream
theories, including the theories behind the AI/AGI frameworks now being
devised, are just a matter of semantics. I don't think they are.
Although misunderstandings occur all the time, I have been trying for
some years, without much success, to get people to discuss conceptual
relativism with me. My ideas are derived from problems that are visible
to anyone who has studied the field carefully, but my analysis rests on
what I believe are necessary insights that will be needed to solve those
problems. If someone does not think that I have taken that first step,
then he is going to be less likely to talk to me about it, because my
analysis of the problem is not going to look the same as his. So I will
try to end this thread with a brief re-explanation of my current
theories about Concepts and Conceptual Structures.

A true AGI program will need to derive concepts about its interactions
with the IO data environment it is exposed to.
It takes other concepts to interpret a concept: a concept is read, so
to speak, by means of other concepts. This means that the same concept
may be interpreted differently in different contexts, and it suggests
that concepts can play roles as they act with other concepts. Concepts
will be specialized. From this I deduce that an awareness of conceptual
relativism is going to be a key insight for producing AGI. The
structure of the relationships between Concepts is going to be
relative as well.
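As a rough illustration of what I mean (the names and mechanism here are purely hypothetical, a toy sketch rather than a proposed implementation), one concept can act as a lens through which another concept is read, so the same concept yields different interpretations depending on which concepts interpret it:

```python
# Hypothetical sketch of conceptual relativism: a concept is "read"
# through other concepts, so its interpretation is relative to the
# conceptual context doing the reading. All names are illustrative.

class Concept:
    def __init__(self, name, readings):
        self.name = name
        # 'readings' maps another concept's name to the interpretation
        # this concept produces when it is used as a lens.
        self.readings = readings

    def interpret(self, other):
        # The interpretation of 'other' depends on *this* concept,
        # not on 'other' alone.
        return self.readings.get(other.name, other.name + " (unfamiliar)")

# Two specialized concepts acting as lenses on the same concept "bank".
finance = Concept("finance", {"bank": "an institution holding money"})
geography = Concept("geography", {"bank": "the land alongside a river"})
bank = Concept("bank", {})

print(finance.interpret(bank))    # reading relative to finance
print(geography.interpret(bank))  # reading relative to geography
```

The point of the sketch is only that interpretation is a relation between concepts, not a property of a concept in isolation.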

Now, if I am right about this and I had an effective methodology for
dealing with conceptual relativism, I should be able to produce a
meaningful advance in AGI sooner than others could. However, I don't
have an effective mechanism; the reason I keep trying to discuss this
idea is that I don't have it all figured out. Still, if my thinking is
along the right lines, I should be able to produce something of value
even before I show how this might work in an AGI program.

Jim Bromer


On Sat, Dec 27, 2014 at 12:11 AM, Jim Bromer <[email protected]> wrote:
> No one in these groups has been interested in discussing conceptual
> structure with me. I think that is a bit odd. I suppose I should draw
> some conclusions from that, accept it and move on.
>
> Structure is more than correlation. You might 'discover' structure
> using correlation, but only if your program is able to create theories
> about structure and apply them via some mechanism other than
> correlation. One possibility is that structure is conceptually
> abstract, so that a handful of relations would be adequate to
> represent an immense variety of structural relations. If that is true,
> then conceptual structure should be easy to apply and to study, which
> should mean that conceptual structure generates a lot of discussion in
> AI / AGI groups like this.
>
> The only conclusion I can come to is that most of the people in this
> group are not actually working on viable projects, so they are
> preoccupied with more familiar mainstream discussions and with
> outlier conjectures that could have a major impact on the feasibility
> of AGI if they were themselves feasible.
> Jim Bromer


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now