I may have misunderstood what you were saying but it has been a great
help to me regardless.

When a concept is defined as being shaped by a value, that is a
definition of a dependent concept. So simply defining a concept as
dependent is not enough, from my point of view. A concept has to be
defined within a system of interacting (interdependent) concepts.
That is what I realized last night, and that is how my fundamental
theories differ from those of everyone who uses a value as a method
for shaping a concept (a concept-like object) in their fundamental
definitions. Part of a fundamental definition of a concept within the
Conceptual Relativism theory is that one has to be able to give a
fair accounting of how the concept might be developed and
subsequently refined (or rejected) within a system of interrelated
concepts, and that means that a fundamental definition of such
systems has to be made. The question of what a concept does is a
question of how the concept would work with other concepts. Different
concepts have different structures, just as the structure of a
computational syntax is based on different data objects such as
types, operands, and operations. But a conceptual type can be much
more varied than that.
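To make the syntax analogy concrete, here is a rough sketch. All of
the names here (Concept, ctype, refine, and so on) are invented for
illustration, not part of any existing system; a concept is modeled
like a syntactic object with a type, operands, and operations:

```python
# Hypothetical sketch: a concept modeled like a syntactic object,
# with a type, operands (other concepts it relates to), and
# operations (ways it can act on other concepts).

class Concept:
    def __init__(self, name, ctype, operands=None, operations=None):
        self.name = name
        self.ctype = ctype                  # e.g. "tool", "subject", "role"
        self.operands = operands or []      # related concepts
        self.operations = operations or {}  # operation name -> callable

    def apply(self, op_name, other):
        """Apply one of this concept's named operations to another concept."""
        return self.operations[op_name](self, other)

# A trivial operation: one concept "refines" another by narrowing its type.
def refine(tool, subject):
    subject.ctype = f"{subject.ctype}:refined-by-{tool.name}"
    return subject

measure = Concept("measure", "tool", operations={"refine": refine})
weight = Concept("weight", "subject")
measure.apply("refine", weight)
print(weight.ctype)  # prints "subject:refined-by-measure"
```

The point of the sketch is only that a conceptual "type" can carry a
history of how other concepts have acted on it, which a fixed data
type in a computational syntax does not.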

Concepts may be defined as independent but typically a concept is not
going to be independent. So a concept that can shape another concept
may in turn be shaped by that other concept. Isn't this what happens
when we analyze a situation using other knowledge that we possess or
acquire? Don't we learn that the methods used in the analysis of a
situation can be refined or shaped to be more effective when examining
the situation? The idea that the concepts used in understanding could
never be molded in response to the attempt to use them while shaping
another concept is nonsense.

A system of concepts used together, in which the concepts may
mutually shape one another as they are being used, is a good
definition of conceptual relativity. So the (conceptual) tools that
are used to examine some concept structure may themselves be shaped
by the subject concepts of the examination. That is an example of
conceptual relativity.
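The mutual-shaping idea can be simulated very simply. As a toy sketch
(the numeric "shapes" and rates below are invented for illustration),
each examination of a subject concept by a tool concept adjusts both,
with the tool changing more slowly than the subject:

```python
# Toy sketch of conceptual relativity: when a tool concept examines
# a subject concept, both are adjusted. Each concept is reduced to a
# single numeric "shape"; examination pulls the tool slightly toward
# the subject and the subject more strongly toward the tool.

def examine(tool_shape, subject_shape, tool_rate=0.1, subject_rate=0.3):
    """Return updated (tool, subject) shapes after one examination."""
    new_tool = tool_shape + tool_rate * (subject_shape - tool_shape)
    new_subject = subject_shape + subject_rate * (tool_shape - subject_shape)
    return new_tool, new_subject

tool, subject = 0.0, 1.0
for _ in range(5):
    tool, subject = examine(tool, subject)
# After repeated use, the tool has been reshaped by what it examined.
print(round(tool, 3), round(subject, 3))
```

Even this crude version shows the essential feature: the examining
tool does not come out of the examination unchanged.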

I believe that a fundamental definition of a concept which defers the
definition of the sources of the values that might shape the concept
is characteristic of the old, failed methods of narrow AI. It is
obvious that something fundamental has been missing in these kinds of
definitions, and it may be the discovery and definition of the
mechanisms of interrelation among interdependent concepts that is
needed to make a strong AGI definition.

No, I am not saying that there is anything wrong with traditional
approaches to discussing and defining the objects of AI programs.
What I am saying is that we have to have some mechanisms to keep
these referential complications from disabling the actions of the
objects derived from the fundamental definitions. The first step is
to understand that interrelations like those I call Conceptual
Relativity are at work, and then to try to find the mechanisms that
can discover these interrelations in the referential world and manage
them so that they do not create a devastating loss of traction when
they are applied to some problem.

These definitions do not have to be extensive, and they can be made
with abstractions and generalizations. So the approach can be used in
simple feasibility testing.

A well-known problem with (what I call) Conceptual Relativity in
human thought is that of changing the mechanisms of a test used to
measure or analyze a concept so that the results will fit the desired
outcome. So this is one kind of problem that my theory of Conceptual
Relativism might be expected to produce. How can I develop my
definitions of conceptual relativity to avoid that particular
situation? Well, I can't be sure that it will never occur, but I can
give the system some tendency to avoid it. However, it is a tricky
situation, because there are times when you need to change your
methodology of evaluation in order to develop greater insight into a
problem. But by thinking about the issue this way, I might be able to
emphasize how certain aspects of the concepts play different roles in
the situation that I am concerned about, and work it out so that the
different 'characters' in my conceptual theater will act to balance
the trickier changes introduced by another character in the system.
If you move the goal posts then you have to change the game. You
can't redefine the goals and say it is the exact same game as it was
before. That is a simple mechanism which might be defined and managed
by using the assignment of roles in the conceptual system.
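One way the role-assignment mechanism might be sketched (again, the
class and role names are purely hypothetical): tag concepts with
roles, and let a referee-style rule force any change to the goal to
end the current game, so the system cannot move the goal posts and
claim it is still playing the same game:

```python
# Hypothetical sketch of role assignment: an "evaluator" concept may
# propose changing the goal (the test criteria), but the referee rule
# says that changing the goal while a game is in progress ends that
# game. The evaluation methodology can still evolve, just not while
# pretending the old game is unchanged.

class ConceptSystem:
    def __init__(self, goal):
        self.goal = goal
        self.game_in_progress = True

    def propose_goal_change(self, new_goal, role):
        if role != "evaluator":
            raise PermissionError("only an evaluator may propose goal changes")
        if self.game_in_progress:
            # Referee rule: redefining the goal ends the current game.
            self.game_in_progress = False
        self.goal = new_goal
        return self.goal

system = ConceptSystem(goal="maximize accuracy")
system.propose_goal_change("maximize robustness", role="evaluator")
print(system.goal, system.game_in_progress)
```

This is the balancing act described above in miniature: one character
(the evaluator) is free to change the methodology, while another
character (the referee rule) makes sure the change is acknowledged as
a new game rather than hidden inside the old one.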

Jim Bromer


On Fri, Oct 10, 2014 at 3:19 PM, Mike Archbold via AGI <[email protected]> wrote:
> On 10/9/14, Jim Bromer <[email protected]> wrote:
>>  Mike Archbold said:
>> Jim, I think about the issue you emphasize of no 'independent
>> concepts' frequently.  It plays a role in my latest approximate
>> design.  Mike A
>>
>>
>>
>> The idea of using systems of interdependent concepts is something that
>> can be simulated easily since the interdependence is something that
>> can be abstracted in computational terms. So, if someone wanted just
>> to try it with meaningless abstract tokens or objects he could do
>> that. He could use values or discrete interrelations. New relations
>> could be introduced and studied. The concepts could be used with
>> different syntactic structures (or different functional relations
>> based on different characteristics) and so on. The same kind of thing
>> can be used with concepts that have very simplistic meanings.
>>
>> Because this idea that you described as "no independent concepts" is
>> (itself) computationally feasible that can lead to all sorts of simple
>> programmable possibilities. I am definitely going to try this.
>>
>> So a simulation of highly interdependent concepts and interdependent
>> conceptual 'roles' is something that can be very simple. Even though
>> conceptual relativism is something that goes beyond the
>> interdependence of the concepts, your abstraction of this essential
>> feature of the concept may be a help to my finding a good starting
>> point for my next attempt to write a simple AGI program.
>> Jim Bromer
>>
>
> There has been some talk here lately of Tononi's integrated
> information theory.  I only know the summary generalities, but it is a
> (I think) holistic approach with mathematics holding it up.  In the
> wiki, at least, they are not talking about having a bunch of
> stand-alone "independent concepts."
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/24379807-653794b5
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com

