Is there a paper or document I can read on conceptual relativism? I think
what I'm lacking is simply a full-bodied description of the idea. Either
(1) I've already implemented it or (2) I'm just not getting it. A pair of
words just isn't much to go by when trying to capture a concept that
operates at this level of abstraction. When you say them, I immediately
think of a long list of capabilities my system has, but you've rejected
most of the ones I've named off.

My system:
  * is able to very flexibly deal with concepts, abstractions, and "truth".
  * is built on a data structure, the semantic network, which is capable of
modeling any other data structure. (Is there a class of data structures
that is analogous to the class of Turing-complete languages? I think it's
obvious, but I've never seen a paper on it. Someone ought to give it a go.
I would, but it's not a priority for me given my limited schedule.)
  * has a representational scheme within this semantic network that is
capable of fully capturing the meaning of an arbitrary sentence, verifiably
so because meanings can be converted back into sentences without loss of
information (or will be shortly, once I've finished writing the code for it).
  * is fully capable of representing meta meanings, meanings that
correspond to statements about other meanings or statements.
  * doesn't define kinds or classes of things by an explicit definition,
but rather through context, usage, and experience, allowing the meaning of
concepts to shift over time.
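
For concreteness, the links-as-nodes, meta-meaning, and context-defined-concept
points might be sketched like this (hypothetical names, in Python; this is just
the shape of the scheme, not my actual code):

```python
class Node:
    """A concept in the semantic network."""
    def __init__(self, label=None):
        self.label = label
        self.contexts = []  # usages that implicitly define the concept

class Link(Node):
    """A relation between two nodes.  Because Link subclasses Node,
    a link can itself be the endpoint of another link, which is what
    makes meta-meanings (statements about statements) representable."""
    def __init__(self, relation, source, target):
        super().__init__(label=relation)
        self.source = source
        self.target = target

# An ordinary meaning: (Fido) --is_a--> (dog)
fido, dog = Node("Fido"), Node("dog")
statement = Link("is_a", fido, dog)

# A meta-meaning: a statement about the statement above.
alice = Node("Alice")
belief = Link("believes", alice, statement)

# No explicit definition of "dog": each usage context accumulates,
# so the concept's meaning can shift over time.
dog.contexts.append(statement)
```

Because every link is also a node, arbitrarily deep statements-about-statements
fall out of the structure for free, rather than needing a separate mechanism.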

I just don't see what's missing. Which of these comes closest to the mark
on capturing the functionality you are getting at? Reading up on this, the
ideas of paradigm shifts and reality filters come to mind. My personal
motto has for some time been, "Expectation skews perception," which fits
right in with what I see in the literature I have come across on this
topic. Given this, I would expect that the last of the points above, that my
system works not from explicit definitions but through
context/usage/experience, is what captures what you've been trying to convey.
But then again, maybe I'm just filtering my perception of reality with my
own expectations.


On Tue, Oct 23, 2012 at 6:10 PM, Jim Bromer <[email protected]> wrote:

> Aaron,
> You can't pin conceptual relativism down by providing an example of
> a 'thought' that someone's unimplemented AGI program would be unable to
> consider without resorting to a profoundly imaginary example and abstract
> language.
>
> I have been saying that (many) old-fashioned AI programs would be capable
> of dealing with conceptual relativism if they were given the capability to
> do so.  So I haven't been saying that your AGI model is definitely
> incapable of producing or even dealing with conceptual relativism. I am
> saying that you have to deal with it sooner or later, and sooner is better
> than later.
>
> You cannot pin conceptual relativism down with concrete examples because
> once you did, you could come up with a way to "explain" the effect away
> (because you are capable of intelligent thought) if that is your primary
> motivation.
>
> I cannot understand why this isn't obvious.
>
> The first step is to deal with conceptual relativism in your own thinking.
>
> My goal, for some time now, has been to get the core elements of AGI and
> then start with a really simple model that I can then test and develop in a
> cyclical fashion.  So by starting with a system that enables conceptual
> relativism in an extremely simple model I can start working with the system
> sooner. It is a little like trying to make a semantic network (or perhaps I
> should call it a flexible definitive conceptual network) as simple as a
> neural network.  But because it is a definitive network, I can easily
> modify it to make it more sophisticated, which is something that is almost
> impossible to do with a neural network.
>
> Jim Bromer
>
>
>
>
> On Tue, Oct 23, 2012 at 11:34 AM, Aaron Hosford <[email protected]> wrote:
>
>> I'm not asking for a concrete example because I don't believe you. I'm
>> asking for it because I'm not sure what you're saying my design should be
>> capable of. I'm trying to narrow down the meaning of your words to what
>> you're actually trying to convey. The examples you've given didn't look
>> like examples, they looked like summaries of something I need more details
>> in order to follow. It's not that you're talking about abstractions. It's
>> that you're using abstract language to talk about them. I can agree on the
>> general case of what you're saying without seeing how my system
>> specifically fails to implement it. You have called it conceptual
>> relativism, without saying what that actually means. There are many ways to
>> make concepts relative, and some of them are nothing like others. Which one
>> are you talking about? And why does my system need it? Give me an example
>> of a thought my system cannot have without this sort of functionality.
>> That's the method by which I move my design forward. I find something my
>> design can't do and that a human being can, and I modify my design until it
>> includes that capability. I'm just struggling to see what it is you're
>> saying my system lacks.
>>
>> I'm not starting with language because it's easy. I'm starting with it
>> because I think it connects most directly with those higher cognitive
>> faculties. And right now, I'm not implementing reasoning in any way. I'm
>> simply implementing the format in which the knowledge is to be represented
>> during reasoning. I've got to have this foundation before I can do anything
>> with reasoning, not because I'm avoiding the task, but because this is a
>> necessary subgoal of the task. When we think, we are manipulating meaning
>> based on "rules" (more like hints, heuristics, and tendencies, but "rule"
>> is convenient to say) we have previously observed in the behaviors of
>> meanings extracted from perception. A thought is thus the generation of a
>> new meaning from existing ones. How can I implement thinking of *any* sort,
>> conceptually relative or not, without having an underlying data structure
>> to represent meaning?
>>
>> Yes, a restricted implementation of meaning will lead to restricted
>> capability in the reasoning carried out on it. If a thought can't be
>> represented, it can't be generated. But my system can represent thoughts
>> about thoughts, thoughts about meaning. I haven't *yet* implemented the act
>> of thinking those thoughts, but the capability is there because I knew it
>> would be necessary later. There is nothing that a human being can't think
>> about, once we're aware of it. That's what I'm trying to build. If my
>> system can represent thoughts in the general case, what precisely is it
>> missing other than the mechanisms to generate new thoughts from old ones,
>> which is a recognized need already? I'm of the opinion that anything a
>> human being can think can be expressed in language (other than the nature
>> of raw qualia, which doesn't need to be communicated anyway, since the
>> other person's version will do just as well), even if it takes years
>> to find the right words. If there's something I've missed, I need to know
>> what that is. I'm a little frustrated, because you're telling me I've
>> missed something, but you won't come out and say exactly what that is.
>>
>>
>>
>>
>> On Tue, Oct 23, 2012 at 8:04 AM, Jim Bromer <[email protected]> wrote:
>>
>>> On Mon, Oct 22, 2012 at 9:28 PM, [email protected] <
>>> [email protected]> wrote:
>>>
>>>> So links can act as nodes, basically, as in a generalized hypergraph?
>>>> That's also built into my system. The Link class is a subclass of the Node
>>>> class. Nothing particularly difficult or unpleasant there.
>>>>
>>>> A story can define a distinction between kinds in my system, but it
>>>> would do so implicitly, through context, rather than explicitly through a
>>>> formalized mechanism.
>>>>
>>>> While neither the links-as-nodes nor the story-as-concept is
>>>> specifically used or accounted for in my design, it is easily extensible in
>>>> both of these directions. What I'm looking for is a particular use case, a
>>>> reason for paying special attention to this sort of functionality, as
>>>> opposed to merely including the capability should it later be found to need
>>>> that special attention.
>>>>
>>>>
>>>>
>>> -----------------------------------------
>>>
>>> What I am saying - to you - is that I think many guys who I have talked
>>> to seem to have the sense that the kinds of things that I am talking about
>>> are high level effects, just as you and Piaget Modeller (I can
>>> never remember his name) did.  So then, their low level implementation
>>> would have the *potential* to represent these issues of conceptual
>>> relativism once their programs got to the point where they understood basic
>>> sentences. It is as if they are so focused on (what they consider to be
>>> the) low level implementation issues that they then imagine that once their
>>> programs are able to deal insightfully with simple expressions (or
>>> observations and interactions), the rest will be easy.  What I am
>>> saying is that you have to work these capabilities into your basic
>>> programming because these are the essence of intelligence.  It is this
>>> genuinely rational-creative talent which is what drives
>>> intelligence.  These skills are not (just) high level capabilities;
>>> they are the essence of what it is that we are talking about when we
>>> talk about intelligence.  So if you are going to create a program that can
>>> learn to use natural language, then the program must implement these
>>> skills from the start (even though it might take some time for the
>>> program to learn something that would demonstrate how these can be used
>>> effectively.)
>>>
>>> It is interesting that you are, like Mike, demanding a concrete
>>> example. My simply telling you that a program that is to learn
>>> to work with a human language has to develop skills for forming
>>> abstractions, generalizations, and categorical definitions from stories
>>> (story-like conversation) isn't enough to convince you that these
>>> so-called higher level capabilities should be implemented at a low level
>>> of implementation.  Stories (and examples) occur at different levels of
>>> abstraction.  These levels are relative; there is no such thing as a purely
>>> concrete example or a pure abstraction.  So the truth is that I have
>>> already given you quite a few examples; it is just that they have been
>>> expressed as abstractions.
>>>
>>> Saying that your model would be potentially capable of representing the
>>> kinds of relations that I am talking about is somewhat superficial. You are
>>> saying that the superficial aspects of representation would be powerful
>>> enough to handle these kinds of effects as if you were not fully realizing
>>> that your programming has to be explicitly written to actually implement
>>> these kinds of effects.
>>>
>>> By implementing these ideas at a lower level of the design, what
>>> happens?  The program suddenly becomes quite unwieldy.  That means
>>> that your program has to deal with all the problems of creative thinking
>>> from the start.  Ok, but so what.  That is exactly where you want to be.
>>> Jump in and get to work.  Stop trying to focus on what you once conceived
>>> as the starting point for developing an AGI project and start working on
>>> the central currents of reasoning.  You think that by starting with
>>> something that can be broken into simpler pieces you can locate the
>>> ideal starting point, but you haven't.  You broke it up in the wrong
>>> way. The right way is to examine, not glimpse, but examine the central
>>> issue of rational creativity and take a look at the fact that it can be and
>>> should be implemented at a "low level".
>>>
>>> You cannot pick out the parts of low level implementation in such a way
>>> as to avoid the complications of genuine AGI.  A dedicated AGI programmer
>>> is going to need to deal with them eventually.
>>>
>>> Jim Bromer
>>>
>>>>  On Oct 22, 2012 8:04 PM, Jim Bromer <[email protected]> wrote:
>>>>
>>>> A relatively concrete categorical definition of a concept might be a
>>>> very short "story" denoting the distinction between two or more cases of a
>>>> kind of thing.  Although the distinction might be made briefer, that does
>>>> not mean that it would be made better by such a device.
>>>> Jim Bromer
>>>>
>>>> On Mon, Oct 22, 2012 at 8:57 PM, Jim Bromer <[email protected]> wrote:
>>>>
>>>>> A concept may be defined by a word, a group of words, a sentence or a
>>>>> group of sentences (or even a fragment of a word).  A category that such a
>>>>> concept might be said to belong to is also a concept.  So the only
>>>>> distinction between a link (or an edge) and a node of a semantic network is
>>>>> relative to some purpose of relation or categorization (or description).
>>>>>
>>>>> Mike refuses to try to understand what I am saying because he would
>>>>> have to give up his sense of a superior point of view in order to
>>>>> understand it.  Yes, you have a more enlightened viewpoint when it comes
>>>>> to trying to understand ideas that other people are trying to explain.
>>>>> But you resist 'understanding' what I am saying because it does not
>>>>> easily fall into an orderly point system that seems like it is
>>>>> immediately programmable.
>>>>>
>>>>> So you understand the words that I am using, but I think you are simply
>>>>> refusing to understand the implications of those words because it is more
>>>>> unwieldy than your current beliefs.
>>>>> Jim Bromer
>>>>>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
