I'm not asking for a concrete example because I don't believe you. I'm
asking for it because I'm not sure what you're saying my design should be
capable of. I'm trying to narrow down the meaning of your words to what
you're actually trying to convey. The examples you've given didn't look
like examples; they looked like summaries of something I'd need more
details to follow. It's not that you're talking about abstractions. It's
that you're using abstract language to talk about them. I can agree on the
general case of what you're saying without seeing how my system
specifically fails to implement it. You have called it conceptual
relativism, without saying what that actually means. There are many ways to
make concepts relative, and some of them are nothing like others. Which one
are you talking about? And why does my system need it? Give me an example
of a thought my system cannot have without this sort of functionality.
That's the method by which I move my design forward. I find something my
design can't do and that a human being can, and I modify my design until it
includes that capability. I'm just struggling to see what it is you're
saying my system lacks.

I'm not starting with language because it's easy. I'm starting with it
because I think it connects most directly with those higher cognitive
faculties. And right now, I'm not implementing reasoning in any way. I'm
simply implementing the format in which the knowledge is to be represented
during reasoning. I've got to have this foundation before I can do anything
with reasoning, not because I'm avoiding the task, but because this is a
necessary subgoal of the task. When we think, we are manipulating meaning
based on "rules" (more like hints, heuristics, and tendencies, but "rule"
is convenient to say) we have previously observed in the behaviors of
meanings extracted from perception. A thought is thus the generation of a
new meaning from existing ones. How can I implement thinking of *any* sort,
conceptually relative or not, without having an underlying data structure
to represent meaning?
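
To pin that down, here is a minimal sketch of the kind of structure I mean.
The names and classes are purely illustrative, not my actual code: a meaning
is built from existing meanings, and a "thought" is just the generation of a
new meaning from old ones by applying some previously observed rule.

```python
# Illustrative sketch only -- names and structure are hypothetical,
# not the actual design discussed in this thread.

class Meaning:
    def __init__(self, label, parts=()):
        self.label = label          # surface handle for this meaning
        self.parts = tuple(parts)   # the existing meanings it was built from

def think(rule, *operands):
    """Generate a new Meaning from existing ones by applying an observed
    'rule' (really a heuristic or tendency) to their labels."""
    return Meaning(rule(*[m.label for m in operands]), parts=operands)

dog = Meaning("dog")
barks = Meaning("barks")
thought = think(lambda s, p: f"{s} {p}", dog, barks)  # label: "dog barks"
```

The point is only that the generation step presupposes the representation:
think() cannot exist until something like Meaning does.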

Yes, a restricted implementation of meaning will lead to restricted
capability in the reasoning carried out on it. If a thought can't be
represented, it can't be generated. But my system can represent thoughts
about thoughts, thoughts about meaning. I haven't *yet* implemented the act
of thinking those thoughts, but the capability is there because I knew it
would be necessary later. There is nothing that a human being can't think
about, once we're aware of it. That's what I'm trying to build. If my
system can represent thoughts in the general case, what precisely is it
missing other than the mechanisms to generate new thoughts from old ones,
which is a recognized need already? I'm of the opinion that anything a
human being can think can be expressed in language (other than the nature
of raw qualia, which doesn't need to be communicated anyway, since the
other person's version will do just as well), even if it takes years
to find the right words. If there's something I've missed, I need to know
what that is. I'm a little frustrated, because you're telling me I've
missed something, but you won't come out and say exactly what that is.
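
To illustrate what I mean by thoughts about thoughts, here is a schematic
toy. Only the Link-as-a-subclass-of-Node arrangement is something I have
actually described; everything else is hypothetical. Because a link is
itself a node, a link can appear as the endpoint of another link, so the
system can point a "thought" at another "thought".

```python
# Toy schematic of links-as-nodes (a generalized hypergraph).
# Only the Link-subclasses-Node arrangement comes from the discussion;
# the constructors and field names here are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name

class Link(Node):
    """A link is itself a node, so links can themselves be linked about."""
    def __init__(self, name, *endpoints):
        super().__init__(name)
        self.endpoints = endpoints

sky = Node("sky")
blue = Node("blue")
fact = Link("is", sky, blue)                 # "the sky is blue"
belief = Link("believes", Node("I"), fact)   # a thought about that thought
```

Nothing in this sketch stops the nesting from going arbitrarily deep, which
is exactly the capability I claim is already present in the representation.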




On Tue, Oct 23, 2012 at 8:04 AM, Jim Bromer <[email protected]> wrote:

> On Mon, Oct 22, 2012 at 9:28 PM, [email protected] 
> <[email protected]> wrote:
>
>> So links can act as nodes, basically, as in a generalized hypergraph?
>> That's also built into my system. The Link class is a subclass of the Node
>> class. Nothing particularly difficult or unpleasant there.
>>
>> A story can define a distinction between kinds in my system, but it would
>> do so implicitly, through context, rather than explicitly through a
>> formalized mechanism.
>>
>> While neither the links-as-nodes nor the story-as-concept is specifically
>> used or accounted for in my design, it is easily extensible in both of
>> these directions. What I'm looking for is a particular use case, a reason
>> for paying special attention to this sort of functionality, as opposed to
>> merely including the capability should it later be found to need that
>> special attention.
>>
>>
>>
> -----------------------------------------
>
> What I am saying - to you - is that I think many guys who I have talked to
> seem to have the sense that the kinds of things that I am talking about are
> high level effects, just as you and Piaget Modeller (I can never remember
> his name) did.  So then, their low level implementation would have the
> *potential* to represent these issues of conceptual relativism once their
> programs got to the point where they understood basic sentences. It is as
> if they are so focused on (what they consider to be the) low level
> implementation issues that they then imagine that once their programs are
> able to deal insightfully with simple expressions (or observations and
> interactions) that the rest will be easy.  What I am saying is that you
> have to work these capabilities into your basic programming because these
> are the essence of intelligence.  It is this genuinely rational-creative
> talent which is what drives intelligence.  These skills are not (just) high
> level capabilities, they are the essence of what it is that we are
> talking about when we talk about intelligence.  So if you are going to
> create a program that can learn to use natural language then the
> program must implement these skills from the start (even though it might
> take some time for the program to learn something that would demonstrate
> how these can be used effectively.)
>
> It is interesting that you are, like Mike, demanding a concrete
> example. My simply telling you that a program that is to be able to learn
> to work with a human language has to be able to develop skills to develop
> abstractions, generalizations and categorical definitions from stories
> (story-like conversation) isn't enough to convince you that these so called
> higher level capabilities should be implemented at a low level of
> implementation.  Stories (and examples) occur at different levels of
> abstraction.  These levels are relative; there is no such thing as a purely
> concrete example or a pure abstraction.  So the truth is that I have
> already given you quite a few examples, it is just that they have been
> expressed as abstractions.
>
> Saying that your model would be potentially capable of representing the
> kinds of relations that I am talking about is somewhat superficial. You are
> saying that the superficial aspects of representation would be powerful
> enough to handle these kinds of effects as if you were not fully realizing
> that your programming has to be explicitly written to actually implement
> these kinds of effects.
>
> By implementing these ideas at a lower level of the design what happens?
> The program suddenly becomes quite unwieldy.  That means that your program
> has to deal with all the problems of creative thinking from the start.  Ok,
> but so what.  That is exactly where you want to be.  Jump in and get to
> work.  Stop trying to focus on what you once conceived as the starting
> point for developing an AGI project and start working on the central
> currents of reasoning.  You think that by starting with something that can
> be broken into simpler pieces you can locate the ideal starting point,
> but you haven't.  You broke it up in the wrong way.  The right way is to
> examine, not glimpse, but examine the central issue of rational creativity
> and take a look at the fact that it can be and should be implemented at a
> "low level".
>
> You cannot pick out the parts of low level implementation in such a way as
> to avoid the complications of genuine AGI.  A dedicated AGI programmer is
> going to need to deal with them eventually.
>
> Jim Bromer
>
>> ------------------------------
>>  On Oct 22, 2012 8:04 PM, Jim Bromer <[email protected]> wrote:
>>
>> A relatively concrete categorical definition of a concept might be a very
>> short "story" denoting the distinction between two or more cases of a kind
>> of thing.  Although the distinction might be made briefer, that does not
>> mean that it would be made better by such a device.
>> Jim Bromer
>>
>> On Mon, Oct 22, 2012 at 8:57 PM, Jim Bromer <[email protected]> wrote:
>>
>>> A concept may be defined by a word, a group of words, a sentence or a
>>> group of sentences (or even a fragment of a word).  A category that such a
>>> concept might be said to belong to is also a concept.  So the only
>>> distinction between a link (or an edge) and a node of a semantic network is
>>> relative to some purpose of relation or categorization (or description).
>>>
>>> Mike refuses to try to understand what I am saying because he would have
>>> to give up his sense of a superior point of view in order to understand
>>> it.  Yes, you have a more enlightened viewpoint when it comes to trying to
>>> understand ideas that other people are trying to explain.  But you resist
>>> 'understanding' what I am saying because it does not easily fall into an
>>> orderly point system that seems like it is immediately programmable.
>>>
>>> So you understand the words that I am using but I think you are simply
>>> refusing to understand the implications of those words because it is more
>>> unwieldy than your current beliefs.
>>> Jim Bromer
>>>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
