Ok, "implemented" is the wrong word, since it's still evolving. "Accounted for" 
is a better choice.

I too have a very healthy respect for the complexity and subtlety of human 
thought. This project grew in large part out of an extreme dissatisfaction I 
feel towards traditional approaches that make too many assumptions and breeze 
past too many important aspects of the problem. I do my best to hold myself 
accountable to the broad scope of the problem by looking at the most difficult, 
awkward examples of human behavior and capabilities.

In language, that means things like puns, idioms, incomplete sentences,
inline corrections, etc. Many of these are not implemented yet, but I have
restricted myself to
design choices that provide a clear path forward to their implementation. When 
I come across new, strange things, I force myself to modify my design until 
they start to make sense. Things can remain unimplemented, but they cannot 
remain unexplained.

In this way, I hope to move towards a solid foundation of understanding that 
encompasses the full spectrum of language and thought, not just a toy system 
that works off of unfounded assumptions and oversimplifications. If my concepts 
of how language and thought work can't explain everything I stumble across, they're
inadequate, and I refuse to leave them unrevised.

I'm learning a lot as I build, and on more than one occasion I've scrapped 
my work and started over because of some subtle flaw that initially went 
unrecognized. The positive side is that each time I do this, my system becomes 
more robust and is easier to implement, the latter mainly due to experience and 
familiarity on those portions not being redesigned from the ground up. The 
latest revision was to properly work in the distinct behavior of determiners 
versus quantifiers, since (taking a mathematical analogy a bit far) determiners 
act more like named constants and quantifiers act more like re-bindable 
variables. I'll get there eventually, but I'm not cutting myself any slack or 
taking any shortcuts.
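To push the constants-versus-variables analogy a little further, here's a toy Python sketch of what I mean (all the class and variable names here are hypothetical illustrations, not my actual implementation):

```python
# Toy illustration of the analogy above: a determiner phrase like
# "the dog" behaves like a named constant (fixed once resolved),
# while a quantifier like "every dog" behaves like a re-bindable
# variable (bound afresh to each candidate). Hypothetical names only.

class Referent:
    """A discourse entity that a phrase can point to."""
    def __init__(self, label):
        self.label = label

class Determiner:
    """Like a named constant: once resolved, "the dog" keeps pointing
    at the same referent for the rest of the discourse."""
    def __init__(self, referent):
        self._referent = referent  # fixed at resolution time
    def resolve(self, _context):
        return self._referent

class Quantifier:
    """Like a re-bindable variable: "every dog" binds in turn to
    each candidate in the current context."""
    def resolve(self, context):
        for candidate in context:
            yield candidate

dogs = [Referent("rex"), Referent("fido")]
the_dog = Determiner(dogs[0])
every_dog = Quantifier()

assert the_dog.resolve(dogs) is dogs[0]        # always the same referent
assert list(every_dog.resolve(dogs)) == dogs   # rebinds across candidates
```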



-- Sent from my Palm Pre
On Oct 23, 2012 8:46 PM, Jim Bromer <[email protected]> wrote: 

Aaron, I did not say that you are missing something. But let's look at 
something that you did say. 
"Either (1) I've already implemented it or (2) I'm just not getting it."
 Since your program is not complete you obviously have not "already 
implemented it."  So the first thing is that you are calling your 
expectation, which is largely built on an expectation of representational 
potential, an implementation. Potential representational range is not the 
same as an implementation.  An implementation must include effective means 
of dealing with the problems that the program will encounter and produce.  
That is what you are not getting.  You are not getting the idea that 
conceptual relativity, which is a catch phrase that I use to stress the 
elusive nature of implementing human-like thought, suggests that the 
complexities of intelligence have to be dealt with from the start.  You 
cannot start with something that will simplify the problem too much because 
that will also severely restrain the efficacy of the program.  In 
other words, if you refuse to deal with the complications of conceptual 
relativity now, your program will not be very interesting until you realize that 
it is not turning out as you hoped it would and decide to deal with 
the complexities of thought.
 Jim Bromer  
 On Tue, Oct 23, 2012 at 8:16 PM, Aaron Hosford 
<[email protected]> wrote:

Is there a paper or document I can read on conceptual relativism? I think what 
I'm lacking is simply a full-bodied description of the idea. Either (1) I've 
already implemented it or (2) I'm just not getting it. A pair of words just 
isn't much to go by when trying to capture a concept that operates at this 
level of abstraction. When you say them, I immediately think of a long list of 
capabilities my system has, but you've rejected most of the ones I've named off.


My system:
  * is able to very flexibly deal with concepts, abstractions, and "truth".
  * is built on a data structure, the semantic network, which is capable of 
modeling any other data structure. (Is there a class of data structures that 
is analogous to the class of Turing-complete languages? I think it's obvious, 
but I've never seen a paper on it. Someone ought to give it a go. I would, but 
it's not a priority for me given my limited schedule.)
  * has a representational scheme within this semantic network that is capable 
of fully capturing the meaning of an arbitrary sentence, verifiably so because 
meanings can be converted back into sentences without loss of information (or 
will be shortly, when I've finished writing the code for it).
  * is fully capable of representing meta-meanings: meanings that correspond 
to statements about other meanings or statements.
  * doesn't define kinds or classes of things by an explicit definition, but 
rather through context, usage, and experience, allowing the meaning of 
concepts to shift over time.
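To make the round-trip claim concrete, here is a deliberately tiny Python sketch of what "converted back into sentences without loss of information" means. Real parsing is of course vastly harder, and these function names are hypothetical illustrations, not my actual code:

```python
# A trivially simplified meaning <-> sentence round trip. Real
# sentences need a semantic network, not a dict; this only
# illustrates the losslessness property being claimed.

def to_meaning(sentence):
    """Parse a toy subject-verb-object sentence into a meaning structure."""
    subj, verb, obj = sentence.rstrip(".").split(" ", 2)
    return {"subject": subj, "verb": verb, "object": obj}

def to_sentence(meaning):
    """Regenerate the sentence from the meaning structure."""
    return f"{meaning['subject']} {meaning['verb']} {meaning['object']}."

s = "Rex chased the ball."
assert to_sentence(to_meaning(s)) == s  # no information lost
```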


I just don't see what's missing. Which of these comes closest to the mark on 
capturing the functionality you are getting at? Reading up on this, the ideas 
of paradigm shifts and reality filters come to mind. My personal motto has for 
some time been, "Expectation skews perception," which fits right in with what I 
see in the literature I have come across on this topic. Given this, I would 
expect the last of the points above, that my system doesn't work off explicit 
definitions but rather through context/usage/experience, captures what it is 
you've been trying to convey. But then again, maybe I'm just filtering my 
perception of reality with my own expectations.



On Tue, Oct 23, 2012 at 6:10 PM, Jim Bromer <[email protected]> wrote:


Aaron, You can't pin conceptual relativism down by providing an example of 
a 'thought' that someone's unimplemented AGI program would be unable 
to consider without resorting to a profoundly imaginary example and abstract 
language.


 I have been saying that (many) old fashioned AI programs would be capable 
of dealing with conceptual relativism if they were given the capability to do 
so.  So I haven't been saying that your AGI model is definitely incapable 
of producing or even dealing with conceptual relativism. I am saying that 
you have to deal with it sooner or later, and sooner is better than later.


 You cannot pin conceptual relativism down with concrete examples because 
once you do, you can come up with a way to "explain" the effect away (because 
you are capable of intelligent thought) if that is your primary 
motivation.


 I cannot understand why this isn't obvious. The first step is to 
deal with conceptual relativism in your own thinking. My goal, for 
some time now, has been to get the core elements of AGI and then start with a 
really simple model that I can then test and develop in a cyclical 
fashion.  So by starting with a system that enables conceptual 
relativism in an extremely simple model I can start working with the 
system sooner. It is a little like trying to make a semantic network (or 
perhaps I should call it a flexible definitive conceptual network) as simple as 
a neural network.  But because it is a definitive network, I can easily 
modify it to make it more sophisticated which is something that is almost 
impossible to do with a neural network.


 Jim Bromer  
 On Tue, Oct 23, 2012 at 11:34 AM, Aaron Hosford 
<[email protected]> wrote:



I'm not asking for a concrete example because I don't believe you. I'm asking 
for it because I'm not sure what you're saying my design should be capable of. 
I'm trying to narrow down the meaning of your words to what you're 
actually trying to convey. The examples you've given didn't look like 
examples, they looked like summaries of something I need more details in order 
to follow. It's not that you're talking about abstractions. It's that you're 
using abstract language to talk about them. I can agree with the general 
case of what you're saying without seeing how my system specifically fails 
to implement it. You have called it conceptual relativism, without saying what 
that actually means. There are many ways to make concepts relative, and some of 
them are nothing like others. Which one are you talking about? And why does my 
system need it? Give me an example of a thought my system cannot have 
without this sort of functionality. That's the method by which I move my design 
forward. I find something my design can't do and that a human being can, and I 
modify my design until it includes that capability. I'm just struggling to see 
what it is you're saying my system lacks.




 
I'm not starting with language because it's easy. I'm starting with it because 
I think it connects most directly with those higher cognitive faculties. And 
right now, I'm not implementing reasoning in any way. I'm simply implementing 
the format in which the knowledge is to be represented during reasoning. I've 
got to have this foundation before I can do anything with reasoning, not 
because I'm avoiding the task, but because this is a necessary subgoal of the 
task. When we think, we are manipulating meaning based on "rules" (more like 
hints, heuristics, and tendencies, but "rule" is convenient to say) we have 
previously observed in the behaviors of meanings extracted from perception. A 
thought is thus the generation of a new meaning from existing ones. How 
can I implement thinking of *any* sort, conceptually relative or not, without 
having an underlying data structure to represent meaning?
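As a toy Python sketch of what I mean by generating new meanings from existing ones via observed "rules" (everything here, from the triple representation to the function name, is a hypothetical illustration, not my actual design):

```python
# Meanings reduced to triples; a "rule" (really a heuristic) derives
# new candidate meanings from the ones already held. Illustrative only.

known = {
    ("rex", "is_a", "dog"),
    ("dog", "is_a", "animal"),
}

def chain_is_a(facts):
    """Heuristic: chain is_a links to propose new candidate meanings."""
    derived = set()
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == r2 == "is_a" and b == c:
                derived.add((a, "is_a", d))
    return derived - facts  # only genuinely new "thoughts"

# A "thought": a new meaning generated from existing ones.
assert chain_is_a(known) == {("rex", "is_a", "animal")}
```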




 
Yes, a restricted implementation of meaning will lead to restricted capability 
in the reasoning carried out on it. If a thought can't be represented, it can't 
be generated. But my system can represent thoughts about thoughts, thoughts 
about meaning. I haven't *yet* implemented the act of thinking those thoughts, 
but the capability is there because I knew it would be necessary later. There 
is nothing that a human being can't think about, once we're aware of it. That's 
what I'm trying to build. If my system can represent thoughts in the general 
case, what precisely is it missing other than the mechanisms to generate new 
thoughts from old ones, which is a recognized need already? I'm of the opinion 
that anything a human being can think can be expressed in language (other than 
the nature of raw qualia, which doesn't need to be communicated anyway, since 
the other person's version will do just as well), even if it takes years 
to find the right words. If there's something I've missed, I need to know 
what that is. I'm a little frustrated, because you're telling me I've missed 
something, but you won't come out and say exactly what that is.




 


 
On Tue, Oct 23, 2012 at 8:04 AM, Jim Bromer <[email protected]> wrote:


On Mon, Oct 22, 2012 at 9:28 PM, [email protected] 
<[email protected]> wrote:






So links can act as nodes, basically, as in a generalized hypergraph? That's 
also built into my system. The Link class is a subclass of the Node class. 
Nothing particularly difficult or unpleasant there.
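For what it's worth, the links-as-nodes arrangement is simple to sketch in Python. The Link/Node class names come from my description above, but everything else here is a made-up illustration:

```python
# Because Link subclasses Node, a link can itself be the endpoint of
# another link -- a generalized hypergraph, which is what makes
# statements about statements representable.

class Node:
    def __init__(self, label):
        self.label = label

class Link(Node):
    """A Link is a Node, so links can link to links."""
    def __init__(self, label, source, target):
        super().__init__(label)
        self.source = source
        self.target = target

sky_is_blue = Link("is_colored", Node("sky"), Node("blue"))
# A meta-statement: the first link is itself an endpoint here.
belief = Link("believes", Node("alice"), sky_is_blue)

assert isinstance(sky_is_blue, Node)  # links are nodes
assert belief.target is sky_is_blue   # a link about a link
```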





A story can define a distinction between kinds in my system, but it would do so 
implicitly, through context, rather than explicitly through a formalized 
mechanism.

While neither the links-as-nodes nor the story-as-concept is specifically used 
or accounted for in my design, it is easily extensible in both of these 
directions. What I'm looking for is a particular use case, a reason for paying 
special attention to this sort of functionality, as opposed to merely including 
the capability should it later be found to need that special attention.





 
-----------------------------------------
 
What I am saying - to you - is that I think many guys who I have talked to seem 
to have the sense that the kinds of things that I am talking about are high 
level effects, just as you and Piaget Modeller (I can never remember his 
name) did.  So then, their low level implementation would have the 
*potential* to represent these issues of conceptual relativism once their 
programs got to the point where they understood basic sentences. It 
is as if they are so focused on (what they consider to be the) low 
level implementation issues that they then imagine that once their programs are 
able to deal insightfully with simple expressions (or observations and 
interactions) that the rest will be easy.  What I am saying is that you 
have to work these capabilities into your basic programming because these 
are the essence of intelligence.  It is this genuinely 
rational-creative talent that drives intelligence.  These 
skills are not (just) high level capabilities; they are the essence 
of what it is that we are talking about when we talk about 
intelligence.  So if you are going to create a program that can learn 
to use natural language then the program must implement these 
skills from the start (even though it might take some time for the 
program to learn something that would demonstrate how these can be 
used effectively).




 
It is interesting that you are, like Mike, demanding a concrete 
example. My simply telling you that a program that is to be able to learn 
to work with a human language has to be able to develop skills to develop 
abstractions, generalizations and categorical definitions from stories 
(story-like conversation) isn't enough to convince you that these so-called 
higher level capabilities should be implemented at a low level of 
implementation.  Stories (and examples) occur at different levels of 
abstraction.  These levels are relative, there is no such thing as a 
purely concrete example or a pure abstraction.  So the truth is that I 
have already given you quite a few examples, it is just that they have been 
expressed as abstractions.




 
Saying that your model would be potentially capable of representing the kinds 
of relations that I am talking about is somewhat superficial. You are 
saying that the superficial aspects of representation would be powerful enough 
to handle these kinds of effects as if you were not fully realizing that your 
programming has to be explicitly written to actually implement these kinds of 
effects.




 
By implementing these ideas at a lower level of the design, 
what happens?  The program suddenly becomes quite unwieldy.  
That means that your program has to deal with all the problems of creative 
thinking from the start.  Ok, but so what.  That is exactly where you 
want to be.  Jump in and get to work.  Stop trying to focus on what 
you once conceived as the starting point for developing an AGI project and 
start working on the central currents of reasoning.  You think that by 
starting with something that can be broken into simpler pieces you can 
locate the ideal starting point, but you haven't.  You broke it up in the 
wrong way. The right way is to examine, not glimpse, but examine the 
central issue of rational creativity and take a look at the fact that it can be 
and should be implemented at a "low level".




 
You cannot pick out the parts of low level implementation in such a way as to 
avoid the complications of genuine AGI.  A dedicated AGI programmer 
is going to need to deal with them eventually.
 
Jim Bromer
 
 
 
 
On Oct 22, 2012 8:04 PM, Jim Bromer <[email protected]> wrote: 


A relatively concrete categorical definition of a concept might be a very short 
"story" denoting the distinction between two or more cases of a kind of 
thing.  Although the distinction might be made briefer, that does not mean 
that it would be made better by such a device.




Jim Bromer


On Mon, Oct 22, 2012 at 8:57 PM, Jim Bromer <[email protected]> wrote:


A concept may be defined by a word, a group of words, a sentence or a group of 
sentences (or even a fragment of a word).  A category that such a concept 
might be said to belong to is also a concept.  So the only distinction 
between a link (or an edge) and a node of a semantic network is relative to 
some purpose of relation or categorization (or description).




 
Mike refuses to try to understand what I am saying because he would have to 
give up his sense of a superior point of view in order to understand 
it.  Yes you have a more enlightened view point when it comes to 
trying to understand ideas that other people are trying to explain.  But 
you resist 'understanding' what I am saying because it does not easily 
fall into an orderly point system that seems like it is immediately 
programmable.




 
So you understand the words that I am using but I think you are simply refusing 
to understand the implications of those words because it is more unwieldy than 
your current beliefs.
Jim Bromer
AGI | Archives | Modify Your Subscription
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
