If we're lucky, PAM.P2 will make a baby step forward.
(Pun intended.)
~PM

Date: Sun, 1 Feb 2015 22:48:06 -0500
Subject: Re: [agi] Multiple Conceptual Level Networks
From: [email protected]
To: [email protected]

An attempt to consciously use coherence (more specifically than just making some 
unqualified reference to weighted reasoning) is a step forward. But I believe 
the problem of replacing old information with better information is only one 
case where interactions between the levels of statements and actions on 
statements are needed. And I believe that even that process cannot take place 
without more direct knowledge of how such interactions can occur. So a 
reaction to some statements should not only instigate highly controlled action 
on those statements; a meta-awareness of the action must also be made available to 
the program, and the program must be able to 'play' around with the processes. 
That is something that I am pretty sure is going to be missing from your program.
An equivalent case can be made that, under the right circumstances, a program 
which has constraints or severe limitations on the play between statements and 
actions on the statements could still attain a higher level of intelligence. 
However, (imo) this is unlikely because there will be such 
great gaps between knowledge statements. Coherence is one way to start to deal 
with the problem, but it is not, in itself, going to be enough. Let's say that 
it was possible to have every logical statement that was needed to fully detail 
the understanding of some subject (along with the principles of learning). Then 
a tightly controlled interaction between the conceptual levels of statements 
and actions might be sufficient. This should illuminate one of the problems 
that I have been trying to get people to think about. With only a sketchy 
collection of those statements you are going to need a lot of juice to jump the 
gaps. This is where 'play' (for example) could serve as a fundamental tool and an 
inspiration for the program. And meta-awareness is a necessity.
Jim Bromer

On Sun, Feb 1, 2015 at 7:29 PM, Piaget Modeler via AGI <[email protected]> wrote:



Coherence is one way for old information to be rejected as well. Belief 
parameters for coherence in PAM.P2 include a set of accepted and 
rejected propositions as well as a positive or negative coherence score. As new 
information is received, it may cause old information to become less 
coherent and be rejected. A coherence propagator continually revises and propagates 
the coherence score for activated propositions. The coherence propagation 
agent in PAM.P2 is based on Paul Thagard's work at the University of Waterloo.
 ~PM
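[Editor's note: the coherence propagation described above follows Thagard's constraint-satisfaction model of coherence. The sketch below is a minimal, hypothetical illustration of that model, not PAM.P2's actual propagator; the function names, parameter values, and unit names are all assumptions.]

```python
# Minimal sketch of Thagard-style coherence propagation (hypothetical code,
# not PAM.P2's). Propositions are units with activations in [-1, 1];
# positive constraints link mutually coherent propositions, negative
# constraints link incoherent ones. Iterating the update settles the
# network: units ending with positive activation are "accepted",
# the rest "rejected".

def propagate(units, pos, neg, clamped=(), steps=200, decay=0.05, weight=0.04):
    """units: dict name -> activation; pos/neg: lists of (a, b) pairs;
    clamped: units (e.g. evidence) whose activation is held fixed."""
    for _ in range(steps):
        # net input to each unit from its constraints
        net = {u: 0.0 for u in units}
        for a, b in pos:
            net[a] += weight * units[b]
            net[b] += weight * units[a]
        for a, b in neg:
            net[a] -= weight * units[b]
            net[b] -= weight * units[a]
        for u in units:
            if u in clamped:
                continue
            a = units[u] * (1 - decay)
            if net[u] > 0:
                a += net[u] * (1 - units[u])   # pull toward max = 1
            else:
                a += net[u] * (units[u] + 1)   # pull toward min = -1
            units[u] = max(-1.0, min(1.0, a))
    return {u: ("accepted" if act > 0 else "rejected")
            for u, act in units.items()}

status = propagate(
    {"E": 1.0, "H1": 0.01, "H2": 0.01},       # E is a clamped evidence unit
    pos=[("E", "H1")], neg=[("H1", "H2")], clamped={"E"})
# H1 coheres with the evidence and settles accepted; H2 contradicts H1
# and settles rejected.
```

This mirrors the behavior PM describes: when new, better-supported propositions arrive, their constraints drag the activation of old propositions down until they flip to rejected.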
From: [email protected]
To: [email protected]
Subject: RE: [agi] Multiple Conceptual Level Networks
Date: Sun, 1 Feb 2015 12:42:06 -0800




So I think what you are referring to is new information overriding old 
information.
This could happen for many reasons, but the essence is that some belief 
parameters (e.g., certainty, arousal, valence, activation, etc.) on the new 
information cause it to be considered before, or as superior to, 
the older information.
In PAM.P2 the basic connection between statements and action is that an 
intention is formed, representing the desirability of bringing about a situation 
(or proposition), and an action selector (or solver) works to find (or create) 
a solution that can achieve the situation (or proposition). 
~PM
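[Editor's note: the intention/action-selector loop above can be sketched as follows. This is an illustrative stand-in, not PAM.P2's actual prototypes: the class names and fields are assumptions, and the selector here is a simple first-match lookup where PAM.P2 would use a solver.]

```python
# Hypothetical sketch of an intention -> action-selector loop.
# An intention names a desired proposition; the selector searches known
# solutions for one whose effect achieves it.

from dataclasses import dataclass, field

@dataclass
class Intention:
    goal: str            # the proposition the system wants to bring about
    desirability: float  # how strongly it wants it

@dataclass
class Solution:
    name: str
    achieves: str        # the proposition this solution brings about
    actions: list = field(default_factory=list)

def select_action(intention, solutions):
    """Return the action list of the first solution achieving the goal,
    or None if no known solution applies (a solver might create one)."""
    for s in solutions:
        if s.achieves == intention.goal:
            return s.actions
    return None

solutions = [Solution("open-door", "door_open",
                      ["grasp_handle", "turn", "pull"])]
plan = select_action(Intention("door_open", 0.9), solutions)
```

A real selector would presumably weigh desirability and the belief parameters PM lists when ranking competing solutions rather than taking the first match.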
Date: Sun, 1 Feb 2015 10:28:18 -0500
Subject: Re: [agi] Multiple Conceptual Level Networks
From: [email protected]
To: [email protected]; [email protected]

So, as best I can understand it, what you are saying is that:
"neural propositions as its knowledge representation"
"Propositions are disconnected from the underlying agents that refer to and 
create them."
"There are intention prototypes and solution prototypes.  Solutions are tried 
by the system, the actions of which are attempted."
So while your program presumably has many conceptual layers and can create 
more, there are constraints on the kinds of interactions that can occur between 
some of the most essential 'conceptual layers' (as I called them). (Perhaps I 
should not call them conceptual layers in the context of agi programs. Maybe I 
should just call them layers or something like that.)
Suppose someone convinces a young, somewhat naïve adult that he should forget 
everything he has ever learned. Of course he can't do that. However, he would 
be able to start to ignore certain principles which he feels were taught 
to him but which he never fully accepted. As he goes on he can teach himself to 
ignore more and more of those principles. For example, as I recognize that an 
idea that I thought was my own was actually instilled by advertising I can 
choose to selectively ignore it. It is my feeling that this kind of example 
shows that interactions between ideas (or propositions) *and their application 
to thoughts* are essential to intelligence. I suspect that General Intelligence 
is impossible if an idea about shaping one's own thinking cannot be applied. 
The direction for this shaping process may require some kind of justification 
but that justification will sometimes require a great deal of thought.  It 
can't only come from some external source of verification. 
There has to be some kind of constraints on the interactions between these 
levels. The program cannot forget everything it knows just because someone 
makes an imperative statement to that effect. So there has to be some kind of 
buffer between the propositional level and the action level. And the argument 
can be made, especially for a logical system, that the agents that act on the 
propositional levels are effectively capable of doing the kind of thing that I 
am talking about. (Or if they are not they can be tweaked so that they are.) 
However, my point here is that the management of something like that will 
introduce new kinds of problems (and situations) that require new kinds of sub 
programs to work on it.
I am going to read the paper and watch the video that you referenced.
Jim Bromer

On Sat, Jan 31, 2015 at 1:27 PM, Piaget Modeler via AGI <[email protected]> 
wrote:





> Date: Sat, 31 Jan 2015 10:45:27 -0500
> Subject: Re: [agi] Multiple Conceptual Level Networks
> From: [email protected]
> To: [email protected]
> 
> I am reading your (Piaget Modeler's) paper, "The Neural Proposition:
> Structures for Cognitive Systems," but I am trying to reread it more
> carefully to better understand it.
> 
> So let me ask you a few questions about your project.
> Is it an AGI application or an AGI Platform?
PAM.P2 is a cognitive architecture ( https://www.academia.edu/9997454/PAM.P2 ) 
that uses neural propositions as its knowledge representation.  The ovals in 
the diagram represent prototypes, instances of which are referenced by the 
depicted agents.
> You know about reification and gerunds. How does your program turn a
> statement into an action?
There are intention prototypes and solution prototypes.  Solutions are tried by 
the system, the actions of which are attempted.  Attempts are sent to a device 
running a psyche application and results are returned as to whether the attempt 
succeeded or failed. 
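[Editor's note: the attempt/result exchange with the psyche application could look something like the sketch below. The thread does not specify the actual protocol, so the message format, field names, and status values are all assumptions.]

```python
# Hypothetical sketch of an attempt/result exchange between the cognitive
# system and a device running a psyche application. JSON is assumed only
# for illustration.

import json

def make_attempt(attempt_id, action):
    """Serialize an attempt message to send to the psyche device."""
    return json.dumps({"id": attempt_id, "action": action})

def handle_result(message):
    """Parse the device's reply into (attempt_id, succeeded)."""
    reply = json.loads(message)
    return reply["id"], reply["status"] == "succeeded"

msg = make_attempt(7, "grasp_handle")
# the psyche device would execute the action and reply, e.g.:
aid, ok = handle_result('{"id": 7, "status": "succeeded"}')
```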
> How does your program prevent a statement like, "Forget everything
> that you know" from becoming an action that causes it to forget
> everything that it knows?
Propositions are disconnected from the underlying agents that refer to and 
create them.  That being said, an agent could run amok if incorrectly 
programmed and delete all the propositions of the system.  So the agents have 
to be carefully programmed.
~PM
> 
> 
> Jim Bromer
> 
> 
> On Thu, Jan 29, 2015 at 12:19 PM, Piaget Modeler via AGI
> <[email protected]> wrote:
> > Do you mean like "Neural Propositions: Structures for Cognitive Systems" ?
> >
> > ~PM
> >
> >> Date: Thu, 29 Jan 2015 06:04:02 -0500
> >> Subject: [agi] Multiple Conceptual Level Networks
> >> From: [email protected]
> >> To: [email protected]
> >
> >>
> >> I came up with a great concept-theory using cross generalizations on
> >> logic so I decided to write about it. As I thought about it I
> >> remembered seeing some introductory text about network theory
> >> somewhere and the first examples that they mentioned used binary
> >> nodes. Some of the examples were effectively about kinds of logical
> >> cross-generalizations. So what happened to my great new theory?
> >> Somehow it fizzled into something that was from some introductory
> >> text about networks. The thing is, I don't think current network
> >> theory is very interesting.
> >>
> >> In order to create more interesting networks you have to have multiple
> >> layers. Not just multiple processing layers but multiple conceptual
> >> layers. But these concept layers should not be associated only by a
> >> simplistic associations (on concept nodes for instance) but by the
> >> potential for nodes on one layer to interact dramatically with other
> >> layers. Of course this can be implemented using contemporary
> >> conventions about nodal networks. So why is the idea of multiple
> >> concept layers important? Because of the potential of the layered
> >> networks to represent cross-categorical relations which might be
> >> needed to solve difficult problems and which might be more susceptible
> >> to effective methods of analysis.
> >>
> >> When Internet traffic is being analyzed, for example, the analysis
> >> occurs on a different conceptual level than the traffic itself. In
> >> this case, there is very limited interaction with the traffic and the
> >> analysis. If the analysis is sent to a web manager then the analytical
> >> function is itself producing some traffic on the same system. The
> >> number of conceptual levels in this example is extremely constricted
> >> (there are 2 levels) and the interaction between the levels is tightly
> >> constrained as well.
> >>
> >> But it is easy to imagine systems where there are many different kinds
> >> of conceptual levels and a lot of different ways interaction can
> >> occur. Can you do this with conventional notions about sub-networks?
> >> Ok, but there are times when you need to free your mind from
> >> conventional thinking.
> >> Jim Bromer
> >>
> >>
> >> -------------------------------------------
> >> AGI
> >> Archives: https://www.listbox.com/member/archive/303/=now
> >> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
> >> Modify Your Subscription: https://www.listbox.com/member/?&;
> >> Powered by Listbox: http://www.listbox.com
> 
> 