[agi] MindForth 15.JAN.2008

2008-01-16 Thread A. T. Murray
Mind.Forth Programming Journal (MFPJ) Tues.15.JAN.2008

Yesterday, on 14 January 2008, the basic scaffolding for 
the Moving Wave Algorithm of artificial intelligence 
was installed in Mind.Forth and released on the Web. 
Now it is time to clean up the code a little and to 
deal with some stray activations that interfere with 
the proper generation of meandering streams of thought. 

First, in psiDamp, we are re-introducing the single call 
to psiDecay, so that post-thought "lopsi" concepts 
will gradually lose their activation over time. 
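
A rough Forth sketch of that interplay (the array name, its size, 
and the residuum value of twelve are invented here for illustration 
and are not taken from the actual Mind.Forth source):

\ Hypothetical sketch: psiDamp settles one concept at a low residuum,
\ and its single call to psiDecay trims every concept a little further
\ on each pass, so post-thought "lopsi" concepts fade over time.
64 CONSTANT maxpsi                 \ assumed number of concept slots
CREATE act maxpsi CELLS ALLOT      \ one activation cell per concept
act maxpsi CELLS ERASE             \ start all activations at zero

: psiDecay ( -- )                  \ reduce all conceptual activations
  maxpsi 0 DO
    act I CELLS + DUP @ 1- 0 MAX SWAP !
  LOOP ;

: psiDamp ( psi -- )               \ damp one concept, then decay all
  CELLS act +  12 SWAP !           \ 12 = assumed residuum level
  psiDecay ;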

We need to get rid of the "newpsi" and "prequel" and 
"psicrest" variables, because with "lopsi" and "hipsi" 
we were able to get the job done in exemplary fashion. 

Having eliminated or commented out the obsolete variables, 
we are now trying to debug the stray activations. When 
we enter a known word like "kids" or "robots" and 
press [RETURN], we get a line of output such as the following. 

Robot:  ROBOTS  WHAT DO ROBOTS DO

The AI speaks the word "ROBOTS" because it is starting 
an SVO sentence with "ROBOTS" as the subject. At first, 
the activation of ROBOTS sends a "spike" of twenty (20) 
to the verb NEED -- which has no subconscious activation 
because it exists in enBoot and not as a recent thought. 

ROBOTS #39 w. act 48 at I = 186 sending spike 20 to 
seq #74 NEED at act 0 yields 20 and zone = 181
 20 (lim = 63) for t=183 NEED engram; spike = 20 R
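
For clarity, the arithmetic in that display can be sketched as a 
tiny Forth word (the name sendSpike is our own, and treating the 
"lim = 63" figure as a ceiling applied with MIN is an assumption):

\ Hypothetical sketch of the spike arithmetic shown above: the seq
\ concept's old activation plus the incoming spike, clipped at 63.
: sendSpike ( old-act spike -- new-act )
  +  63 MIN ;

0 20 sendSpike .                   \ prints 20, as in the NEED engram line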

The enBoot verb "NEED" gets rejected with a message-line. 

  verbPhr: detour because verb-activation is only 12

We see from the following diagnostic output that 
the Audition module has been calling psiDamp to de-activate 
the ROBOTS concept after hearing each individual 
letter of the word ROBOTS. 

R
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
O
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
B
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
O
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
T
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
S
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.
psiDamp called for urpsi = 39  by module ID #104 Audition
  psiDecay called to reduce all conceptual activations.

Well, isn't that result weird? By briefly changing the 
"module ID #" above to "42" for external input and to "35" 
for internal flow, we discovered multiple psiDamp calls 
during the internal reentry of each word being thought. 

R
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.
O
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.
B
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.
O
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.
T
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.
S
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.
psiDamp called for urpsi = 39  by module ID #35
  psiDecay called to reduce all conceptual activations.

The concept of ROBOTS keeps being set to the same "residuum" 
by psiDamp, but the NEED concept keeps getting psi-decayed 
until its activation drops too low for validation as a good 
verb to go with the word ROBOTS as a subject. 
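
A bare-bones guess at the kind of test behind that "detour" message 
(the variable names and the threshold value here are assumptions, 
not taken from the actual verbPhr code):

\ Hypothetical sketch of a verb-selection threshold test.
VARIABLE verbact                   \ activation of the candidate verb
13 CONSTANT verbmin                \ assumed minimum acceptable level

: goodVerb? ( -- flag )            \ true if the verb is still usable
  verbact @ verbmin < IF
    ." verbPhr: detour because verb-activation is only " verbact @ . CR
    FALSE
  ELSE  TRUE  THEN ;

12 verbact !  goodVerb? .          \ prints the detour line, then 0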

We vaguely suspect that the reentry of each character 
in ROBOTS is being treated as if the character were a 
whole word by itself, so that the "trough" trigger 
code gets activated not merely once, but many times. 

Aha! During reentry, the SPEECH module is setting 
"pov" to "35" and calling AUDITION for each character 
being "pronounced" by the SPEECH module. Therefore 
the Audition "trough" trigger is being set to one (1) 
for each and every character being reentered from 
the SPEECH mind-module. There should be some easy 
fix for this bug, such as perhaps creating a special 
flag to indicate that reentry is in progress. However, 
at this point we would like to remark that, after the 
extremely difficult lopsi/hipsi coding of yesterday, 
we may finally be in the close-to-True-AI phase 
where the major bugs have been solved and we are 
only clearing out minor bugs -- which nevertheless 
prevent the AI from functioning flawlessly as True AI. 
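
Returning to the bug itself, here is a minimal sketch of the 
reentry-flag idea mentioned above (the names reentry, troughCheck 
and speakWord are our own invention for illustration, not the actual 
Mind.Forth identifiers): SPEECH would raise the flag before 
"pronouncing" a word and lower it afterwards, and the Audition 
trough code would skip its psiDamp call whenever the flag is up.

\ Hypothetical sketch of a reentry guard for the Audition "trough" code.
VARIABLE reentry   FALSE reentry ! \ assumed flag: true during reentry
: psiDamp ( psi -- ) DROP ;        \ stub standing in for the real psiDamp

: troughCheck ( psi -- )           \ stand-in for Audition's end-of-word code
  reentry @ IF
    DROP                           \ reentered character: skip the psiDamp
  ELSE
    psiDamp                        \ genuine external end of word: damp it
  THEN ;

: speakWord ( psi -- )             \ stand-in for reentrant pronunciation
  TRUE reentry !                   \ raise the flag before reentry begins
  troughCheck                      \ Audition sees the reentered word
  FALSE reentry ! ;                \ lower the flag when the word is done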

We had better check the table of variables and the 
Jav

Re: [agi] AGI and Deity

2008-01-16 Thread Stan Nilsen

James,
Your comments are appreciated.
A few comments below.
Stan


James Ratcliff wrote:
Your train of reasoning is lacking somewhat in many areas, and does not 
directly point to your main assertion.
Thanks for the feedback.  As I follow other discussions and read the 
papers they refer to, I realize that my writings are lacking.  Perhaps 
they are more blog-like than scientific.




The problem of calculating values of certain states is a difficult one, 
and one that a good AGI MUST be able to do, using facts of the world, and 
subjective beliefs and measures as well.
I'm not sure I get the MUST part.  Is this for troubleshooting purposes 
or for trust issues? Or is it required for "steering" the contemplation 
or attention of the machine?


  Whether healthcare or education spending is most beneficial must be 
calculated, and compared against each other, based on facts, beliefs, 
past data and statistics, and trial and error.
  And these subjective beliefs are ever changing and cyclical.  A 
better example would be a limited AGI whose job was to balance the 
national budget; its job would be to choose the best projects to spend 
money on.
  Maximizing Benefit Units (BU) here as a measure of 'worth' of each 
project is required.
  One intelligence (human) may be overwhelmed with the sheer amount of 
data and statistics to come to the best decision.  An AGI with 
subjective beliefs about the benefit of each could use potentially 
more of the data to come to a more maximized solution.


It is the "future scenarios" that are often the most compelling 
justification or "evidence" for value of something, and in my opinion 
the most unreliable. Whether it is man or machine giving his case, there 
will be speculation involved in the common sense domain.


Will the scenario be "You say this... now prove it. If you can't prove 
it don't use that in the justification..."  Very limiting.




On your other note about any explanation being too long or too 
complicated to understand... Any decision must be able to be 
explained.  It can be done at different levels, and expanded as much as 
the AGI is told to do so, but there should be NO decisions where you ask 
the machine, Why do you decide X? and the answer is nothing, or 'I don't 
know'. 


If the architecture of the machine is "flow" based, that is, the prior 
events helped determine current events, then the burden of explaining 
would overwhelm the system.  Even if only logic based, as you pointed 
out the "values" will be dynamic and to explain one would need to keep a 
record of the values that went into the decision process - a snapshot of 
the "world" as it was at the time.


What if the system attempted to explain and finally concluded "if I were 
making the decision right now, it would be different."  We wouldn't 
consider it especially brilliant since we hear it all the time.



Any machine we create that has answers without the reasoning is very scary.


and maybe more than scary if it is optimized to offer reasoning that 
people will buy, especially the line "trust me."




James Ratcliff



Stan Nilsen <[EMAIL PROTECTED]> wrote:

Greetings Samantha,

I'll not bother with detailed explanations since they are easily
dismissed with a hand wave and categorization as irrelevant.

For anyone who might be interested in the question of:
Why wouldn't a super intelligence be better able to explain the aspects
of reality? (assuming the point is providing explanation for choices.)
I've placed an example case online at

http://www.footnotestrongai.com/examples/bebillg.html

It's an "exploration" based on becoming Bill Gates (at least having
control over his money) and how a supercomputer might offer
"explanations" given the situation. Pretty painless, easy read.

I find the values-based nature of our world highly relevant to the
concept of an emerging "super brain" that will make super decisions.

Stan Nilsen


Samantha Atkins wrote:
 >
 > On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:
 >
 >> Samantha Atkins wrote:
 >>>
 >>
 >>> In what way? The limits of human probability computation to form
 >>> accurate opinions are rather well documented. Why wouldn't a mind
 >>> that could compute millions of times more quickly and with far
 >>> greater accuracy be able to form much more complex models that were
 >>> far better at predicting future events and explaining those aspects
 >>> of reality which are its inputs? Again we need to get beyond the
 >>> [likely religion instilled] notion that only "absolute knowledge" is
 >>> real (or "super") knowledge.
 >>
 >> Allow me to address what I think the questions are (I'll paraphrase):
 >>
 >> Q1. in what way are we going to be "short" of super intelligence?
 >>
 >> resp: The simple answer is that the most intelligent of future
 >> intelligences will