However, science is also a form of competition between agents (humans
being one type of agent), the winner being the most cited.
 
Let us say that your type of intelligence becomes prevalent: it would
become very easy to predict what this type of intelligence would find
interesting (just feed it all the research that it is commonly fed,
and then test it). People would then tailor their own research to be
interesting to this type of system (regardless of whether it was
innovative or groundbreaking). It would stultify research.
 
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." -- Richard P. Feynman
 
There is competition, though, in the sense of getting the most market share.  In this sense my goal is to build a better AGI in terms of performance.
 
You can do what you wish. I'm going to study softwiring now.
 
I totally agree that softwiring is a good idea.  My point is to avoid re-inventing the wheel and to make the first AGI design as simple as possible.

This is only the case if we have words that are the same as the
concept that has emerged. In science there is a large amount of
creation of new concepts. What happens if, in studying astronomical
data, the system comes across a new type of star that varies its
colour slightly, and the AGI decides to call it a "chromar"? The sort
of system you are describing doesn't seem able to do this.
 
I agree that new concepts can be discovered from unsupervised learning, and indeed this ability should be built into the first AGI.  What I suggest is that the rules of inference can be hardwired, for efficiency's sake.
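To make that concrete, here is a rough Python sketch of what such concept discovery might look like (the star features, prototypes and threshold are purely made up for illustration): an observation that doesn't match any known concept gets a new name, like your "chromar".

known_prototypes = {
    "red_giant":   (0.90, 0.10),   # (colour index, colour variability) -- made-up features
    "white_dwarf": (0.10, 0.05),
}

def nearest_known(features, threshold=0.3):
    # return the closest known concept, or None if nothing is close enough
    best_name, best_dist = None, float("inf")
    for name, proto in known_prototypes.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, proto)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

observation = (0.55, 0.40)           # a star whose colour varies slightly
concept = nearest_known(observation)
if concept is None:
    known_prototypes["chromar"] = observation   # coin a new concept
    print("new concept created: chromar")
else:
    print("classified as", concept)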

I can see that a lot of learning will be supervised. But other types
will have to be unsupervised if we want it to discover new things.
 
Actually I agree with this.  The AGI should be able to discover new concepts such as numbers, "greater than", "addition", etc.

Not directly, no. But then I am suggesting a layered approach, with
supervised learning doing most of the knowledge maintenance. I am also
interested in procedural learning, hence the difference in emphasis.
 
Well, my opinion on procedural learning is still undecided.  If, through procedural learning, we could build an AGI that acts and learns things autonomously, this may end up being faster than trying to build an AGI capable only of knowledge maintenance (where we have to teach it everything explicitly).
 
Can you explain what you mean by the "layered" approach?  In my approach I think there should be a sensory layer at the bottom, followed by symbolic layers of increasing levels of abstraction.
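Just to illustrate what I mean by layers of increasing abstraction, here is a toy Python sketch (the layer functions are placeholders, not an actual design):

def sensory_layer(raw_signal):
    # bottom layer: turn raw values into simple features
    return {"brightness": sum(raw_signal) / len(raw_signal)}

def symbolic_layer_1(features):
    # first symbolic layer: map features onto low-level symbols
    return "bright" if features["brightness"] > 0.5 else "dim"

def symbolic_layer_2(symbol):
    # higher symbolic layer: a more abstract judgement
    return "possible light source" if symbol == "bright" else "background"

def perceive(raw_signal):
    # data flows upward, gaining abstraction at each layer
    return symbolic_layer_2(symbolic_layer_1(sensory_layer(raw_signal)))

print(perceive([0.9, 0.8, 0.7]))     # -> possible light source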

It is more than pattern recognition, because we also take into
consideration information from other people about whom to trust. For
example, if Bob, someone you trust, says "Trust Mary", you will
probably put greater store by what Mary tells you. Or in a scientific
setting, an author that you trust citing another, unknown author will
raise the unknown author in your opinion.
 
Recognizing trustworthy people is a pattern recognition process in the broadest sense.  Your case points out the need for more knowledge as the input to that recognition process.
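As a very rough sketch of how such a recognition process might use that extra knowledge, here is a toy trust-propagation example in Python (the damping factor and initial trust values are assumptions, not a worked-out design):

trust = {"Bob": 0.9, "Mary": 0.2}    # prior trust levels in [0, 1] -- assumed values
DAMPING = 0.5                        # how strongly an endorsement transfers trust

def endorse(endorser, endorsee):
    # "X says: trust Y" -- raise trust in Y, weighted by how much we trust X
    weight = DAMPING * trust.get(endorser, 0.0)
    current = trust.get(endorsee, 0.5)
    trust[endorsee] = current + weight * (1.0 - current)

endorse("Bob", "Mary")               # Bob says "Trust Mary"
print(trust["Mary"])                 # 0.2 -> 0.56, scaled by our trust in Bob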

The opposite command "Don't trust Mary" is even more complex if you
already trust Mary. How do you determine whether to trust Mary or not?

A naive pattern recognition approach is liable to exploitation of the
type I suggested in the no free lunch area.
 
I don't want to get too sophisticated at this point.  I think the first job is to build an AGI that's capable of human-like intelligence.  It's not a big problem if it's not too smart at first.

The methods of inference are fine as long as everything is translated
properly to the method of inference you are using. It is this
translation that is always in need of being changed and updated. I
tend to look at it all as a package that needs changing, because
inference is so dependent upon having the correct input data in the
correct representation.
 
For example, upon seeing 100 apples which are either red or green, the AGI concludes that apples come in only 2 varieties.  This is inductive inference.  Likewise, we have deduction and abduction.  Can you cite an example of inference where none of these rules apply?  My conjecture is that these rules are sufficient for intelligence.
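In code, the apple example might look something like this (the counts are made up; the point is just the inductive generalisation step):

observations = ["red"] * 60 + ["green"] * 40    # 100 apples, each red or green (made-up sample)

observed_varieties = sorted(set(observations))
# inductive step: generalise from the sample to the whole class of apples
print("apples come in", len(observed_varieties), "varieties:", observed_varieties)
# the conclusion is defeasible: a single yellow apple would force a revision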
 
[ By the way, I've used "dumb blonde" as an example of inductive inference but I later realized that it may offend some people.  I apologize for that.  In real life I don't know many blond people, having lived in China most of my life.  The only exception was a foreign kid in high school who happened to be a bully and he gave me a negative (and most likely wrong) first impression. ]
 
Why won't this make our systems likely to forget things like eclipses
and infrequent comets?

I do agree with "use it or lose it", just not on the scale of
individual data but on the scale of competencies. How each competency
deals with its own data is up to it, though...
 
Eclipses may be remembered because they are unusual (i.e. saliency, degree of surprise, information content).  Perhaps the easiest things to forget are those that occur repeatedly without variation (i.e. have low information content).  This can be computationally measured.
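For instance, one simple computational measure is surprisal, -log2 P(event), under the system's current frequency estimates (the counts below are purely illustrative):

import math

event_counts = {"sunrise": 10000, "rain": 300, "eclipse": 1}   # illustrative counts
total = sum(event_counts.values())

def surprisal(event):
    # -log2 P(event): high for rare events (eclipses), low for routine ones
    return -math.log2(event_counts[event] / total)

for e in event_counts:
    print(e, round(surprisal(e), 2))
# routine, low-surprisal events are the safest to forget or compress;
# rare, high-surprisal events like eclipses are worth retaining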
 
yky

