Re: [agi] Building a machine that can learn from experience

2008-12-20 Thread Charles Hixson

Ben Goertzel wrote:


Hi,



Because some folks find that they are not subjectively
sufficient to explain everything they subjectively experience...

That would be more convincing if such people were to show evidence
that they understand what algorithmic processes are and can do.
 I'm almost tempted to class such verbalizations as meaningless
noise, but that's probably too strong a reaction.



Push comes to shove, I'd have to say I'm one of those people.
But you aren't one who asserts that that *IS* the right answer.  Big 
difference.  (For that matter, I have a suspicion that there are 
non-algorithmic aspects to consciousness.  But I also suspect that they 
are implementation details.)

...



ben g




Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Charles Hixson

Ben Goertzel wrote:



On Fri, Dec 19, 2008 at 9:10 PM, J. Andrew Rogers 
and...@ceruleansystems.com wrote:



On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote:

The ...



I suppose it would be more accurate to state that every process we
can detect is algorithmic within the scope of our ability to
measure it.  Like with belief in god(s) and similar, the point can
then be raised as to why we need to invent non-algorithmic
processes when ordinary algorithmic processes are sufficient to
explain everything we see. 



Because some folks find that they are not subjectively sufficient to 
explain everything they subjectively experience...
That would be more convincing if such people were to show evidence that 
they understand what algorithmic processes are and can do.  I'm almost 
tempted to class such verbalizations as meaningless noise, but that's 
probably too strong a reaction.


 


 Non-algorithmic processes very conveniently have properties
identical to the supernatural, and so I treat them similarly.  ...
Like the old man once said, entia non sunt multiplicanda praeter
necessitatem.

Cheers,

J. Andrew Rogers

Still, it's worth remembering that occasionally Occam's razor cuts too 
closely.  It tends to yield an excellent first guess, as long as you 
remember that it *IS* a guess.





--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Charles Hixson

Hector Zenil wrote:

On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:


On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

But I don't get your point at all, because the whole idea of
...


...


Oh I see! I think that's a matter of philosophical taste as well. I don't think
everybody would agree with you. Especially if you poll physicists like
those that constructed the standard model of computation! We cannot
ask Feynman, but I actually asked Deutsch. He not only thinks QM
is our most basic physical reality (he thinks math and computer
science lie in quantum mechanics), but he even takes quite seriously
his theory of parallel universes! and he is not alone. Speaking by...
when I do not agree with them (since AIT does not require
non-deterministic randomness) I think it is not that trivial, since
even researchers think they contribute in some fundamental (not only
philosophical) way.

  

-- Ben G


Still, one must remember that there is Quantum Theory, and then there 
are the interpretations of Quantum Theory.  As I understand things, there 
are still several models of the universe which yield the same 
observables, and choosing between them is a matter of taste.  They are 
all totally consistent with standard Quantum Theory...but...well, which 
do you prefer?  Multi-world?  Action at a distance?  No objective 
universe? (I'm not sure what that means.)  The present is created by the 
future as well as the past?  As I understand things, these cannot be 
distinguished on the basis of Quantum Theory.  And somewhere in that 
mix is Wholeness and the Implicate Order.


When math gets translated into Language, interpretations add things.





Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Charles Hixson

A response to:

I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,

My theory is that thoughts are generated internally and forced into words via a 
babble generator.  The resulting phrases are then filtered through a screen to remove 
any that don't match one's intent, that don't make sense, etc.  The value 
assigned to each expression is initially dependent on how well it expresses 
one's emotional tenor.

Therefore I would guess that all of the verbalizations that the individual 
generated which passed the first screen were hostile in nature.  From the 
remaining sample he filtered out those which didn't generate sensible-to-him 
scenarios when fed back into his world model.  This left him with a much 
reduced selection of phrases to choose from when composing his response.

In my model this happens a phrase at a time rather than a sentence at a time.  
There is also a probabilistic element, where each word has a certain 
probability of being followed by diverse other words.  I often don't want to 
choose the most likely continuation, as by choosing a less frequently chosen 
alternative I (believe I) create the impression of a more studied, i.e. 
thoughtful, response.  But if one wishes to convey a more dynamic style then 
one would choose a more likely follower.

Note that in this scenario phrases are generated both randomly and in parallel.  
Then they are selected for fitness for expression by passing through various 
filters.

Reasonable?
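
To make that concrete, here is a minimal sketch of the generate-and-filter idea in 
Python.  The transition table, the intent screen, and the world-model check are 
invented stand-ins; this is only meant to illustrate the shape of the model, not to 
claim anything about how the brain actually does it.

import random

# Toy transition table: each word maps to possible next words with weights.
# The words and probabilities are invented purely for illustration.
TRANSITIONS = {
    "you": [("are", 0.6), ("should", 0.4)],
    "are": [("wrong", 0.5), ("confused", 0.3), ("right", 0.2)],
    "should": [("reconsider", 0.7), ("apologize", 0.3)],
}

def babble(start="you", length=3, favor_likely=True):
    """Generate one candidate phrase a word at a time.  favor_likely=False
    prefers rarer continuations, which (per the model above) can read as a
    more 'studied' response."""
    phrase, word = [start], start
    for _ in range(length - 1):
        options = TRANSITIONS.get(word)
        if not options:
            break
        words, weights = zip(*options)
        if not favor_likely:
            weights = [1.0 / w for w in weights]  # invert to prefer the rare
        word = random.choices(words, weights=weights)[0]
        phrase.append(word)
    return " ".join(phrase)

def intent_screen(phrase, banned=("apologize",)):
    """First screen: drop phrases that don't match the speaker's intent."""
    return not any(word in phrase for word in banned)

def world_model_screen(phrase):
    """Second screen: a stand-in for 'does this produce a sensible scenario
    when fed back into the world model?'  Here it just wants > 1 word."""
    return len(phrase.split()) > 1

# Generate candidates in parallel (here, simply a batch), then filter.
candidates = [babble(favor_likely=False) for _ in range(10)]
survivors = [p for p in candidates if intent_screen(p) and world_model_screen(p)]
print(survivors)

Everything interesting in the real model would, of course, live inside the two screens.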


Jim Bromer wrote:

Hi.  I will just make a quick response to this message and then I want
to think about the other messages before I reply.

A few weeks ago I decided that I would write a criticism of
ai-probability to post to this group.  I wasn't able to remember all of
my criticisms so I decided to post a few preliminary sketches to
another group.  I wasn't too concerned about how they responded, and
in fact I thought they would just ignore me.  The first response I got
was from an irate guy who was quite unpleasant and then finished by
declaring that I slandered the entire ai-probability community!  He
had some reasonable criticisms about this but I considered the issue
tangential to the central issue I wanted to discuss. I would have
responded to his more reasonable criticisms if they hadn't been
embedded in his enraged rant.  I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,
so I wanted to try the same message on this group to see if anyone who
was more mature would focus on this same issue.

Abram made a measured response but his focus was on the
over-generalization.  As I said, this was just a preliminary sketch of
a message that I intended to post to this group after I had worked on
it.

Your point is taken.  Norvig seems to say that overfitting is a
general problem.  The  method given to study the problem is
probabilistic but it is based on the premise that the original data is
substantially intact.  But Norvig goes on to mention that with pruning
noise can be tolerated. If you read my message again you may see that
my central issue was not really centered on the issue of whether
anyone in the ai-probability community was aware of the nature of the
science of statistics but whether or not probability can be used as
the fundamental basis to create agi given the complexities of the
problem.  So while your example of overfitting certainly does deflate
my statements that no one in the ai-probability community gets this
stuff, it does not actually address the central issue that I was
thinking of.

I am not sure if Norvig's application of a probabilistic method to
detect overfitting is truly directed toward the agi community.  In
other words: Has anyone in this group tested the utility and clarity
of the decision making of a fully automated system to detect
overfitting in a range of complex IO data fields that one might expect
to encounter in AGI?

Jim Bromer



On Sat, Nov 29, 2008 at 11:32 AM, Abram Demski [EMAIL PROTECTED] wrote:
  

Jim,

There is a large body of literature on avoiding overfitting, ie,
finding patterns that work for more than just the data at hand. Of
course, the ultimate conclusion is that you can never be 100% sure;
but some interesting safeguards have been cooked up anyway, which help
in practice.

My point is, the following paragraph is unfounded:



This is a problem any AI method has to deal with, it is not just a
probability thing.  What is wrong with the AI-probability group
mind-set is that very few of its proponents ever consider the problem
of statistical ambiguity and its obvious consequences.
  

The AI-probability group definitely considers such problems.

--Abram

On Sat, Nov 29, 2008 at 10:48 AM, Jim Bromer [EMAIL PROTECTED] wrote:


One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
a casual familiarity is combined with a 

Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-29 Thread Charles Hixson
A general approach to this that frequently works is to examine the 
definitions that you are using for ambiguity, and then to look for 
operational tests.  If the only clear meanings lack operational tests, 
then it's probably a waste of computing resources to work on the problem 
until those definitional problems have been cleared up.  If the level of ambiguity is 
too high (judgment call) then the first order of business is to ensure 
that you are talking about the same thing.  If you can't do that, then 
it's probably a waste of time to compute intensively about it.


Note that this works, because different people draw their boundaries in 
different places, so different people spend time on different 
questions.  It results in an approximately reasonable allocation of 
effort, which changes as knowledge accumulates.  If everyone drew the 
bounds in the same place, then it would be a lamentably narrow area 
being explored intensively, with lots of double coverage.  (There's 
already lots of double coverage.  Patents for the telephone, I believe 
it was, were filed by two people within the same week.  Or look at the 
history of the airplane.  But there's a lot LESS double coverage than if 
everyone drew the boundary in the same place.)


As for "What is consciousness?"... DEFINE YOUR TERMS.  If you define how 
you recognize consciousness, then I can have a chance of answering your 
question; otherwise you can reject any answer I give with "But that's 
not what I meant!"


Ditto for time.  Or I could slip levels and tell you that it's a word 
with four letters (etc.).


Also, many people are working intensively on the nature of time.  They 
know in detail what they mean (not that they all necessarily mean the 
same thing).  To say that they are wasting their time because questions 
about the nature of time are silly is, itself, silly.  Your question 
about the nature of time may be silly, but that's because you don't have 
a good definition with operational tests.  That says nothing about what 
the exact same words may mean when someone else says them. (E.g. [off 
the top of my head], "time is a locally monotonically increasing measure 
of state changes within the local environment" is a plausible definition 
of time.  It has some redeeming features.  It, however, doesn't admit of 
a test of why it exists.  That would need to be posed within the context 
of a larger theory which implied operational tests.)


There are linguistic tricks.  E.g., when "It's raining," who or what is 
raining?  But generally they are relatively trivial...unless you accept 
language as being an accurate model of the universe.  Or consider "Who 
is the master who makes the grass green?"  That's not a meaningless 
question in the proper context.  It's an elementary problem for the 
student. 


(don't peek)


(do you know the answer?)


It's intended to cause the student to realize that properties like "green" 
are not inherent in things; they are caused by sensations interpreted by the 
human brain.  But other reasonable answers might be "the gardener, who 
waters and fertilizes it" or perhaps "a particular molecule that 
resonates in such a manner that the primary light that re-radiates from 
grass is in that part of the spectrum that we have labeled green."  And 
I'm certain that there are other valid answers.  (I have a non-standard 
answer to "the sound of one hand clapping," as I can, indeed, clap with 
one hand...fingers against the palm.  I think it takes large hands.)


If one writes off as senseless questions that don't make sense to one, 
well...what is the square root of -1?  The very name "imaginary" tells 
you how unreasonable most mathematicians thought that question was.  But it 
turned out to be rather valuable.  And it worked because someone made a 
series of operational tests and showed that it would work.  Up until 
then the very definition of square root prohibited using negative 
numbers.  So they agreed to change the definition.


I don't think that you can rule out any question as nonsensical provided 
that there are operational tests and unambiguous definitions.  And if 
there aren't, then you can make some.  It may not answer the question 
that you couldn't define...but if you can't sensibly ask the question, 
then it isn't much of a question (no matter HOW important it feels).



Tudor Boloni wrote:
I agree that there are many better questions to elucidate the 
tricks/pitfalls of language.  But let's list the biggest time wasters 
first, and the post showed some real time wasters from various fields 
that I found valuable to be aware of.


It implies it is pointless to ask what the essence of time is, but
then proceeds to give an explanation of time that is not
pointless, and may shed light on its meaning, which is perhaps as
much of an essence as time has..

I think the post tries to show that the error is in treating time 
like an object of reality with an essence, which is nonsensical and a waste 
of time ;) it seems wonderful to have an AGI 

Re: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-29 Thread Charles Hixson

Well.
The speed of light limitation seems rather secure. So I would propose 
that any visitors would have been roboticized probes, rather than 
naturally evolved creatures. And the energetic constraints make it seem 
likely that they would have been extremely small and infrequent...though I suppose 
that they could build larger probes locally.


My guess is that UFOs are just that. Unidentified. I suspect that many 
of them aren't even objects in any normal sense of the word. Temporary 
plasmas, etc. And others are more or less orthodox flying vehicles seen 
under unusual conditions. (I remember once being convinced that I'd seen 
one, but extended observation revealed that it was an advertising blimp 
seen with the sun behind it, and it was partially transparent. Quite 
impressive, and not at all blimp-like. It even seemed to be moving 
rapidly, but that was due to the sunlight passing through an interior 
membrane that was changing in size and shape.)


It would require rather impressive evidence before I would believe in 
actual visitations by naturally evolved entities. (Though the concept of 
MacroLife does provide one reasonable scenario.) Still... I would 
consider it more plausible to assert that we lived in a virtual world 
scenario, and were being monitored within it.


In any case, I see no operational tests, and thus I don't see any cause 
for using those possibilities to alter our activities.



Ed Porter wrote:


Since there have been multiple discussions of aliens lately on this 
list, I think I should communicate a thought that I have had 
concerning them that I have not heard any one else say --- although I 
would be very surprised if others have not thought it --- and it does 
relate to AGI --- so it is “on list.”




As we learn just how common exoplanets are, the possibility that 
aliens have visited earth seems increasingly scientifically 
believable, even for a relatively rationalist person like myself. 
There have, in fact, been many reportings of UFOs from sources that 
are hard to reject out of hand. An astronaut that NASA respected 
enough to send to the moon, has publicly stated he has attended 
government briefings in which he was told there is substantial 
evidence aliens have repeatedly visited earth. Within the last year 
Drudge had a report from a Chicago TV station that said sources at the 
tower of O'Hare airport claimed multiple airline pilots reported to 
them seeing a large flying-saucer-shaped object hovering over one of 
the buildings of the airport and then disappearing.


Now, I am not saying these reports are necessarily true, but I am 
saying that --- (a) given how rapidly life evolved on earth, as soon 
as it cooled enough that there were large pools of water; (b) there 
are probably at least a million habitable planets in the Milky Way (a 
conservative estimate); and (c) if one assumes one in 1000 such 
planets will have life evolve to AGI super-intelligence --- the 
chances there are planets with AGI super-intelligence within several 
thousand light years of earth are very good. And since, at least, 
mechanical AGIs with super intelligence and the resulting levels of 
technology should be able to travel through space at one tenth to one 
thousandth the speed of light for many tens of thousands of years, it 
is not at all unlikely life and/or machine forms from such planets 
have had time to reach us --- and perhaps --- not only to reach us --- 
but also to report back to their home planet and recruit many more of 
their kind to visit us.


This becomes even more likely if one considers that some predict the 
Milky Way actually had its peak number of habitable planets billions 
of years ago, meaning that on many planets evolution of intelligent 
life is millions, or billions, of years ahead of ours, and thus that 
life/machine forms on many of the planets capable of supporting 
intelligent life are millions of years beyond their singularities. 
This would mean their development of extremely powerful 
super-intelligence and the attendant developments in technologies we 
know of --- such as nanofabrication, controlled fusion reactions, and 
quantum computing and engineering --- and technologies we do not yet 
even know of --- would be way beyond our imagining.


All of the above is nothing new, among those who are open minded about 
(a) the evidence about the commonness of exoplanets; (b) the fact that 
there are enough accounts of UFO's from reputable sources that such 
accounts cannot be dismissed out of hand as false, and (c) what the 
singularity and the development of super-intelligence would mean to a 
civilization.




But what I am suggesting, which I have never heard before, is that it is 
possible the aliens, if they actually have been visiting us repeatedly, 
are watching us to see when mankind achieves super-intelligence, 
because only then do we presumably have a chance of becoming their equal.


Perhaps this means that only then can we understand them. Or 

Re: [agi] Re: JAGI submission

2008-11-29 Thread Charles Hixson

Matt Mahoney wrote:

--- On Tue, 11/25/08, Eliezer Yudkowsky [EMAIL PROTECTED] wrote:

  

Shane Legg, I don't mean to be harsh, but your attempt to link
Kolmogorov complexity to intelligence is causing brain damage among
impressionable youths.

( Link debunked here:
  http://www.overcomingbias.com/2008/11/complexity-and.html
)



Perhaps this is the wrong argument to support my intuition that knowing more 
makes you smarter, as in greater expected utility over a given time period. How 
do we explain that humans are smarter than calculators, and calculators are 
smarter than rocks?

...

-- Matt Mahoney, [EMAIL PROTECTED]
  
Each particular instantiation of computing has a certain maximal 
intelligence that it can express (noting that intelligence is 
ill-defined).  More capacious stores can store more information.  Faster 
processors can process information more quickly.


However, information is not, in and of itself, intelligence.  
Information is the database on which intelligence operates.  Information 
isn't a measure of intelligence, and intelligence isn't a measure of 
information.  We have decent definitions of information.  We lack 
anything corresponding for intelligence.  It's certainly not complexity, 
though intelligence appears to require a certain amount of complexity.  
And it's not a relationship between information and complexity.


I still suspect that intelligence will turn out to relate to what we think 
of as intelligence rather as a symptom relates to a syndrome.  (N.B., not as 
a symptom is to a disease!)  That INTELLIGENCE will turn out to be 
composed of many, many little tricks that enable one to solve a 
certain class of problems quickly...or even at all.  But that the tricks 
will have no necessary relationship to each other.  One will be 
something like alpha-beta pruning and another will be hill-climbing and 
another quick-sort, and another...and another will be a heuristic for 
classifying a problem as to what tools might help solve it...and another...

As such, I don't think that any AGI can exist.  Something more general 
than people, and certainly something that thinks more quickly than 
people and something that knows more than any person can...but not a 
truly general AI.


E.g., where would you put a map colorer for 4-color maps?  Certainly an 
AGI should be able to do it, but would you really expect it to do it 
more readily (compared to the speed of its other processes) than people 
can?  If it could, would that really bump your estimate of its 
intelligence that much?  And yet there are probably an indefinitely 
large number of such problems.  And from what we currently know, it's 
quite likely that each one would either need n^k or better steps to 
solve, or a specialized algorithm.  Or both.
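
To make that picture concrete, here is a minimal sketch in Python of the sort of 
architecture I have in mind: a registry of narrow tricks plus a classifying 
heuristic that dispatches to them.  The categories, the solvers, and the dispatch 
rule are all invented for illustration; nothing here is a proposal for how a real 
system would be organized.

# Registry of narrow "tricks", each good for one class of problems.
SOLVERS = {}

def solver(category):
    """Register a narrow solver under a problem category."""
    def register(fn):
        SOLVERS[category] = fn
        return fn
    return register

@solver("sorting")
def quick_sort(items):
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    return (quick_sort([x for x in rest if x < pivot]) + [pivot] +
            quick_sort([x for x in rest if x >= pivot]))

@solver("optimization")
def hill_climb(score, start, neighbors, steps=100):
    best = start
    for _ in range(steps):
        candidate = max(neighbors(best), key=score, default=best)
        if score(candidate) <= score(best):
            break
        best = candidate
    return best

def classify(problem):
    """The 'heuristic for classifying a problem as to what tools might help':
    here it is just a lookup on a tag the caller supplies."""
    return problem.get("kind")

def dispatch(problem):
    fn = SOLVERS.get(classify(problem))
    if fn is None:
        raise ValueError("no trick known for this class of problem")
    return fn(*problem["args"])

print(dispatch({"kind": "sorting", "args": ([3, 1, 2],)}))
print(dispatch({"kind": "optimization",
                "args": (lambda x: -(x - 5) ** 2, 0, lambda x: [x - 1, x + 1])}))

Of course, replacing classify() with something that genuinely figures out which 
trick applies is where nearly all of the hard work would be.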






Re: [agi] Cog Sci Experiment

2008-11-22 Thread Charles Hixson

Acilio Mendes wrote:

My question is: how do they know your vegetable association?


...

Try this experiment: repeat the same procedure of the video, but
instead of asking for a vegetable, ask for an 'an animal that lives in
the jungle'. Most people will answer 'Lion' even though lions don't
live in the jungle.

...
Now, why people on average choose those answers is a whole other story

I veldt the reason was certainly because "The lion is king of the jungle."





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Charles Hixson

Robert Swaine wrote:

Consciousness is akin to the phlogiston theory in chemistry.  It is likely a 
shadow concept, similar to how the bodily reactions make us feel that the heart 
is the seat of emotions.  Gladly, cardiologists and heart surgeons do not look 
for a spirit, a soul, or kindness in the heart muscle.  The brain organ need 
not contain anything beyond the means to effect physical behavior... and 
feedback as to that behavior.
  
This isn't clear.  Certainly some definitions of consciousness fit this 
analysis, but the term is generally so loosely defined that, unlike 
phlogiston, it probably can't be disproven. 

OTOH, it seems to me quite likely that there are, or at least can be, 
definitions of consciousness which fit within the common definition of 
consciousness and are also reasonably accurate.  And testable.  (I 
haven't reviewed Richard Loosemore's recent paper.  Perhaps it is one of 
these.)

A finite degree of sensory awareness serves as a suitable replacement for 
consciousness; in other words, just feedback.
  
To an extent I agree with you.  I have in the past argued that a 
thermostat is minimally conscious.  But please note the *minimally*.  
Feedback cannot, by itself, progress beyond that minimal state.  Just 
what else is required is very interesting.  (The people who refuse to 
call thermostats minimally conscious merely have stricter minimal 
requirements for consciousness.  We don't disagree about how a 
thermostat behaves.)
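
For what it's worth, here is roughly what I mean by "just feedback", as a few 
lines of Python: a bang-bang thermostat loop.  The setpoint, the hysteresis, and 
the toy room model are invented for illustration.

def thermostat_step(temperature, setpoint=20.0, hysteresis=0.5, heater_on=False):
    """One feedback step: sense the temperature, compare it to the setpoint,
    switch the heater.  That comparison-and-switch is the entire repertoire."""
    if temperature < setpoint - hysteresis:
        heater_on = True
    elif temperature > setpoint + hysteresis:
        heater_on = False
    return heater_on

def simulate(hours=24, outside=10.0):
    temperature, heater_on = 15.0, False
    for _ in range(hours):
        heater_on = thermostat_step(temperature, heater_on=heater_on)
        # Toy room model: heat gain when the heater runs, loss to the outside.
        temperature += (2.0 if heater_on else 0.0) - 0.1 * (temperature - outside)
        print(f"temp={temperature:5.2f}  heater={'on' if heater_on else 'off'}")

simulate()

Nothing in that loop can grow beyond the loop; whatever else consciousness 
requires has to come from somewhere other than the feedback itself.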

Would it really make a difference if we were all biological machines, and our perceptions 
were the same as other animals, or other designed minds; more so if we were 
in a simulated existence.  The search for consciousness is a misleading (though not 
entirely fruitless) path to AGI.

  
??? We *are* biological machines.  So what?  And our perceptions are 
basically the same as those of other animals.  This doesn't make sense 
as an argument, unless you are presuming that other animals aren't 
conscious, which flies in the face of most recent research on the 
subject.  (I'm not sure that they've demonstrated consciousness in 
bacteria, but they have demonstrated that they are trainable.  Whether 
they are conscious, then, is probably an artifact of your definition.)



--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  

From: Richard Loosemore [EMAIL PROTECTED]
Subject: [agi] A paper that actually does solve the problem of consciousness
To: agi@v2.listbox.com
Date: Friday, November 14, 2008, 12:27 PM
I completed the first draft of a technical paper on consciousness the 
other day.  It is intended for the AGI-09 conference, and it can be 
found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

The title is "Consciousness in Human and Machine: A Theory and Some 
Falsifiable Predictions," and it does solve the problem, believe it or not.

But I have no illusions:  it will be misunderstood, at the very least. 
I expect there will be plenty of people who argue that it does not solve 
the problem, but I don't really care, because I think history will 
eventually show that this is indeed the right answer.  It gives a 
satisfying answer to all the outstanding questions and it feels right.

Oh, and it does make some testable predictions.  Alas, we do not yet 
have the technology to perform the tests, but the predictions are on 
the table, anyhow.

In a longer version I would go into a lot more detail, introducing the 
background material at more length, analyzing the other proposals that 
have been made and fleshing out the technical aspects along several 
dimensions.  But the size limit for the conference was 6 pages, so that 
was all I could cram in.






Richard Loosemore










Re: [agi] constructivist issues

2008-11-03 Thread Charles Hixson
 31, 2008 at 2:26 AM, Charles Hixson
[EMAIL PROTECTED] wrote:
  

It all depends on what definition of number you are using.  If it's
constructive, then it must be a finite set of numbers.  If it's based on
full Number Theory, then it's either incomplete or inconsistent.  If it's
based on any of several subsets of Number Theory that don't allow
incompleteness to be proven (or even described) then the numbers are
precisely that which is included in that subset of the theory.

Number Theory is the one with the largest (i.e., an infinite number) of
unprovable theorems about numbers among the variations that I have been
considering.   My point in the just prior post is that numbers are precisely
that item which the theory you are using to describe them says they are,
since they are artifacts created for computational convenience, as opposed
to direct sensory experiences of the universe.

As such, it doesn't make sense to say that a subset of number theory leaves
more facts about numbers undefined.  In the subsets those aren't facts about
numbers.

Abram Demski wrote:


Charles,

OK, but if you argue in that manner, then your original point is a
little strange, isn't it? Why worry about Godelian incompleteness if
you think incompleteness is just fine?

Therefore, I would assert that it isn't that it leaves *even more*
about numbers left undefined, but that those characteristics aren't
in such a case properties of numbers.  Merely of the simplifications
and abstractions made to ease computation.

In this language, what I'm saying is that it is important to examine
the simplifications and abstractions, and discover how they work, so
that we can ease computation in our implementations.

--Abram

On Thu, Oct 30, 2008 at 7:58 PM, Charles Hixson
[EMAIL PROTECTED] wrote:

  

If you were talking about something actual, then you would have a valid
point.  Numbers, though, only exist in so far as they exist in the theory
that you are using to define them.  E.g., if I were to claim that no
number
larger than the power-set of energy states within the universe were
valid,
it would not be disprovable.  That would immediately mean that only
finite
numbers were valid.

P.S.:  Just because you have a rule that could generate a particular
number
given a larger than possible number of steps doesn't mean that it is a
valid
number, as you can't actually ever generate it.  I suspect that infinity
is
primarily a computational convenience.  But one shouldn't mistake the
fact
that it's very convenient for meaning that it's true.  Or, given Occam's
Razor, should one?  But Occam's Razor only detects provisional truths,
not
actual ones.

If you're going to be constructive, then you must restrict yourself to
finitely many steps, each composed of finitely complex reasoning.  And
this
means that you must give up both infinite numbers and irrational numbers.
 To do otherwise means assuming that you can make infinitely precise
measurements (which would, at any rate, allow irrational numbers back
in).

Therefore, I would assert that it isn't that it leaves *even more* about
numbers left undefined, but that those characteristics aren't in such a
case properties of numbers.  Merely of the simplifications and
abstractions
made to ease computation.








Re: [agi] constructivist issues

2008-10-31 Thread Charles Hixson
It all depends on what definition of number you are using.  If it's 
constructive, then it must be a finite set of numbers.  If it's based on 
full Number Theory, then it's either incomplete or inconsistent.  If 
it's based on any of several subsets of Number Theory that don't allow 
incompleteness to be proven (or even described) then the numbers are 
precisely that which is included in that subset of the theory.


Number Theory is the one with the largest (i.e., an infinite number) of 
unprovable theorems about numbers among the variations that I have been 
considering.   My point in the just prior post is that numbers are 
precisely that item which the theory you are using to describe them says 
they are, since they are artifacts created for computational 
convenience, as opposed to direct sensory experiences of the universe.


As such, it doesn't make sense to say that a subset of number theory 
leaves more facts about numbers undefined.  In the subsets those aren't 
facts about numbers.


Abram Demski wrote:

Charles,

OK, but if you argue in that manner, then your original point is a
little strange, isn't it? Why worry about Godelian incompleteness if
you think incompleteness is just fine?

Therefore, I would assert that it isn't that it leaves *even more*
about numbers left undefined, but that those characteristics aren't
in such a case properties of numbers.  Merely of the simplifications
and abstractions made to ease computation.

In this language, what I'm saying is that it is important to examine
the simplifications and abstractions, and discover how they work, so
that we can ease computation in our implementations.

--Abram

On Thu, Oct 30, 2008 at 7:58 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
  

If you were talking about something actual, then you would have a valid
point.  Numbers, though, only exist in so far as they exist in the theory
that you are using to define them.  E.g., if I were to claim that no number
larger than the power-set of energy states within the universe were valid,
it would not be disprovable.  That would immediately mean that only finite
numbers were valid.

P.S.:  Just because you have a rule that could generate a particular number
given a larger than possible number of steps doesn't mean that it is a valid
number, as you can't actually ever generate it.  I suspect that infinity is
primarily a computational convenience.  But one shouldn't mistake the fact
that it's very convenient for meaning that it's true.  Or, given Occam's
Razor, should one?  But Occam's Razor only detects provisional truths, not
actual ones.

If you're going to be constructive, then you must restrict yourself to
finitely many steps, each composed of finitely complex reasoning.  And this
means that you must give up both infinite numbers and irrational numbers.
 To do otherwise means assuming that you can make infinitely precise
measurements (which would, at any rate, allow irrational numbers back in).

Therefore, I would assert that it isn't that it leaves *even more* about
numbers left undefined, but that those characteristics aren't in such a
case properties of numbers.  Merely of the simplifications and abstractions
made to ease computation.







Re: [agi] constructivist issues

2008-10-30 Thread Charles Hixson
If you were talking about something actual, then you would have a valid 
point.  Numbers, though, only exist insofar as they exist in the 
theory that you are using to define them.  E.g., if I were to claim that 
no number larger than the size of the power-set of energy states within 
the universe were valid, it would not be disprovable.  That would 
immediately mean that only finite numbers were valid.


P.S.:  Just because you have a rule that could generate a particular 
number given a larger than possible number of steps doesn't mean that it 
is a valid number, as you can't actually ever generate it.  I suspect 
that infinity is primarily a computational convenience.  But one 
shouldn't mistake the fact that it's very convenient for meaning that 
it's true.  Or, given Occam's Razor, should one?  But Occam's Razor only 
detects provisional truths, not actual ones.
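
As a small illustration of the distinction, here is the constructive recipe for 
the naturals sketched in Python: a rule that reaches any particular number in 
finitely many steps, but never actually produces the completed infinite set.

from itertools import islice

def naturals():
    """A rule, not a completed set: each call to next() performs one more
    finite step.  The 'set of all naturals' is never constructed."""
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]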


If you're going to be constructive, then you must restrict yourself to 
finitely many steps, each composed of finitely complex reasoning.  And 
this means that you must give up both infinite numbers and irrational 
numbers.  To do otherwise means assuming that you can make infinitely 
precise measurements (which would, at any rate, allow irrational numbers 
back in).


Therefore, I would assert that it isn't that it leaves *even more* 
about numbers left undefined, but that those characteristics aren't, in 
such a case, properties of numbers; merely of the simplifications and 
abstractions made to ease computation.


Abram Demski wrote:

Charles,

Interesting point -- but, all of these theories would be weaker than
the standard axioms, and so there would be *even more* about numbers
left undefined in them.

--Abram

On Tue, Oct 28, 2008 at 10:46 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
  

Excuse me, but I thought there were subsets of Number theory which were
strong enough to contain all the integers, and perhaps all the rationals, but
which weren't strong enough to prove Gödel's incompleteness theorem in.  I
seem to remember, though, that you can't get more than a finite number of
irrationals in such a theory.  And I think that there are limitations on
what operators can be defined.

Still, depending on what you mean by Number, that would seem to mean that
Number was well-defined.  Just not in Number Theory, but that's because
Number Theory itself wasn't well-defined.







Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread Charles Hixson
If not verify, what about falsify?  To me Occam's Razor has always been 
seen as a tool for selecting the first hypothesis to attempt to falsify.  
If you can't, or haven't, falsified it, then it's usually the best 
assumption to go on (presuming that the costs of failing are evenly 
distributed).


OTOH, Occam's Razor clearly isn't quantitative, and it doesn't always 
pick the right answer, just one that's good enough based on what we 
know at the moment.  (Again presuming evenly distributed costs of failure.)


(And actually that's an oversimplification.  I've been considering the 
costs of applying the presumption of the theory chosen by Occam's Razor 
to be equal to or lower than the costs of the alternatives.  Whoops!  
The simplest workable approach isn't always the cheapest, and given that 
all non-falsified-as-of-now approaches have roughly equal 
plausibility...perhaps one should instead choose the cheapest to presume 
of all theories that have been vetted against current knowledge.)


Occam's Razor is fine for its original purposes, but when you try to 
apply it to practical rather than logical problems then you start 
needing to evaluate relative costs.  Both costs of presuming and costs 
of failure.  And actually it often turns out that a solution based on a 
theory known to be incorrect (e.g. Newton's laws) is good enough, so 
you don't need to decide about the correct answer.  NASA uses Newton, 
not Einstein, even though Einstein might be correct and Newton is known 
to be wrong.
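
To make the cost-weighted version concrete, here is a minimal sketch in Python.  
The hypotheses and all of the numbers are made up purely for illustration; the 
only point is that once costs are uneven, the simplest surviving theory is not 
automatically the one to act on.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    falsified: bool          # ruled out by current knowledge?
    complexity: float        # what the classic razor ranks on
    cost_to_presume: float   # cost of acting as if the theory were true
    cost_of_failure: float   # cost paid if the presumption turns out wrong
    prob_failure: float      # rough, provisional estimate

def classic_razor(hypotheses):
    """Plain Occam: the simplest non-falsified theory."""
    live = [h for h in hypotheses if not h.falsified]
    return min(live, key=lambda h: h.complexity)

def cost_aware_choice(hypotheses):
    """Cheapest to presume among theories vetted against current knowledge,
    counting the expected cost of being wrong."""
    live = [h for h in hypotheses if not h.falsified]
    return min(live, key=lambda h: h.cost_to_presume
               + h.prob_failure * h.cost_of_failure)

theories = [
    Hypothesis("simple-but-costly", False, complexity=1.0,
               cost_to_presume=10.0, cost_of_failure=100.0, prob_failure=0.05),
    Hypothesis("messier-but-cheap", False, complexity=3.0,
               cost_to_presume=1.0, cost_of_failure=100.0, prob_failure=0.05),
]
print(classic_razor(theories).name)      # picks the simplest survivor
print(cost_aware_choice(theories).name)  # picks the cheapest to presume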


Pei Wang wrote:

Ben,

It seems that you agree the issue I pointed out really exists, but
just take it as a necessary evil. Furthermore, you think I also
assumed the same thing, though I failed to see it. I won't argue
against the necessary evil part --- as far as you agree that those
postulates (such as the universe is computable) are not
convincingly justified. I won't try to disprove them.

As for the latter part, I don't think you can convince me that you
know me better than I know myself. ;-)

The following is from
http://nars.wang.googlepages.com/wang.semantics.pdf , page 28:

If the answers provided by NARS are fallible, in what sense these answers are
better than arbitrary guesses? This leads us to the concept of rationality.
When infallible predictions cannot be obtained (due to insufficient knowledge
and resources), answers based on past experience are better than arbitrary
guesses, if the environment is relatively stable. To say an answer is only a
summary of past experience (thus no future confirmation guaranteed) does
not make it equal to an arbitrary conclusion — it is what adaptation means.
Adaptation is the process in which a system changes its behaviors as if the
future is similar to the past. It is a rational process, even though individual
conclusions it produces are often wrong. For this reason, valid inference rules
(deduction, induction, abduction, and so on) are the ones whose conclusions
correctly (according to the semantics) summarize the evidence in the premises.
They are truth-preserving in this sense, not in the model-theoretic sense that
they always generate conclusions which are immune from future revision.

--- so you see, I don't assume adaptation will always be successful,
even successful to a certain probability. You can dislike this
conclusion, though you cannot say it is the same as what is assumed by
Novamente and AIXI.

Pei

On Tue, Oct 28, 2008 at 2:12 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

On Tue, Oct 28, 2008 at 10:00 AM, Pei Wang [EMAIL PROTECTED] wrote:


Ben,

Thanks. So the other people now see that I'm not attacking a straw man.

My solution to Hume's problem, as embedded in the experience-grounded
semantics, is to assume no predictability, but to justify induction as
adaptation. However, it is a separate topic which I've explained in my
other publications.
  

Right, but justifying induction as adaptation only works if the environment
is assumed to have certain regularities which can be adapted to.  In a
random environment, adaptation won't work.  So, still, to justify induction
as adaptation you have to make *some* assumptions about the world.

The Occam prior gives one such assumption: that (to give just one form) sets
of observations in the world tend to be producible by short computer
programs.

For adaptation to successfully carry out induction, *some* vaguely
comparable property to this must hold, and I'm not sure if you have
articulated which one you assume, or if you leave this open.

In effect, you implicitly assume something like an Occam prior, because
you're saying that  a system with finite resources can successfully adapt to
the world ... which means that sets of observations in the world *must* be
approximately summarizable via subprograms that can be executed within this
system.

So I argue that, even though it's not your preferred way to think about it,
your own approach to AI theory and practice implicitly assumes some variant
of the 

Re: [agi] constructivist issues

2008-10-28 Thread Charles Hixson
Excuse me, but I thought there were subsets of Number theory which were 
strong enough to contain all the integers, and perhaps all the rationals, 
but which weren't strong enough to prove Gödel's incompleteness theorem 
in.  I seem to remember, though, that you can't get more than a finite 
number of irrationals in such a theory.  And I think that there are 
limitations on what operators can be defined.


Still, depending on what you mean by Number, that would seem to mean 
that Number was well-defined.  Just not in Number Theory, but that's 
because Number Theory itself wasn't well-defined.


Abram Demski wrote:

Mark,

That is thanks to Godel's incompleteness theorem. Any formal system
that describes numbers is doomed to be incomplete, meaning there will
be statements that can be constructed purely by reference to numbers
(no red cats!) that the system will fail to prove either true or
false.

So my question is, do you interpret this as meaning Numbers are not
well-defined and can never be (constructivist), or do you interpret
this as It is impossible to pack all true information about numbers
into an axiom system (classical)?

Hmm By the way, I might not be using the term constructivist in
a way that all constructivists would agree with. I think
intuitionist (a specific type of constructivist) would be a better
term for the view I'm referring to.

--Abram Demski

On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
  

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say "fully defined question," do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

How (or why) are numbers not fully defined in a constructionist sense?

(I was about to ask you whether or not you had answered your own question
until that caught my eye on the second or third read-through).









Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-26 Thread Charles Hixson

Matt Mahoney wrote:

--- On Sun, 10/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
  
So what's the connection according to you between 
viruses and illness/disease, heating water and boiling,

force applied to object and acceleration of object?



Observing illness causes me to believe a virus might be present. Observing 
boiling water causes me to believe it was heated. f = ma says nothing about 
cause and effect.

We say X causes Y if X and Y are correlated, and we can control X but not Y. 
This implies X precedes Y in time, because if Y happens first then controlling X would 
make them uncorrelated.

If the mind is computable, then control is just a belief. Therefore, cause and 
effect would also be just a belief. Physics is computable and does not require 
cause and effect to model it.

-- Matt Mahoney, [EMAIL PROTECTED]
  
Even that's overstating the case.  If a spigot is open at the bottom of 
a coffee urn, and there is liquid in it, and there is no lid on it, and 
the sun is shining, then liquid is observed to flow.  If there's no 
light then liquid is not observed to flow.  Did the light cause the 
flow?  No, but it permitted the observation.


And even that is overstating the case.  If the bottom part of the liquid 
were frozen, then liquid wouldn't flow, so my prior statement was 
incorrect.  And I can keep adding exceptional conditions that will 
prevent the flow of liquid, or will alter things so that the prevention 
is overcome.


Science is the abstracting away of exceptional conditions, and the examination 
of the relations between the remaining conditions.  It only works because 
(if?) we can presume that the relations between the normal conditions 
continue to persist with only a few alterations caused by the 
exceptional conditions, which enables us to determine THEIR effects.


In essence Science is model building...and testing the models for 
correspondence with observable externality.  And the very process of 
building those models rests on beliefs such as object permanence.  
(Which is what made radio-activity/alchemy so appalling/appealing.)






Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Charles Hixson

Dr. Matthias Heger wrote:

The goal of chess is well defined: Avoid being checkmate and try to
checkmate your opponent.

What checkmate means can be specified formally.

Humans mainly learn chess from playing chess. Obviously their knowledge
about other domains are not sufficient for most beginners to be a good chess
player at once. This can be proven empirically.

Thus an AGI would not learn chess completely differently from all that we know.
It would learn from experience which is one of  the most common kinds of
learning.

I am sure that everyone who learns chess by playing against chess computers
and is able to learn good chess playing (which is not sure as also not
everyone can learn to be a good mathematician) will be able to be a good
chess player against humans.

My first posting in this thread shows the very weak point in the
argumentation of those people who say that social and other experiences are
needed to play chess.

You suppose knowledge must be available from another domain to solve
problems of the domain of chess.
But everything of chess in on the chessboard itself. If you are not able to
solve chess problems from chess alone then you are not able to solve certain
solvable problems. And thus you cannot call your AI AGI.

If you give an AGI all facts which are sufficient to solve a problem then
your AGI must be able to solve the problem using nothing else than these
facts.

If you do not agree with this, then how should an AGI know which experiences
in which other domains are necessary to solve the problem? 


The magic you use is the overestimation of real-world experiences. It sounds
as if the ability to solve arbitrary problems in arbitrary domains depend
essentially on that your AGI plays in virtual gardens and speaks often with
other people. But this is completely nonsense. No one can play good chess by
those experiences. Thus such experiences are not sufficient. On the other
hand there are programs which definitely do not have such experiences and
outperform humans in chess. Thus those experiences are neither sufficient
nor necessary to play good chess and emphasizing on such experiences is
mystifying AGI, similar as it is done by the doubters of AGI who always
argue with Goedel or quantum physics which in fact has no relevance for
practical AGI at all.

- Matthias





Trent Waddington [mailto:[EMAIL PROTECTED] wrote

Gesendet: Donnerstag, 23. Oktober 2008 07:42
An: agi@v2.listbox.com
Betreff: Re: [agi] If your AGI can't learn to play chess it is no AGI

On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
  

I do not think that it is essential for the quality of my chess who had
taught me to play chess.
I could have learned the rules from a book alone.
Of course these rules are written in a language. But this is not important
for the quality of my chess.

If a system is in state x then it is not essential for the future how x
was generated.
Thus a programmer can hardcode the rules of chess in his AGI and then,
concerning chess the AGI would be in the same state as if someone teaches
the AGI the chess rules via language.

The social aspect of learning chess is of no relevance.



Sigh.

Ok, let's say I grant you the stipulation that you can hard code the
rules of chess some how.  My next question is, in a goal-based AGI
system, what goal are you going to set and how are you going to set
it?  You've ruled out language, so you're going to have to hard code
the goal too, so excuse my use of language:

Play good chess

Oh.. that sounds implementable.  Maybe you'll give it a copy of
GNUChess and let it go at it.. but I've known *humans* who learnt to
play chess that way and they get trounced by the first human they play
against.  How are you going to go about making an AGI that can learn
chess in a completely different way from all the known ways of learning
chess?

Or is the AGI supposed to figure that out?

I don't understand why so many of the people on this list seem to
think AGI = magic.

Trent

  
Well, a general game player for two-person games with win/lose 
characteristics would be an advance in the state of the art, but I don't 
think it would qualify as an AGI.  A Game Theory solver, perhaps...for 
the easy subset of that discipline.  (I'll agree that this easy subset 
contains many unsolved problems...but it's still the easy subset.)


When we think of AGI, we model it on people (to greater or lesser 
degree).  We may not expect that it will have an intuitive knowledge of 
multi-player game theory with variable payoffs, but we expect that it 
will be able to operate in such an environment.  (People aren't all that 
good at the general form, after all.  Mastery isn't expected.  
Recognition is.)  Most developers don't appear to be even approaching 
this from the direction of Game Theory, but I suppose that it's as valid 
a starting point as any; however, if the AI is limited to a small subset 
of Game Theory (e.g., chess), then it hardly qualifies 

Re: [agi] constructivist issues

2008-10-21 Thread Charles Hixson

Abram Demski wrote:

Ben,
...
One reasonable way of avoiding the humans are magic explanation of
this (or humans use quantum gravity computing, etc) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
needed to define our own intelligence. Therefore, we can't engineer
human-level AGI. I don't like this conclusion! I want a different way
out.

I'm not sure the guru explanation is enough... who was the Guru for Humankind?

Thanks,

--Abram

  
You may not like "Therefore, we cannot understand the math needed to 
define our own intelligence," but I'm rather convinced that it's 
correct.  OTOH, I don't think that it follows from this that humans 
can't build a better-than-human-level AGI.  (I didn't say "engineer," 
because I'm not certain what connotations you put on that term.)  This 
does, however, imply that people won't be able to understand the 
better-than-human-level AGI.  They may well, however, understand parts 
of it, probably large parts.  And they may well be able to predict with 
fair certitude how it would react in numerous situations.  Just not in 
numerous other situations.


Care, then, must be used in the design so that we can predict 
favorable motivations behind the actions in important-to-us areas.  
Even this is probably impossible in detail, but then it doesn't *need* 
to be understood in detail.  If you can predict that it will make better 
choices than we can, and that its motives are benevolent, and that it 
has a good understanding of our desires...that should suffice.  And I 
think we'll be able to do considerably better than that.






Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Charles Hixson
It doesn't need to satisfy everyone, it just has to be the definition 
that you are using in your argument, and which you agree to stick to.


E.g., if you define intelligence to be the resources used (given some 
metric) in solving some particular selection of problems, then that is a 
particular definition of intelligence.  It may not be a very good one, 
though, as it looks like a system that knows the answers ahead of time 
and responds quickly would win over one that understood the problems in 
depth.  Rather like a multiple-choice test as opposed to an essay.


I'm sure that one could fudge the definition to skirt that particular 
pothole, but it would be an ad hoc patch.  I don't trust that entire 
mechanism of defining intelligence.  Still, if I know what you mean, I 
don't have to accept your interpretations to understand your argument.  
(You can't average across all domains, only across some pre-specified 
set of domains.  Infinity doesn't exist in the implementable universe.)


Personally, I'm not convinced by the entire process of measuring 
intelligence.  I don't think that there *IS* any such thing.  If it 
were a disease, I'd call intelligence a syndrome rather than a 
diagnosis.  It's a collection of partially related capabilities given 
one name to make them easy to think about, while ignoring details.  As 
such it has many uses, but it's easy to mistake it for some genuine 
thing, especially as it's an intangible.


As an analogy consider "the gene for blue eyes."  There is no such 
gene.  There is a combination of genes that yields blue eyes, and it's 
characterized by the lack of genes for other eye colors.  (It's more 
complex than that, but that's enough.)


E.g., there appears to be a particular gene which is present in almost 
all people which enables them to parse grammatical sentences.  But there 
have been found a few people in one family where this gene is damaged.  
The result is that about half the members of that family can't speak or 
understand language.  Are they unintelligent?  Well, they can't parse 
grammatical sentences, and they can't learn language.  In most other 
ways they appear as intelligent as anyone else.


So I'm suspicious of ALL definitions of intelligence which treat it as 
some kind of global thing.  But if you give me the definition that you 
are using in an argument, then I can at least attempt to understand what 
you are saying.



Terren Suydam wrote:

Charles,

I'm not sure it's possible to nail down a measure of intelligence that's going 
to satisfy everyone. Presumably, it would be some measure of performance in 
problem solving across a wide variety of novel domains in complex (i.e. not 
toy) environments.

Obviously among potential agents, some will do better in domain D1 than others, 
while doing worse in D2. But we're looking for an average across all domains. 
My task-specific examples may have confused the issue there, you were right to 
point that out.

But if you give all agents identical processing power and storage space, then 
the winner will be the one that was able to assimilate and model each problem 
space the most efficiently, on average. Which ultimately means the one which 
used the *least* amount of overall computation.

Terren

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

  

From: Charles Hixson [EMAIL PROTECTED]
Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 2:12 PM
If you want to argue this way (reasonable), then you need a
specific 
definition of intelligence.  One that allows it to be
accurately 
measured (and not just in principle).  IQ
definitely won't serve.  
Neither will G.  Neither will GPA (if you're discussing

a student).

Because of this, while I think your argument is generally
reasonable, I 
don't think it's useful.  Most of what you are
discussing is task 
specific, and as such I'm not sure that
intelligence is a reasonable 
term to use.  An expert engineer might be, e.g., a lousy
bridge player.  
Yet both are thought of as requiring intelligence.  I would
assert that 
in both cases a lot of what's being measured is task
specific 
processing, i.e., narrow AI. 


(Of course, I also believe that an AGI is impossible in the
true sense 
of general, and that an approximate AGI will largely act
as a 
coordinator between a bunch of narrow AI pieces of varying
generality.  
This seems to be a distinctly minority view.)


Terren Suydam wrote:


Hi Will,

I think humans provide ample evidence that
  

intelligence is not necessarily correlated with processing
power. The genius engineer in my example solves a given
problem with *much less* overall processing than the
ordinary engineer, so in this case intelligence is
correlated with some measure of cognitive
efficiency (which I will leave undefined). Likewise, a
grandmaster chess player looks at a given position and can
calculate a better move in one second than you or me could
come up

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Charles Hixson
If you want to argue this way (reasonable), then you need a specific 
definition of intelligence.  One that allows it to be accurately 
measured (and not just in principle).  IQ definitely won't serve.  
Neither will G.  Neither will GPA (if you're discussing a student).


Because of this, while I think your argument is generally reasonable, I 
don't think it's useful.  Most of what you are discussing is task 
specific, and as such I'm not sure that intelligence is a reasonable 
term to use.  An expert engineer might be, e.g., a lousy bridge player.  
Yet both are thought of as requiring intelligence.  I would assert that 
in both cases a lot of what's being measured is task specific 
processing, i.e., narrow AI. 

(Of course, I also believe that an AGI is impossible in the true sense 
of general, and that an approximate AGI will largely act as a 
coordinator between a bunch of narrow AI pieces of varying generality.  
This seems to be a distinctly minority view.)


Terren Suydam wrote:

Hi Will,

I think humans provide ample evidence that intelligence is not necessarily correlated 
with processing power. The genius engineer in my example solves a given problem with 
*much less* overall processing than the ordinary engineer, so in this case intelligence 
is correlated with some measure of cognitive efficiency (which I will leave 
undefined). Likewise, a grandmaster chess player looks at a given position and can 
calculate a better move in one second than you or me could come up with if we studied the 
board for an hour. Grandmasters often do publicity events where they play dozens of 
people simultaneously, spending just a few seconds on each board, and winning most of the 
games.

Of course, you were referring to intelligence above a certain level, but if 
that level is high above human intelligence, there isn't much we can assume about that 
since it is by definition unknowable by humans.

Terren

--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote:
  

The relationship between processing power and results is
not
necessarily linear or even positively  correlated. And as
an increase
in intelligence above a certain level requires increased
processing
power (or perhaps not? anyone disagree?).

When the cost of adding more computational power, outweighs
the amount
of money or energy that you acquire from adding the power,
there is
not much point adding the computational power.  Apart from
if you are
in competition with other agents, that can out smart you.
Some of the
traditional views of RSI neglects this and thinks that
increased
intelligence is always a useful thing. It is not very

There is a reason why lots of the planets biomass has
stayed as
bacteria. It does perfectly well like that. It survives.

Too much processing power is a bad thing, it means less for
self-preservation and affecting the world. Balancing them
is a tricky
proposition indeed.

  Will Pearson




  






Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Charles Hixson

Ben Goertzel wrote:


Jim,

I really don't have time for a long debate on the historical 
psychology of scientists...


To give some random examples though: Newton, Leibniz and Gauss were 
certainly obnoxious, egomaniacal pains in the ass though ... Edward 
Teller ... Goethe, whose stubbornness was largely on-the-mark with his 
ideas about morphology, but totally off-the-mark with his theory of 
color ... Babbage, who likely would have succeeded at building his 
difference engine were his personality less thorny ... etc. etc. etc. 
etc. etc. ...


...
ben


...

Galileo, Bruno of Nolan, etc.
OTOH, Paracelsus was quite personable.  So was, reputedly, Pythagoras.  
(No good evidence on Pythagoras, though.  Only stories from 
supporters.)  (Also, consider that the Pythagoreans, possibly including 
Pythagoras, had a guy put to death for discovering that sqrt(2) was 
irrational.  [As with most things from this date, this is more legend 
than fact, but is quite probable.])


As a generality, with many exceptions, strongly opinionated persons are 
not easy to get along with unless you agree with their opinions.  It 
appears to be irrelevant whether their opinions are right, wrong, or 
undecidable.






Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Charles Hixson

Dr. Matthias Heger wrote:


*Ben G wrote*

**
Well, for the purpose of creating the first human-level AGI, it seems 
important **to** wire in humanlike bias about space and time ... this 
will greatly ease the task of teaching the system to use our language 
and communicate with us effectively...


But I agree that not **all** AGIs should have this inbuilt biasing ... 
for instance an AGI hooked directly to quantum microworld sensors 
could become a kind of quantum mind with a totally different 
intuition for the physical world than we have...



Ok. But then I have again a different understanding of the G in AGI. 
The “quantum mind” should be more general than the human level AGI.


But since the human level AGI is difficult enough, we should build it 
first.


After that, for AGI 2.0, I propose the goal to build a quantum mind. ;-)


I feel that an AI with quantum level biases would be less general. It 
would be drastically handicapped when dealing with the middle level, 
which is where most of living is centered. Certainly an AGI should have 
modules which can more or less directly handle quantum events, but I 
would predict that those would not be as heavily used as the ones that 
deal with the mid level. We (usually) use temperature rather than 
molecule speeds for very good reasons.






Re: [agi] COMP = false

2008-10-06 Thread Charles Hixson

Ben Goertzel wrote:



On Sun, Oct 5, 2008 at 7:41 PM, Abram Demski [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Ben,

I have heard the argument for point 2 before, in the book by Pinker,
How the Mind Works. It is the inverse-optics problem: physics can
predict what image will be formed on the retina from material
arrangements, but if we want to go backwards and find the arrangements
from the retinal image, we do not have enough data at all. Pinker
concludes that we do it using cognitive bias.



I understood Pinker's argument, but not Colin Hales's ...

Also, note cognitive bias can be learned rather than inborn (though in 
this case I imagine it's both). 

Probably we would be very bad at seeing environment different from 
those we evolved in, until after we'd gotten a lot of experience in 
them...


ben

I think the initial cognitive biases MUST be built in.

OTOH, clearly early experiences strongly shape the development from the 
built in biases.  (E.g.  experiments on kittens raised in a room with 
only vertical lines, and their later inability to see horizontal lines.)







Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Charles Hixson

Mike Tintner wrote:
Ben:I didn't read that book but I've read dozens of his papers ... 
it's cool stuff but does not convince me that engineering AGI is 
impossible ... however when I debated this with Stu F2F I'd say 
neither of us convinced each other ;-) ...
 
Ben,
 
His argument (like mine), is that AGI is *algorithmically* 
impossible, (Similarly he is arguing only that our *present* 
mechanistic worldview is inadequate). I can't vouch for it, since he 
doesn't explicitly address AGI as distinct from the powers of 
algorithms, but I would be v. surprised if he was arguing that AGI is 
impossible, period (no?). 
 
I would've thought that he would argue something like that just as we 
need a revolutionary new mechanistic worldview, so we need a 
revolutionary approach to AGI, (and not just a few tweaks  :)  ).

I would go both further and not as far.

Math clearly states that to derive all the possible truths from a 
numeric system as strong as number theory requires an infinite number of 
axioms.  I.e., choices.  This is clearly impossible.  To me this implies 
(but does not prove) that there are an infinite number of possible futures 
descending from any precisely defined state.  As such, no AGI will be 
able to solve this problem.  It can't even make probability based choices.
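
For reference, the result I'm leaning on is (roughly) Gödel's first 
incompleteness theorem; stated loosely in LaTeX notation:

\[
T \supseteq \mathrm{PA},\ T \text{ consistent and recursively axiomatizable}
\;\Longrightarrow\;
\exists\, G_T :\; T \nvdash G_T \ \text{and}\ T \nvdash \neg G_T .
\]

In words: no computably listable axiom set settles every arithmetic truth, 
so closing all the gaps would take an endless supply of further independent 
axioms -- that's the sense of "infinite number of axioms" above.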


OTOH, given a few local biases to start with, and reasoning with a 
relatively short headway from current time, Bayesian predictions work 
pretty well, and don't require infinite resources.


It's my further suspicion that we are equipped with sets of domain 
biases, and that at any one time one particular set is dominant.  This 
I see as primarily a simplifying approach, but one which reduces the 
amount of computation needed in any situation, allowing faster 
near-future predictions.
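
A minimal sketch of the sort of cheap, bias-seeded updating I mean 
(Python; the prior, the likelihood numbers, and the toy hypothesis are 
all made up for illustration):

# A built-in domain bias supplies the prior; a short run of recent
# observations updates it.  Near-future prediction this way is cheap --
# no infinite resources required.  All numbers are illustrative.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    num = p_evidence_if_true * prior
    den = num + p_evidence_if_false * (1.0 - prior)
    return num / den

belief = 0.7   # domain bias: "released objects fall" starts out favored
for fell in (True, True, False, True):
    if fell:
        belief = bayes_update(belief, 0.9, 0.2)
    else:
        belief = bayes_update(belief, 0.1, 0.8)
    print(round(belief, 3))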


So what we have is something less than totally general.  Call it an 
A(g)I.  It has a general mode that it can use when it's got plenty of 
time, but that's not what it uses in real-time, and it's never run as a 
dominant mode, only as a moderately high priority task.  And the general 
mode tends to get stuck on insoluble (or just too complex) problems 
until it times out.  Sometimes it saves the state and returns to it 
later, but sometimes a meta-heuristic says Forget about it.  That 
game's not worth the candle.


The problem comes when you take the G in AGI too seriously.  There is no 
existence proof that such a thing can exist in finite space/time/energy 
situations.  But you should be able to get closer to it than people have 
evolved to demonstrate.






Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Charles Hixson

Abram Demski wrote:

Charles,

Again as someone who knows a thing or two about this particular realm...

  

Math clearly states that to derive all the possible truths from a numeric
system as strong as number theory requires an infinite number of axioms.



Yep.

  

 I.e., choices.  This is clearly impossible.  To me this implies (but not
proves) that there are an infinite number of possible futures descending
from any precisely defined state.



Not quite.

An infinite number of axioms may be needed, but there is a right and
wrong here! We cannot choose any axioms we like. Well, we can, but if
we choose the wrong ones we will eventually derive a contradiction.
When we choose the right ones, we can't know that we have... we just
hold our breath and hope that no contradiction arises. :)

--Abram
  
Sorry.  Thinking on it you are correct.  Merely because the math ends up 
consistent doesn't mean that it matches reality.  But we can't know 
until after, quite possibly long after, we choose the axiom.  Which 
furthers the need for built in biases.  (I wish I'd realized your point, 
it would have made my argument stronger.)


OTOH, this is an argument by analogy, so it's not certain anyway.  It 
might be possible to derive a proof, but I sure couldn't do it.






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Charles Hixson
I would go further.  Humans have demonstrated that they cannot be 
trusted in the long term even with the capabilities that we already 
possess.  We are too likely to have ego-centric rulers who make 
decisions not only for their own short-term benefit, but with an 
explicit "After me, the deluge" mentality.  Sometimes they publicly admit 
it.  And history gives examples of rulers who were crazier than any 
leading a major nation-state at this time.


If humans were to remain in control, and technical progress stagnates, 
then I doubt that life on earth would survive the century.  Perhaps it 
would, though.  Microbes can be very hardy.
If humans were to remain in control, and technical progress accelerates, 
then I doubt that life on earth would survive the century.  Not even 
microbes.


I don't, however, say that we shouldn't have figurehead leaders who, 
within constraints, set the goals of the (first generation) AGI.  But 
the constraints would need to be such that humanity would benefit.  This 
is difficult when those nominally in charge not only don't understand 
what's going on, but don't want to.  (I'm not just talking about greed 
and power-hunger here.  That's a small part of the problem.)


For that matter, I consider Eliza to be a quite important feeler from 
the future.  AGI as psychologist is an underrated role, but one that I 
think could be quite important.  And it doesn't require a full AGI 
(though Eliza was clearly below the mark).  If things fall out well, I 
expect that long before full AGIs show up, sympathetic companions will 
arrive.  This is a MUCH simpler problem, and might well help stem the 
rising tide of insanity.   

A next step might be a personal secretary.  This also wouldn't require 
full AGI, though to take maximal advantage of it, it would require a 
body, but a minimal version wouldn't.  A few web-cams for eyes and mics 
for ears, and lots of initial help in dealing with e-mail, separating 
out which bills are legitimate.  Eventually it could, itself, verify 
that bills were legitimate and pay them, illegitimate and discard them, 
or questionable and present them to its human for processing.  It's a 
complex problem, probably much more so than the companion, but quite 
useful, and well short of requiring AGI.
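
A deliberately tiny sketch of that three-way triage (Python; the scoring 
rule, the thresholds, and the field names are hypothetical placeholders, 
not a design):

# Incoming bills get sorted three ways: pay, discard, or present to the human.
# The legitimacy score is a stand-in for whatever model the secretary really
# uses; payees, thresholds, and fields are invented for illustration.

def legitimacy_score(bill):
    score = 0.5
    if bill.get("payee") in {"City Water", "Electric Co"}:   # assumed known payees
        score += 0.4
    if bill.get("amount", 0) > 10_000:                       # implausibly large
        score -= 0.3
    return score

def triage(bill, pay_at=0.8, discard_at=0.2):
    s = legitimacy_score(bill)
    if s >= pay_at:
        return "pay"
    if s <= discard_at:
        return "discard"
    return "ask_human"       # questionable: present it to its human

print(triage({"payee": "City Water", "amount": 60}))        # pay
print(triage({"payee": "Unknown LLC", "amount": 99_999}))   # discard (score 0.2)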


The question is, at what point do these entities start acquiring a 
morality.  I would assert that it should be from the very beginning.  
Even the companion should try to guide its human away from immoral 
acts.  As such, the companion is acting as a quasi-independent agent, 
and is exerting some measure of control.  (More control if it's more 
skillful, or its human is more amenable.)  When one gets to the 
secretary, it's exhibiting (one hopes), honesty and just behavior (e.g., 
not billing for services that it doesn't believe were rendered).


At each step along the way the morality of the agent has implications 
for the destination that will be arrived at, as each succeeding agent is 
built from the basis of its predecessor.   Also note that scaling is 
important, but not determinative.  One can imagine the same entity, in 
different instantiations, being either the secretary to a school teacher 
or to a multi-national corporation.  (Of course the hardware required 
would be different, but the basic activities are, or could be, the 
same.  Specialized training would be required to handle the government 
regulations dealing with large corporations, but it's the same basic 
functions.  If one job is simpler than the other, just have the program 
able to handle either and both of them.)


So.  Unless one expects an overnight transformation (a REALLY hard 
takeoff), AGIs will evolve in the context of humans as directors to 
replace bureaucracies...but with their inherent morality.  As such, as 
they occupy a larger percentage of the bureaucracy, that section will 
become subject to their morality.  People will remain in control, just 
as they are now...and orders that are considered immoral will be ... 
avoided.  Just as bureaucracies do now.  But one hopes that the evolving 
AGIs will have superior moralities.



Ben Goertzel wrote:



Keeping humans in control is neither realistic nor necessarily 
desirable, IMO.


I am interested of course in a beneficial outcome for humans, and also 
for the other minds we create ... but this does not necessarily 
involve us controlling these other minds...


ben g



If humans are to remain in control of AGI, then we have to make
informed, top level decisions. You can call this work if you want.
But if we abdicate all thinking to machines, then where does that
leave us?

-- Matt Mahoney, [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]





Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Charles Hixson
Dawkins tends to see a truth, and then overstate it.  What he says 
isn't usually exactly wrong, so much as one-sided.  This may be an 
exception.


Some meanings of group selection don't appear to map onto reality.  
Others map very weakly.  Some have reasonable explanatory power.  If you 
don't define with precision which meaning you are using, then you invite 
confusion.  As such, it's a term that it's better not to use.


But I wouldn't usually call it a lie.  Merely a mistake.  The exact 
nature of the mistake depends on precisely what you mean, and the context 
within which you are using it.  Often it's merely a signal that you are 
confused and don't KNOW precisely what you are talking about, but merely 
the general ball park within which you believe it lies.  Only rarely is 
it intentionally used to confuse things with malice intended.  In that 
final case the term lie is appropriate.  Otherwise it's merely 
inadvisable usage.


Eric Burton wrote:

I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
  

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between
individuals. It's an emergent dynamic that requires explanation at the
group level. It's a set of culture-wide rules and taboos - how did they
get there?
  

I wasn't explaining ethics with that statement.  I was identifying how
evolution operates in social groups in such a way that I can derive ethics
(in direct response to your question).

Ethics is a system.  The *definition of ethical behavior* for a given group
is an emergent dynamic that requires explanation at the group level
because it includes what the group believes and values -- but ethics (the
system) does not require belief history (except insofar as it affects
current belief).  History, circumstances, and understanding why a culture
has the rules and taboos that it does is certainly useful for deriving
more effective rules and taboos -- but it doesn't alter the underlying
system which is quite simple . . . . being perceived as helpful generally
improves your survival chances, being perceived as harmful generally
decreases your survival chances (unless you are able to overpower the
effect).



Really? I must be out of date too then, since I agree with his explanation

of ethics. I haven't read Hauser yet though, so maybe you're right.
  

The specific phrase you cited was human collectives with certain taboos
make the group as a whole more likely to persist.  The correct term of art
for this is group selection and it has pretty much *NOT* been supported by
scientific evidence and has fallen out of favor.

Matt also tends to conflate a number of ideas which should be separate which
you seem to be doing as well.  There need to be distinctions between ethical
systems, ethical rules, cultural variables, and evaluations of ethical
behavior within a specific cultural context (i.e. the results of the system
given certain rules -- which at the first-level seem to be reasonably
standard -- with certain cultural variables as input).  Hauser's work
identifies some of the common first-level rules and how cultural variables
affect the results of those rules (and the derivation of secondary rules).
It's good detailed, experiment-based stuff rather than the vague hand-waving
that you're getting from armchair philosophers.



I fail to see how your above explanation is anything but an elaboration of

the idea that ethics is due to group selection. The following statements
all support it:
- memes [rational or otherwise] when adopted by a group can enhance group

survival
- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that is
best for society
  

I think we're stumbling over your use of the term group selection  and
what you mean by ethics is due to group selection.  Yes, the group
selects the cultural variables that affect the results of the common
ethical rules.  But group selection as a term of art in evolution
generally meaning that the group itself is being selected or co-evolved --
in this case, presumably by ethics -- which is *NOT* correct by current
scientific understanding.  The first phrase that you quoted was intended to
point out that both good and bad memes can positively affect survival -- so
co-evolution doesn't work.  The second phrase that you quoted deals with the
results of the system applying common ethical rules with cultural variables.
The third phrase that you quoted talks about determining what the best
cultural variables (and maybe secondary rules) are for a given set of
circumstances -- and should have been better phrased as 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Charles Hixson

Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the 
goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach?  To me it seems more 
promising to design the motives, and to allow the AGI to design its own 
goals to satisfy those motives.  This provides less fine grained control 
over the AGI, but I feel that a fine-grained control would be 
counter-productive.


To me the difficulty is designing the motives of the AGI in such a way 
that they will facilitate human life, when they must be implanted in an 
AGI that currently has no concept of an external universe, much less any 
particular classes of inhabitant therein.  The only (partial) solution 
that I've been able to come up with so far (i.e., identify, not design) 
is based around imprinting.  This is fine for the first generation 
(probably, if everything is done properly), but it's not clear that it 
would be fine for the second generation et seq.  For this reason RSI is 
very important.  It allows all succeeding generations to be derived from 
the first by cloning, which would preserve the initial imprints.


Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not 
successful, ... unpleasant because it would result in a different state.
 
-- Matt Mahoney, [EMAIL PROTECTED]


Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.






Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Charles Hixson
Admittedly I don't have any proof, but I don't see any reason to doubt 
my assertions.  There's nothing in them that appears to me to be 
specific to any particular implementation of an (almost) AGI.


OTOH, you didn't define play, so I'm still presuming that you accept the 
definition that I proffered.  But then you also didn't explicitly accept 
it, so I'm not certain.  To quote myself: 

Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.


There's nothing about that statement that appears to me to be specific 
to any particular implementation.  It seems *to me*, and again I 
acknowledge that I have no proof of this, that any (approaching) AGI of 
any construction would necessarily engage in this activity.


P.S.:  I'm being specific about (approaching) AGI as I doubt the 
possibility, and especially the feasibility, of constructing an actual 
AGI, rather than something which merely approaches being an AGI at the 
limit.  I'm less certain about an actual AGI, but I suspect that it, 
also, would need to play for the same reasons.



Brad Paulsen wrote:

Charles,

By now you've probably read my reply to Tintner's reply.  I think that 
probably says it all (and then some!).


What you say holds IFF you are planning on building an airplane that 
flies just like a bird.  In other words, if you are planning on 
building a human-like AGI (that could, say, pass the Turing test).  
My position is, and has been for decades, that attempting to pass the 
Turing test (or win either of the two, one-time-only, Loebner Prizes) 
is a waste of precious time and intellectual resources.


Thought experiments?  No problem.  Discussing ideas?  No problem. 
Human-like AGI?  Big problem.


Cheers,
Brad

Charles Hixson wrote:
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily 
spend a lot of time playing.


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do 
anything BUT play (using my supplied definition), then your response 
has some merit, but even that can be very useful.  Almost all of 
mathematics, e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.)  (Note that, e.g., shrews 
don't have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort 
(unless it's being done purely for research purposes).  It's a 
diversion of intellectual and financial resources that those serious 
about building an AGI any time in this century cannot afford.  I 
firmly believe if we had not set ourselves the goal of developing 
human-style intelligence (embodied or not) fifty years ago, we would 
already have a working, non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those 
who extended his imitation test to humanoid, embodied AI were even 
more wrong.  We *do not need embodiment* to be able to build a 
powerful AGI that can be of immense utility to humanity while also 
surpassing human intelligence in many ways.  To be sure, we want 
that AGI to be empathetic with human intelligence, but we do not 
need to make it equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are 
- it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be 
vastly more flexible than a computer, but if you want to do it all 
on computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?







Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily spend a 
lot of time playing.
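
Read that way, play looks like cheap exploration: sampling strategies in 
a sandbox where failure costs nothing, and keeping track of what tends to 
work.  A rough sketch (Python; the strategies and success rates are 
invented for illustration):

import random

# "Play": try strategies in a sandbox where failure carries no real penalty,
# and keep score of what works.  The environment here is a toy stand-in.

STRATEGIES = ["stack", "knock_over", "sort_by_size", "throw"]
SUCCESS_RATE = {"stack": 0.6, "knock_over": 0.9, "sort_by_size": 0.4, "throw": 0.1}

def sandbox_outcome(strategy):
    # In a real agent this would be the (simulated) world's response.
    return random.random() < SUCCESS_RATE[strategy]

scores = {s: 0 for s in STRATEGIES}
for _ in range(200):                 # lots of cheap trials -- failures are free
    s = random.choice(STRATEGIES)
    if sandbox_outcome(s):
        scores[s] += 1

print(max(scores, key=scores.get))   # the strategy that play ended up favoring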


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do anything 
BUT play (using my supplied definition), then your response has some 
merit, but even that can be very useful.  Almost all of mathematics, 
e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.)  (Note that, e.g., shrews don't 
have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building 
an AGI any time in this century cannot afford.  I firmly believe if we 
had not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more 
wrong.  We *do not need embodiment* to be able to build a powerful AGI 
that can be of immense utility to humanity while also surpassing human 
intelligence in many ways.  To be sure, we want that AGI to be 
empathetic with human intelligence, but we do not need to make it 
equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - 
it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on 
computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?








Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson

Jonathan El-Bizri wrote:



On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:



If an AGI played because it recognized that it would improve its
skills in some domain, then I wouldn't call that play, I'd call it
practice. Those are overlapping but distinct concepts.


The evolution of play is how nature has convinced us to practice 
skills of a general but un-predefinable type. Would it make sense to 
think of practice as the narrow AI version of play?
No.  Because in practice one is honing skills with a definite chosen 
purpose (and usually no instinctive guide), whereas in play one is 
honing skills without the knowledge that one is doing so.  It's very 
different, e.g., to play a game of chess, and to practice playing chess.


Part ...

Jonathan El-Bizri






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-18 Thread Charles Hixson

This is probably quibbling over a definition, but:
Jim Bromer wrote:

On Sat, Aug 9, 2008 at 5:35 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
  

Jim Bromer wrote:


As far as I can tell, the idea of making statistical calculation about
what we don't know is only relevant for three conditions.
The accuracy of the calculation is not significant.
The evaluation is near 1 or 0.
The problem of what is not known is clearly within a generalization
category and a measurement of the uncertainty is also made within a
generalization category valid for the other generalization category.

But we can make choices about things that are not known based on opinion.
  


  

Could you define opinion in an operational manner, i.e. in such a way that
it was specified whether a particular structure in a database satisfied that
or not?  Or a particular logical operation?
Otherwise I am forced to consider opinion as a conflation of probability
estimates and desirability evaluations.  This doesn't seem consistent with
your assertion (i.e., if you intended opinion to be so defined, you wouldn't
have responded in that way), but I have no other meaning for it.




I could define the difference between opinion and general knowledge
with abstract terms but it is extremely difficult to come up with an
operational principle that could be used to reliably detect opinion.
This is true when dealing with human opinion so why wouldn't it be
true when dealing with AI opinion?  Most facts are supported by
opinion and most opinions are supported by some facts, although the
connection may be somewhat difficult to see in some cases.

Your opinion that opinion itself can be defined, 'as a conflation of
probability estimates and desirability evaluations,' avoids the
difficulty of the definition by making it dependent on two concepts
neither of which are necessary and both of which would require some
kind of arbitrary evaluation system for most cases.  Opinion can be
derived without probability or an evaluation of desirability.  And
opinion is not necessarily dependent on some kind of weighted system
of numerical measurement.

But while I cannot provide an operational definition that is
absolutely reliable for all cases, I can begin to discuss it as if it
were still an open question (as opposed to an arbitrary definition).

Opinion will be mixed with facts in almost all cases.  One can only
start to distinguish them by devising standard systems that attempt to
separate and categorize them.  This system is going to be imperfect
just as it is in everyday life.  This idea of creating standard
methods that can be used for general classes of kinds of things is
significant because it is related to the problem of  'grounding'
opinions or theories onto 'observable events'.

My imaginary AI program would use categorical reasoning but it would
also be able to learn.  I would use text-based IO at first.  So in
this sense 'grounding' would have to based on textual interactions.
This kind of grounding would be weaker than the grounding that humans
are capable of, but people are limited too, in their own way.

Since opinion and fact seem to be gnarly and intertwined, I feel that
the use of standard methods to examine the problem are necessary.  Why
'standard methods'?  Because standard methods would be established
only after passing a series of tests to demonstrate the kind of
reliability that would be desirable for the kinds of problems that
they would be applied to. This kind of reliability could be measurable
in some cases, but measurability is not a necessary aspect of
detecting opinion.  And another aspect of developing standard methods
is that by relying on highly reliable components and by narrowing the
variations of individual interpretation, these standard methods could
act as a base for methods of grounding.  Ironically, this helps to
bond individual opinions from human society about what is fact and
what is not, but this process is helpful as long as it is not
totalitarian.

So an opinion that contains some truth, but cannot attain a standard
of reliability based on the use of established standard methods to
examine similar problems would have to continue to be considered as an
opinion.  Of course, a theory might only be considered to be an
opinion after the thinking device is exposed to an alternative theory
that explains some reference data in another way.

This problem is directly related to the greater problem of artificial
judgment: the lack of elementary methods that could act as the
'independent variables' to produce higher AI.  That means that I think
the problem is AI-Complete (to use an interesting phrase that someone
in the group has used).

Jim Bromer
  
Well, one point where we disagree is on whether truth can actually be 
known by anything.  I don't think this is possible.  So to me that which 
is called truth is just something with a VERY high probability, and 
which is also consistent with the mental models that one

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Charles Hixson

Brad Paulsen wrote:

...
Sigh.  Your point of view is heavily biased by the unspoken assumption 
that AGI must be Turing-indistinguishable from humans.  That it must 
be AGHI.  This is not necessarily a bad idea, it's just the wrong idea 
given our (lack of) understanding of general intelligence.  Turing was 
a genius, no doubt about that.  But many geniuses have been wrong.  
Turing was tragically wrong in proposing (and AI researchers/engineers 
terribly naive in accepting) his infamous imitation test, a simple 
test that has, almost single-handedly, kept AGI from becoming a 
reality for over fifty years.  The idea that AGI won't be real AGI 
unless it is embodied is a natural extension of Turing's imitation 
test and, therefore, inherits all of its wrongness.

...
Cheers,

Brad
You have misunderstood what Turing was proposing.  He was claiming that 
if a computer could act in the proposed manner then you would be forced 
to concede that it was intelligent, not the converse.  I have seen no 
indication that he believed that there was any requirement that a 
computer be able to pass the Turing test to be considered intelligent. 






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-09 Thread Charles Hixson

Jim Bromer wrote:

In most situations this is further limited because one CAN'T know all of the
consequences.  So one makes probability calculations weighting things not
only by probability of occurrence, but also by importance.  So different
individuals disagree not only on the definition of best, but also on the
weights given to the potential outcomes.



As far as I can tell, the idea of making statistical calculation about
what we don't know is only relevant for three conditions.
The accuracy of the calculation is not significant.
The evaluation is near 1 or 0.
The problem of what is not known is clearly within a generalization
category and a measurement of the uncertainty is also made within a
generalization category valid for the other generalization category.

  

If free will has any other components, what do you think they are?


But we can make choices about things that are not known based on opinion.
Jim Bromer

...
Could you define opinion in an operational manner, i.e. in such a way 
that it was specified whether a particular structure in a database 
satisfied that or not?  Or a particular logical operation?
Otherwise I am forced to consider opinion as a conflation of probability 
estimates and desirability evaluations.  This doesn't seem consistent 
with your assertion (i.e., if you intended opinion to be so defined, you 
wouldn't have responded in that way), but I have no other meaning for it.






Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-09 Thread Charles Hixson

Brad Paulsen wrote:



Mike Tintner wrote:
That illusion is partly the price of using language, which fragments 
into pieces what is actually a continuous common sense, integrated 
response to the world.


Mike,

Excellent observation.  I've said it many times before: language is 
analog human experience digitized.  And every time I do, people look 
at me funny.


There is an analogy to musical synthesizers that may be instructive.  
Early synthesizers attempted to recreate analog instruments using 
mathematics.  The result sounded sort of like the real thing, but 
any human could tell it was a synthesized sound fairly easily.  Then, 
people started recording instruments and sampling their sounds 
digitally.  Bingo.  I've been a musician all my life, classically 
trained and am both a published songwriter and professional 
guitarist.  With the latest digital synthesizers I have in my home 
studio, it's very difficult for me to answer the question, "Is it real 
or is it digitized?"  Even plucked string instruments, like the guitar, 
really sound like the analog original using the newer synths.


Language is how we record analog human experience in digitized 
format.  We need to concentrate on discovering how that works so we 
can use it as input to produce intelligence that sounds just like 
the real thing on output.  I believe Matt Mahoney has been working 
on developing insights in this area with his work in information 
theory and compression.  Once we crack the code, we will be able to 
build symbolized AGIs that will, in many cases, exceed the 
capabilities of the original because the underlying representation 
will be so much easier to observe and manipulate.


Cheers,

Brad

However language is not standardized in the same manner that musical 
synthesizers are standardized.  So while what you are saying may well be 
true within any one mind, when these same thoughts are shared via 
language, the message immediately becomes much fuzzier (to be 
resharpened when received, but with slightly different centroids of 
meaning).  As a result of this being repeated multiple times, language, 
though essentially digital, has much of the fuzziness of an analog 
signal.  Books and other mass media tend to diminish this effect, however.
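
The resharpening-with-shifted-centroids effect can be caricatured 
numerically: each retelling snaps the meaning to the listener's nearest 
word-centroid, and every listener's centroids sit a little off the 
speaker's.  A sketch with invented numbers (Python):

import random

# Each hop: the received meaning is snapped ("digitized") to the listener's
# nearest centroid, but every listener's centroids are nudged slightly.
# Purely illustrative numbers on a toy one-dimensional meaning scale.

CENTROIDS = [0.0, 0.25, 0.5, 0.75, 1.0]

def nearest(value, centroids):
    return min(centroids, key=lambda c: abs(c - value))

meaning = 0.42                        # the originally intended meaning
for hop in range(5):
    personal = [c + random.uniform(-0.05, 0.05) for c in CENTROIDS]
    meaning = nearest(meaning, personal)
    print(hop, round(meaning, 3))     # watch the meaning wander hop by hop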






Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Charles Hixson

Jim Bromer wrote:

On Thu, Aug 7, 2008 at 3:53 PM, Charles Hixson
[EMAIL PROTECTED] wrote:

  

At this point I think it relevant to bring in an assertion from Larry Niven
(Protector):
Paraphrase:  When you understand all the consequences of an act, then you
don't have free will.  You must choose the best decision.  (For some value
of best.)

If this is correct, then Free Will is either an argument over probabilities,
or over best...which could reasonably be expected to differ from entity to
entity.



That is interesting, I never considered that before.
I think that free-will has to be defined relatively.  So even though
we cannot transcend any way we want to, we still have free-will relative
to the range of possibilities that we do have. And this range is too
great to be comprehended except in the terms of broad generalizations.
So the choices that an future AGI program can make should not be and
cannot be dismissed before hand.
Free will can differ from entity to entity but I do not think a
working definition can be limited to probabilities or over what is
'best'.
Jim Bromer
  
I agree with you, but I think that it likely this is due to the fact 
that one cannot know ALL the consequences of any choice.  If one did, 
then I suspect that free will *would* be limited to *best*.  E.g., the 
best move in a game may not be the move that gives one the highest 
probability of winning the game when one considers, e.g., social 
factors.   Thus by considering wider consequences, the evaluation of 
best changes, but one still chooses a best move, for some definition of 
best.


In most situations this is further limited because one CAN'T know all of 
the consequences.  So one makes probability calculations weighting 
things not only by probability of occurrence, but also by importance.  
So different individuals disagree not only on the definition of best, 
but also on the weights given to the potential outcomes.
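
That weighting scheme amounts to an expected-value calculation over the 
consequences one can foresee.  A bare-bones sketch (Python; the acts, 
outcomes, and numbers are invented, and of course different individuals 
would fill in different numbers):

# Choose the "best" act by weighting each foreseeable outcome by both its
# probability and its importance.  The outcome tables are illustrative only.

ACTS = {
    "aggressive_move": [(0.6, +10), (0.4, -15)],   # (probability, importance)
    "safe_move":       [(0.9, +3),  (0.1, -2)],
}

def weighted_value(outcomes):
    return sum(p * v for p, v in outcomes)

for act, outcomes in ACTS.items():
    print(act, weighted_value(outcomes))
print("chosen:", max(ACTS, key=lambda a: weighted_value(ACTS[a])))
# With these numbers the safer act wins; change the weights and it won't.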


If free will has any other components, what do you think they are?





FWIW: Re: [agi] Groundless reasoning

2008-08-07 Thread Charles Hixson

Brad Paulsen wrote:
... Nope.  Wrong again.  At least you're consistent.  That line 
actually comes from a Cheech and Chong skit (or a movie -- can't 
remember which at the moment) where the guys are trying to get 
information by posing as cops.  At least I think that's the setup.  
When the person they're attempting to question asks to see their 
badges, Cheech replies, Badges?  We don't need no stinking badges!


Having been a young adult in the 1960's and 1970's, I am, of course, a 
long-time Pink Floyd fan.  In fact, one of my Pandora 
(http://www.pandora.com) stations is set up so that I hear something 
by PF at least once a week.




Brad
FWIW:  "We don't need no stinking badges" is from the movie Treasure of 
the Sierra Madre.  Many places have copied it from there.  It could be 
that Pink Floyd and Cheech and Chong both copied it.  (It was also 
in a Farley comic strip.)  *Your* source may be Cheech and Chong.







Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Charles Hixson

Jim Bromer wrote:

...
I mostly agree with your point of view, and I am not actually saying
that your technical statements are wrong.  I am trying to explain that
there is something more to be learned.  The apparent paradox can be
reduced to the never ending deterministic vs free will argument.  I
think the resolution of these two paradoxical problems is a necessary
design principle.
Jim Bromer

  
At this point I think it relevant to bring in an assertion from Larry 
Niven (Protector):
Paraphrase:  When you understand all the consequences of an act, then 
you don't have free will.  You must choose the best decision.  (For some 
value of best.)


If this is correct, then Free Will is either an argument over 
probabilities, or over best...which could reasonably be expected to 
differ from entity to entity.






Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Charles Hixson

Vladimir Nesov wrote:

On Sun, Aug 3, 2008 at 7:47 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

I think Ed's email was a bit harsh, but not as harsh as many of Richard's
(which are frequently full of language like fools, rubbish and so forth
...).

Some of your emails have been pretty harsh in the past too.

I would be willing to enforce a stronger code of politeness on this list if
that is what the membership wants.  I have been told before, in other
contexts, that I tend to be overly tolerant of rude behavior.

Anyone else have an opinion on this?




I don't notice rudeness so much, but content-free posts (and posters
who don't learn) are a problem on this list. Low signal-to-noise
ratio. I'd say you are too tolerant in avoiding moderation, but
moderation is needed for content, not just politeness.

  
Moderation for content is a hard problem.  E.g., different people come 
to different decisions about what is useful content.  Should posts be 
limited to algorithms and sample programs (with minimal explication)?  
Justify your answer.


Basically, there's no mechanical or semi-mechanical way to come to even 
an approximation of limiting by content except either:
1) a person dedicated to screening.  People *do* have opinions as to 
what is useful content, even if they disagree.
2) a closed list.  Only a few people are allowed to post.  Privilege 
revocable easily.  Frequent warnings.


Those both require LOTS of management, and tend to foster rigid attitudes.

I, too, would like to see more substantive posts...but I'm not sure that 
an e-mail list is the place to look.  A website where anyone could start 
a blog about any thesis that they have WRT AGI would seem more 
reasonable.  Something like a highly focused Slashdot, only instead of 
keying off news articles it would key off of papers that were submitted.


But again, that, too, would require lots of human investment, even if I 
feel it *would* be a more productive investment (if you could entice 
people to write the papers).






P.S.; Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Charles Hixson

Charles Hixson wrote:

Vladimir Nesov wrote:

On Sun, Aug 3, 2008 at 7:47 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
I think Ed's email was a bit harsh, but not as harsh as many of 
Richard's
(which are frequently full of language like "fools," "rubbish," and 
so forth

...).

Some of your emails have been pretty harsh in the past too.

I would be willing to enforce a stronger code of politeness on this 
list if

that is what the membership wants.  I have been told before, in other
contexts, that I tend to be overly tolerant of rude behavior.

Anyone else have an opinion on this?




I don't notice rudeness so much, but content-free posts (and posters
who don't learn) are a problem on this list. Low signal-to-noise
ratio. I'd say you are too tolerant in avoiding moderation, but
moderation is needed for content, not just politeness.

  
Moderation for content is a hard problem.  E.g., different people 
come to different decisions about what is useful content.  Should 
posts be limited to algorithms and sample programs (with minimal 
explication)?  Justify your answer.


Basically, there's no mechanical or semi-mechanical way to come to 
even an approximation of limiting by content except either:
1) a person dedicated to screening.  People *do* have opinions as to 
what is useful content, even if they disagree.
2) a closed list.  Only a few people are allowed to post.  Privilege 
revocable easily.  Frequent warnings.


Those both require LOTS of management, and tend to foster rigid 
attitudes.


I, too, would like to see more substantive posts...but I'm not sure 
that an e-mail list is the place to look.  A website where anyone 
could start a blog about any thesis that they have WRT AGI would seem 
more reasonable.  Something like a highly focused Slashdot, only 
instead of keying off news articles it would key off of papers that 
were submitted.


But again, that, too, would require lots of human investment, even if 
I feel it *would* be a more productive investment (if you could entice 
people to write the papers).


A part of what this would facilitate is organization of material by 
subject.  Another part is a place to post idea pieces that are more 
than just e-mails.  This *isn't* a replacement for an e-mail list.  It 
*does* require organization by subject at a higher level.  Probably a 
lattice organization would be best, but also searchable keywords.
Does such a thing exist?  Probably not.  That means that someone would 
need to put in a substantial amount of time both organizing it and 
writing scripts.  And it would need a moderation system (à la 
Slashdot).  So it's a substantial amount of work. 

This means I don't expect it to happen.  Independent fora aren't the 
same thing, though they have a partial overlap.  So do wikis.  (A wiki may 
be closer, but the original article shouldn't be modifiable, only 
commentable upon.)


Note that for this to do the job I'm proposing, it's very important 
that comments be moderated and the moderators be meta-moderated.  These 
articles are meant to be available for a long time, and significant 
comments, amendments, and additions need to be easy to locate.


I'm seeing this as kind of a textbook, kind of an encyclopedia, kind 
of a... well, make up your own mind.  (And, as I said, it's probably 
too much work for this community.  Most of us have other projects.  But 
it would be a great thing at, say, a university.  Might be a reasonable 
project for someone in CS.)






Re: [agi] How do we know we don't know?

2008-07-29 Thread Charles Hixson
On Tuesday 29 July 2008 03:08:55 am Valentina Poletti wrote:
 lol.. well said richard.
 the stimulus simply invokes no significant response and thus our brain
 concludes that we 'don't know'. that's why it takes no effort to realize
 it. agi algorithms should be built in a similar way, rather than searching.

Unhhh... that *IS* a kind of search.  It's a shallowly truncated 
breadth-first search, but it's a search.
Compare that with the word right on the tip of my tongue phenomenon.  
In that case you get enough of a response that you become aware of a search 
going on, and you even know that the result *should* be positive.  You just 
can't find it.
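
To make that concrete, here is a rough sketch (Python) of a shallowly 
truncated breadth-first lookup that reports "don't know" when nothing 
gets activated above threshold within a couple of hops.  The lexicon, 
link weights, depth limit, and threshold are all invented for 
illustration; they aren't a claim about how the brain actually does it.

    # Sketch of "don't know" as a shallowly truncated breadth-first search
    # over an associative memory.  All numbers and entries are made up.
    from collections import deque

    def recall(cue, memory, links, depth_limit=2, threshold=0.5):
        """Spread activation outward from `cue` for at most `depth_limit` hops.

        Returns the best-activated item, or None ("don't know") if nothing
        rises above `threshold` within the shallow search horizon.
        """
        frontier = deque([(cue, 1.0, 0)])
        best_item, best_activation = None, 0.0
        seen = {cue}
        while frontier:
            node, activation, depth = frontier.popleft()
            strength = memory.get(node, 0.0) * activation
            if strength > best_activation:
                best_item, best_activation = node, strength
            if depth < depth_limit:
                for neighbour, weight in links.get(node, []):
                    if neighbour not in seen:
                        seen.add(neighbour)
                        frontier.append((neighbour, activation * weight, depth + 1))
        return best_item if best_activation >= threshold else None

    # "fomlepung" activates nothing familiar, so the search bottoms out quickly:
    memory = {"fork": 0.9, "formal": 0.8}
    links = {"fomlepung": [("formal", 0.1)]}   # weak, accidental association
    print(recall("fomlepung", memory, links))  # -> None, i.e. "I don't know"

The tip-of-the-tongue case would be the intermediate situation: something 
comes back above the noise floor but below the acceptance threshold, so you 
notice the search without getting an answer.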






Re: [agi] How do we know we don't know?

2008-07-29 Thread Charles Hixson
On Tuesday 29 July 2008 04:12:27 pm Brad Paulsen wrote:
 Richard Loosemore wrote:
  Brad Paulsen wrote:
  All,
 
  Here's a question for you:
 
  What does fomlepung mean?
 
  If your immediate (mental) response was "I don't know," it means
  you're not a slang-slinging Norwegian.  But, how did your brain
  produce that feeling of not knowing?  And, how did it produce that
  feeling so fast?
 
  Your brain may have been able to do a massively-parallel search of
  your entire memory and come up empty.  But, if it does this, it's
  subconscious.  No one to whom I've presented the above question has
  reported a conscious feeling of searching before having the
  conscious feeling of not knowing.
 
  It could be that your brain keeps a list of things I don't know.  I
  tend to think this is the case, but it doesn't explain why your brain
  can react so quickly with the feeling of not knowing when it doesn't
  know it doesn't know (e.g., the very first time it encounters the word
  fomlepung).
 
  My intuition tells me the feeling of not knowing when presented with a
  completely novel concept or event is a product of the "Danger, Will
  Robinson!" reptilian part of our brain.  When we don't know we don't
  know something we react with a feeling of not knowing as a survival
  response.  Then, having survived, we put the thing not known at the
  head of our list of things I don't know.  As long as that thing is
  in this list it explains how we can come to the feeling of not knowing
  it so quickly.
 
  Of course, keeping a large list of things I don't know around is
  probably not a good idea.  I suspect such a list will naturally get
  smaller through atrophy.  You will probably never encounter the
  fomlepung question again, so the fact that you don't know what it
  means will become less and less important and eventually it will drop
  off the end of the list.  And...
 
  Another intuition tells me that the list of things I don't know,
  might generate a certain amount of cognitive dissonance the resolution
  of which can only be accomplished by seeking out new information
  (i.e., learning)?  If so, does this mean that such a list in an AGI
  could be an important element of that AGI's desire to learn?  From a
  functional point of view, this could be something as simple as a
  scheduled background task that checks the things I don't know list
  occasionally and, under the right circumstances, pings the AGI with
  a pang of cognitive dissonance from time to time.
 
  So, what say ye?
 
  Isn't this a bit of a no-brainer?  Why would the human brain need to
  keep lists of things it did not know, when it can simply break the word
  down into components, then have mechanisms that watch for the rate at
  which candidate lexical items become activated; when this mechanism
  notices that the rate of activation is well below the usual threshold,
  it is a fairly simple thing for it to announce that the item is not
  known.
 
  Keeping lists of things not known is wildly, outrageously impossible,
  for any system!  Would we really expect that the word
  ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
  owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
  hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
  dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as
  a word that I do not know? :-)
 
  I note that even in the simplest word-recognition neural nets that I
  built and studied in the 1990s, activation of a nonword proceeded in a
  very different way than activation of a word:  it would have been easy
  to build something to trigger a this is a nonword neuron.
 
  Is there some type of AI formalism where nonword recognition would be
  problematic?
 
 
 
  Richard Loosemore

 Richard,

 You seem to have decided my request for comment was about word
 (mis)recognition. It wasn't.  Unfortunately, I included a misleading
 example in my initial post. A couple of list members called me on it
 immediately (I'd expect nothing less from this group -- and this was a
 valid criticism duly noted).  So far, three people have pointed out that a
 query containing an un-common (foreign, slang or both) word is one way to
 quickly generate the feeling of not knowing.  But, it is just that: only
 one way.  Not all feelings of not knowing are produced by linguistic
 analysis of surface features.  In fact, I would guess that the vast
 majority of them are not so generated.  Still, some are and pointing this
 out was a valid contribution (perhaps that example was fortunately bad).

 I don't think my query is a no-brainer to answer (unless you want to make
 it one) and your response, since it contained only another flavor of the
 previous two responses, gives me no reason whatsoever to change my opinion.

 Please take a look at the revised example in this thread.  I don't think it
 has the same problems (as an example) as did the initial example.  In
 

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Charles Hixson
On Monday 28 July 2008 07:04:01 am YKY (Yan King Yin) wrote:
 Here is an example of a problematic inference:

 1.  Mary has cybersex with many different partners
 2.  Cybersex is a kind of sex
 3.  Therefore, Mary has many sex partners
 4.  Having many sex partners -> high chance of getting STDs
 5.  Therefore, Mary has a high chance of STDs

 What's wrong with this argument?  It seems that a general rule is
 involved in step 4, and that rule can be refined with some
 qualifications (i.e., it does not apply to all kinds of sex).  But the
 question is, how can an AGI detect that an exception to a general rule
 has occurred?

 Or, do we need to explicitly state the exceptions to every rule?

 Thanks for any comments!
 YKY

There's nothing wrong with the logical argument.  What's wrong is that you 
are presuming a purely declarative logic approach can work...which it can in 
extremely simple situations, where you can specify all necessary facts.

My belief about this is that the proper solution is to have a model of the 
world, and of how interactions happen in it, separate from the logical 
statements.  The logical statements are then seen as focusing techniques.  
Thus here one would need to model a whole bunch of different features of the 
world.  Cybersex is one of them, sex is another.  The statement "Cybersex is 
a kind of sex" would be seen as making a logical correspondence between two 
very different models.  As such one would expect only a very loose mapping, 
unless one were specifically instructed otherwise.  This puts the conclusion 
at step 3 on very shaky ground.  Consequently the conclusion at step 5 must 
also be considered unreliable.  (It could still be true.)

What logicians call logic doesn't bear much resemblance to what people do in 
their day-to-day lives, and for good reason.  A logician looking at this would 
argue with almost every step of the argument that you have presented, as it's 
quite ill-formed.  (E.g., what does "a kind of" mean?)  A biologist would 
probably deny that cybersex is sex.  So would a pathologist.  So this could 
be seen as an example of "from a false premise one can draw any conclusion."  

N.B.:  You called this a fuzzy logic problem, but you don't seem to have 
specified sufficient details for such logic to operate.  Specifically which 
details are missing varies slightly depending on exactly which version of 
fuzzy logic you are considering, but they all require more information than 
is present.  Still, I don't think that's the basic problem.  The basic 
problem is that your rules require a large amount of unstated knowledge to 
make any sense...and this is pointed up most clearly by the 
statement "Cybersex is a kind of sex."  To properly model that statement 
would require an immense amount of knowledge.  Much, but not all, of it is 
built into humans via bodily experience.  An AI cannot be expected to have 
this substratum of intrinsic knowledge, so it would need to be supplied 
explicitly.
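
To illustrate the point numerically, here is a toy sketch (Python).  Every 
confidence number below is invented: the point is only that if "Cybersex is 
a kind of sex" is treated as a loose mapping between two rather different 
models instead of a crisp subset relation, the derived conclusion comes out 
too weak to assert.

    # Toy sketch: propagate a confidence value through the argument,
    # discounting each step by the strength of the rule it uses.
    # All numbers are invented for illustration.

    def chain(*steps):
        """Multiply step confidences; a chain is no stronger than its links."""
        conf = 1.0
        for _label, strength in steps:
            conf *= strength
        return conf

    conf_step_5 = chain(
        ("Mary has cybersex with many partners",      0.95),  # premise 1
        ("cybersex is a kind of sex (loose mapping)",  0.30),  # premise 2: weak cross-model link
        ("many sex partners -> high chance of STDs",   0.80),  # premise 4: statistical rule
    )
    print(round(conf_step_5, 2))   # ~0.23 -- far too weak to assert conclusion 5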




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Charles Hixson
On Monday 28 July 2008 09:30:08 am YKY (Yan King Yin) wrote:
 On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:
  There's nothing wrong with the logical argument.  What's wrong is that
  you are presuming a purely declarative logic approach can work...which it
  can in extremely simple situations, where you can specify all necessary
  facts.
 
  My belief about this is that the proper solution is to have a model of
  the world, and of how interactions happen in it, separate from the logical
  statements.  The logical statements are then seen as focusing techniques.
  [ ... ]

 The key word here is model.  If you can reason with mental models,
 then of course you can resolve a lot of paradoxes in logic.  This
 boils down to:  how can you represent mental models?  And they seem to
 boil down further to logical statements themselves.  In other words,
 we can use logic to represent rich mental models.

 YKY

This is true, but the logic statements of the model are rather different 
from simple assertions; they are much more like complex statements specifying 
proportional relationships and causal links.  I envision the causal links 
as being statements about the physical layer.  And everything is covered 
with an amount of belief.  The model would also need to include the mechanisms 
believed to be in operation (e.g., "fire is caused by the phlogiston 
escaping from the wood").  And mechanisms would need to be in place for 
correcting and updating the model.  Etc.

Conversational-type logic statements, then, would be seen as serving to 
direct attention to specific portions of this model.  
Deductions from these informal logic statements would need to be 
checked and verified against the model.  Sensory data would carry more 
weight, but there's considerable evidence that even sensory data has a 
hard time overruling a strongly believed model.
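
A minimal sketch of what I mean (Python).  The structures, mechanisms, and 
belief numbers are purely illustrative: each model statement carries the 
mechanism believed to be in operation and an amount of belief, and an 
informal assertion is checked against the model rather than accepted directly.

    # Sketch: model statements with a mechanism and a degree of belief,
    # against which informal assertions are checked.  All values invented.
    from dataclasses import dataclass

    @dataclass
    class ModelStatement:
        claim: str          # what the model asserts about the physical layer
        mechanism: str      # the mechanism believed to be in operation
        belief: float       # current degree of belief, 0..1

    world_model = [
        ModelStatement("wood burns", "phlogiston escapes from the wood", 0.2),
        ModelStatement("wood burns", "rapid oxidation releases heat",    0.9),
    ]

    def check_assertion(assertion, model, accept_at=0.7):
        """Accept a conversational assertion only if a well-believed model
        statement backs it; otherwise flag it for further checking."""
        support = max((s.belief for s in model if s.claim == assertion), default=0.0)
        return "accepted" if support >= accept_at else "needs checking against evidence"

    print(check_assertion("wood burns", world_model))   # accepted
    print(check_assertion("ice burns", world_model))    # needs checking against evidence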




Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread Charles Hixson
On Thursday 03 July 2008 11:14:15 am Vladimir Nesov wrote:
 On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] 
wrote:...
  I know this doesn't have the properties you would look for in a
  friendly AI set to dominate the world. But I think it is similar to
  the way humans work, and will be as chaotic and hard to grok as our
  neural structure. So as likely as humans are to explode intelligently.

 Yes, one can argue that AGI of minimal reliability is sufficient to
 jump-start singularity (it's my current position anyway, Oracle AI),
 but the problem with faulty design is not only that it's not going to
 be Friendly, but that it isn't going to work at all.

The problem here is that proving a theory is often considerably more difficult 
than testing it.  Additionally, there are a large number of conditions 
where almost-optimal techniques can be found relatively easily, but where 
optimal techniques require an infinite number of steps to derive.  In such 
conditions generate-and-test is a better approach, but since you are 
searching a very large state space you can't expect to get very close to 
optimal unless there's a very large region where the surface is smooth enough 
for hill-climbing to work.

So what's needed are criteria for "sufficiently friendly" that are testable.  
Of course, we haven't yet generated the first candidate for generate-and-test, 
but "friendly," like "optimal," may be too high a bar.  "Sufficiently friendly" 
might be a much easier goal...but to know that you've achieved it, you need 
to be able to test for it.
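
As a toy sketch (Python), this is roughly what I mean by generate-and-test 
with a testable "sufficiently friendly" criterion, hill-climbing where the 
surface happens to be smooth.  The design space, the scoring stand-in, and 
the acceptance threshold are all invented; a real criterion would be a 
battery of behavioural tests, not a one-line function.

    # Toy sketch: generate-and-test plus hill-climbing against a testable
    # (invented) "sufficiently friendly" predicate.
    import random

    random.seed(0)

    def random_design():
        return [random.uniform(-1, 1) for _ in range(5)]

    def mutate(design):
        return [x + random.gauss(0, 0.1) for x in design]

    def friendliness_score(design):
        # Stand-in for a battery of behavioural tests; higher is friendlier.
        return -sum(x * x for x in design)

    def sufficiently_friendly(design, threshold=-0.05):
        return friendliness_score(design) >= threshold

    best = random_design()
    for _ in range(5000):
        candidate = mutate(best)                           # generate
        if friendliness_score(candidate) > friendliness_score(best):
            best = candidate                               # test & hill-climb
    print(sufficiently_friendly(best), round(friendliness_score(best), 3))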





Re: [agi] A point of philosophy, rather than engineering

2002-11-12 Thread Charles Hixson
Ben Goertzel wrote:


Hi,

 

Personally, I believe that the most effective AI will have a core
general intelligence, which may be rather primitive, and a huge number of
specialized intelligence modules.  The tricky part of this architecture
is designing the various modules so that they can communicate.  It isn't
clear that this is always reasonable (consider the interfaces between
chess and cooking), but if the problem can be handled in a general
manner (there's that word again!), then one of the intelligences could
be specialized for message passing.  In this model the core general
intelligence will be for use when none of the heuristics fit the
problem.  And its attempts will be watched by another module whose
specialty is generating new heuristics.

Plausible?  I don't really know.  Possibly too complicated to actually
build.  It might need to be evolved from some simpler precursor.
   


It's clear that the human brain does something like what you're suggesting.
Much of the brain is specialized for things like vision, motion control,
linguistic analysis, time perception, etc. etc.  The portion of the human
brain devoted to general abstract thinking is very small.

Novamente is based on an integrative approach sorta like you suggest.  But
it's not quite as rigidly modular as you suggest.   Rather, we think one
needs to

-- create a flexible knowledge representation (KR) useful for representing
all forms of knowledge (declarative, procedural, perceptual, abstract,
linguistic, explicit, implicit, etc. etc.)


This probably won't work.  Thinking of the brain as a model, we have 
something called the synesthetic gearbox, which is used to relate 
information in one modality of sensation to another modality.  This 
is part of the reason I suggested that one of the heuristic 
modules be specialized for message passing (and translation).


-- create a number of specialized mind agents acting on the KR, carrying
out specialized forms of intelligent processes

-- create an appropriate set of integrative mind agents acting on the KR,
oriented toward creating general intelligence based largely on the activity
of the specialized mind agents


Again the term general intelligence.  I would like to suggest that the 
intelligence needed to repair an auto engine is different from that 
needed to solve a calculus problem.  I see the general intelligence as 
being there primarily to handle problems for which no heuristic can be 
found, and would suggest that nearly any even slightly tuned heuristic 
is better than the general intelligence for almost all problems.  E.g., 
if one is repairing an auto engine, one heuristic would be to remember 
the shapes of all the pieces you have seen, and to remember where they 
were when you first saw them.  Just think how that one heuristic would 
assist in reassembling the engine.
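
Just to show how cheap such a heuristic is, a tiny sketch in Python (the 
part names and positions are made up):

    # Sketch of the "remember where every part came from" heuristic.
    disassembly_log = []

    def remove_part(name, position):
        disassembly_log.append((name, position))   # record shape/location as we go

    def reassembly_plan():
        return list(reversed(disassembly_log))     # put parts back in reverse order

    remove_part("valve cover", "top of head, 8 bolts")
    remove_part("rocker arm", "cylinder 1 intake")
    print(reassembly_plan())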



Set up a knowledge base involving all these mind agents... hook it up to
sensors & actuators & give it a basic goal relating to its environment...

Of course, this general framework and 89 cents will get you a McDonald's
Junior Burger.  All the work is in designing and implementing the KR and the
MindAgents!!  That's what we've spent (and are spending) all our time on...


May I suggest that if you get even close to what you are attempting, 
you will have the start of a dandy personal secretary.  With so much 
correspondence coming via e-mail these days, this would create a very 
simplified environment in which the entity would need to operate.  In 
this limited environment you wouldn't need full meanings for most words, 
only categories and valuations.

I have a project that I am aiming at that area, but it is barely 
getting started.

-- Ben


--
-- Charles Hixson
Gnu software that is free,
The best is yet to be.




Re: [agi] A point of philosophy, rather than engineering

2002-11-11 Thread Charles Hixson
The problem with a truly general intelligence is that the search spaces 
are too large.  So one uses specializing heuristics to cut down the 
amount of search space.  This does, however, inevitably remove a piece 
of the generality.  The benefit is that you can answer more 
complicated questions quickly enough to be useful.  I don't see any way 
around this, short of quantum computers, and I'm not sure about them (I 
have a vague suspicion that there will be exponentially increasing 
probabilities of error, which will require hugely increased error-recovery 
systems, etc.).

This doesn't mean that we have currently reached the limits of AGI.  It 
means that whatever those limits are, there will always be heuristically 
tuned intelligences that will be more efficient in most problem domains.

Of course, here I am taking a strict interpretation of general, as in 
General Relativity vs. Special Relativity.  Notice that while Special 
Relativity has many uses, General Relativity is (or at least was until 
quite recently) mainly of theoretical interest.  Be prepared for a 
similar result with General Intelligence vs. Special Intelligence.  (The 
difference here is that Special Intelligence comes in lots of modules 
adapted for lots of special circumstances.)

Personally, I believe that the most effective AI will have a core 
general intelligence, which may be rather primitive, and a huge number of 
specialized intelligence modules.  The tricky part of this architecture 
is designing the various modules so that they can communicate.  It isn't 
clear that this is always reasonable (consider the interfaces between 
chess and cooking), but if the problem can be handled in a general 
manner (there's that word again!), then one of the intelligences could 
be specialized for message passing.  In this model the core general 
intelligence will be for use when none of the heuristics fit the 
problem.  And its attempts will be watched by another module whose 
specialty is generating new heuristics.

Plausible?  I don't really know.  Possibly too complicated to actually 
build.  It might need to be evolved from some simpler precursor.
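
To make the proposal a bit more concrete, here is a minimal sketch 
(Python).  The module names, the fitness scoring, and the fallback 
behavior are all invented for illustration: specialist modules bid on a 
problem, and the primitive general core is used only when no heuristic fits.

    # Sketch of the proposed architecture: specialized heuristic modules
    # plus a primitive general core used when no specialist fits.
    class Module:
        def fitness(self, problem):          # how well does this specialist fit?
            raise NotImplementedError
        def solve(self, problem):
            raise NotImplementedError

    class ChessModule(Module):
        def fitness(self, problem):
            return 1.0 if problem.get("domain") == "chess" else 0.0
        def solve(self, problem):
            return "book opening line"

    class CookingModule(Module):
        def fitness(self, problem):
            return 1.0 if problem.get("domain") == "cooking" else 0.0
        def solve(self, problem):
            return "recipe adjusted for altitude"

    class GeneralCore(Module):
        """Primitive, slow, but always applicable: brute choice over stated options."""
        def fitness(self, problem):
            return 0.1
        def solve(self, problem):
            return max(problem.get("options", ["give up"]), key=len)

    def dispatch(problem, modules, core):
        best = max(modules, key=lambda m: m.fitness(problem))
        if best.fitness(problem) > core.fitness(problem):
            return best.solve(problem)       # a heuristic fits: use it
        return core.solve(problem)           # fallback: general intelligence

    modules = [ChessModule(), CookingModule()]
    core = GeneralCore()
    print(dispatch({"domain": "chess"}, modules, core))
    print(dispatch({"domain": "taxes", "options": ["guess", "read the form"]},
                   modules, core))

A message-passing/translation module would sit between the specialists, but 
even this bare dispatcher shows why the core can stay primitive.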


