[agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
a casual familiarity is combined with a habitual ignorance of the
consequences of a misuse the user can become over-confident or
unwisely dismissive of criticism regardless of how on the mark it
might be.

The most proper use of statistical and probabilistic methods is to
base results on a strong association with the data that they were
derived from.  The problem is that the AI community cannot afford such
a strong connection to the original sources, because they are trying to
emulate the mind in some way, and it is not reasonable to assume that
the mind is capable of storing all the data that it has used to derive
insight.

This is a problem any AI method has to deal with; it is not just a
probability thing.  What is wrong with the AI-probability group
mind-set is that very few of its proponents ever consider the problem
of statistical ambiguity and its obvious consequences.

All AI programmers have to consider the problem.  Most theories about
the mind posit the use of similar experiences to build up theories
about the world (or to derive methods to deal effectively with the
world).  So even though the methods to deal with the data environment
are detached from the original sources of those methods, they can
still be reconnected by the examination of similar experiences that
may subsequently occur.

But still it is important to be able to recognize the significance and
necessity of doing this from time to time.  It is important to be able
to reevaluate parts of your theories about things.  We are not just
making little modifications to our internal theories about things
when we react to ongoing events; we must also be making some sort of
reevaluation of our insights about the kind of thing that we are
dealing with.

I realize now that most people in these groups probably do not
understand where I am coming from because their idea of AI programming
is based on a model of programming that is flat.  You have the program
at one level and the possible reactions to the data that is input as
the values of the program variables are carefully constrained by that
level.  You can imagine a more complex model of programming by
appreciating the possibility that the program can react to IO data by
rearranging subprograms to make new kinds of programs.  Although a
subtle argument can be made that any program that conditionally reacts
to input data is rearranging the execution of its subprograms, the
explicit recognition by the programmer that this is a useful tool in
advanced programming is probably highly correlated with its more
effective use.  (I mean of course it is highly correlated with its
effective use!)  I believe that casually constructed learning methods
(and decision processes) can lead to even more uncontrollable results
when used with this self-programming aspect of advanced AI programs.

The consequence, then, of failing to recognize this is that mushed-up
decision processes that are never compared against the data (or kinds of
situations) that they were derived from will lead to the inevitable
emergence of inherently illogical decision processes that will mush up
an AI system long before it gets any traction.

Jim Bromer




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Abram Demski
Jim,

There is a large body of literature on avoiding overfitting, i.e.,
finding patterns that work for more than just the data at hand. Of
course, the ultimate conclusion is that you can never be 100% sure;
but some interesting safeguards have been cooked up anyway, which help
in practice.
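
To make that concrete with a toy sketch (the data-generating rule, the
feature layout, and all numbers below are invented for illustration, not
drawn from any particular system): a rule that merely memorizes its
training sample looks perfect on that sample but collapses on fresh data,
which is exactly the failure these safeguards are meant to catch.

import random

random.seed(0)

def sample(n):
    # The label follows the first feature 90% of the time; the second
    # feature is a high-cardinality distractor.
    data = []
    for _ in range(n):
        x1, x2 = random.randint(0, 1), random.randint(0, 10**6)
        y = x1 if random.random() < 0.9 else 1 - x1
        data.append(((x1, x2), y))
    return data

train, fresh = sample(50), sample(50)

memorizer = {x: y for x, y in train}    # a "pattern" fit only to the data at hand
simple_rule = lambda x: x[0]            # hypothesis: the label follows the first feature

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(accuracy(lambda x: memorizer.get(x, 0), train),   # essentially perfect on its own data
      accuracy(lambda x: memorizer.get(x, 0), fresh))   # roughly chance on new data
print(accuracy(simple_rule, train), accuracy(simple_rule, fresh))  # generalizes either way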

My point is, the following paragraph is unfounded:

 This is a problem any AI method has to deal with; it is not just a
 probability thing.  What is wrong with the AI-probability group
 mind-set is that very few of its proponents ever consider the problem
 of statistical ambiguity and its obvious consequences.

The AI-probability group definitely considers such problems.

--Abram






Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Steve Richfield
Jim,

YES - and I think I have another piece of your puzzle to consider...

A longtime friend of mine, Dave, went on to become a PhD psychologist, who
subsequently took me on as a sort of project - to figure out why most
people who met me then either greatly valued my friendship or, quite the
opposite, would probably kill me if they had the safe opportunity. After
much discussion, interviewing people in both camps, etc., he came up with
what appears to be a key to decision making in general...

It appears that people pigeonhole other people, concepts, situations,
etc., into a very finite number of pigeonholes - probably just tens of
pigeonholes for other people. Along with the pigeonhole, they keep
amendments, like "Steve is like Joe, but with ..."

Then, there is the pigeonhole labeled "other" that all the mavericks are
thrown into. Not being at all like anyone else that most people have ever
met, I was invariably filed into the "other" pigeonhole, along with
Einstein, Ted Bundy, Jack the Ripper, Stephen Hawking, etc.

People are safe to the extent that they are predictable, and people in the
"other" pigeonhole got that way because they appear NOT to be predictable,
e.g. because of their worldview, etc. Now, does the potential value of the
alternative worldview outweigh the potential danger of perceived
unpredictability? The answer to this question apparently drove my own
personal classification in other people's minds.

Dave's goal was to devise a way to stop making enemies, but unfortunately,
this model of how people got that way suggested no potential solution.
People who keep themselves safe from others having radically different
worldviews are truly in a mental prison of their own making, and there is no
way that someone whom they distrust could ever release them from that
prison.

I suspect that recognition, decision making, and all sorts of intelligent
processes may be proceeding in much the same way. There may be no
grandmother neuron/pigeonhole, but rather a "kindly old person" with an
amendment that she is related. If on the other hand your other grandmother
flogged you as a child, the filing might be quite different.

Any thoughts?

Steve Richfield


Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
Hi.  I will just make a quick response to this message and then I want
to think about the other messages before I reply.

A few weeks ago I decided that I would write a criticism of
ai-probability to post to this group.  I wasn't able to remember all of
my criticisms so I decided to post a few preliminary sketches to
another group.  I wasn't too concerned about how they responded, and
in fact I thought they would just ignore me.  The first response I got
was from an irate guy who was quite unpleasant and then finished by
declaring that I slandered the entire ai-probability community!  He
had some reasonable criticisms about this but I considered the issue
tangential to the central issue I wanted to discuss. I would have
responded to his more reasonable criticisms if they hadn't been
embedded in his enraged rant.  I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,
so I wanted to try the same message on this group to see if anyone who
was more mature would focus on this same issue.

Abram made a measured response but his focus was on the
over-generalization.  As I said, this was just a preliminary sketch of
a message that I intended to post to this group after I had worked on
it.

Your point is taken.  Norvig seems to say that overfitting is a
general problem.  The method given to study the problem is
probabilistic, but it is based on the premise that the original data is
substantially intact.  But Norvig goes on to mention that, with pruning,
noise can be tolerated.  If you read my message again you may see that
my central issue was not really centered on the issue of whether
anyone in the ai-probability community was aware of the nature of the
science of statistics, but whether or not probability can be used as
the fundamental basis to create AGI given the complexities of the
problem.  So while your example of overfitting certainly does deflate
my statements that no one in the ai-probability community gets this
stuff, it does not actually address the central issue that I was
thinking of.

I am not sure if Norvig's application of a probabilistic method to
detect overfitting is truly directed toward the AGI community.  In
other words: Has anyone in this group tested the utility and clarity
of the decision making of a fully automated system to detect
overfitting in a range of complex IO data fields that one might expect
to encounter in AGI?

Jim Bromer




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ben Goertzel
Well, if you're willing to take the step of asking questions about the
world that are framed in terms of probabilities and probability
distributions ... then modern probability and statistics tell you a
lot about overfitting and how to avoid it...

OTOH if, like Pei Wang, you think it's misguided to ask questions
posed in a probabilistic framework, then that theory will not be
directly relevant to you...

To me the big weaknesses of modern probability theory lie in
**hypothesis generation** and **inference**.  Testing a hypothesis
against data, to see if it's overfit to that data, is handled well by
cross-validation and related methods.
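
As a sketch of the cross-validation idea (the data, the polynomial model
family, and the fold count below are made-up assumptions, not anyone's
actual system): split the data into folds, fit on all but one fold, score
on the held-out fold, and average; an overfit hypothesis wins on the
training folds but loses on the held-out ones.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # noisy toy data

def cv_error(degree, k=5):
    # k-fold cross-validation error for a polynomial of the given degree.
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)   # fit on training folds only
        errors.append(np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2))
    return float(np.mean(errors))                         # average held-out error

for degree in (1, 3, 9, 15):
    print(degree, round(cv_error(degree), 3))   # lowest cross-validated error is preferred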

But the problem of: given a number of hypotheses with support from a
dataset, generating other interesting hypotheses that will also have
support from the dataset ... that is where traditional probabilistic
methods (though not IMO the foundational ideas of probability) fall
short, providing only unscalable or oversimplified solutions...

-- Ben G


Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Matt Mahoney
--- On Sat, 11/29/08, Jim Bromer [EMAIL PROTECTED] wrote:

 I am not sure if Norvig's application of a probabilistic method to
 detect overfitting is truly directed toward the AGI community.  In
 other words: Has anyone in this group tested the utility and clarity
 of the decision making of a fully automated system to detect
 overfitting in a range of complex IO data fields that one might expect
 to encounter in AGI?

The general problem of detecting overfitting is not computable. The principle 
according to Occam's Razor, formalized and proven by Hutter's AIXI model, is to 
choose the shortest program (simplest hypothesis) that generates the data. 
Overfitting is the case of choosing a program that is too large.
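
A crude, computable stand-in for that principle, with made-up toy data (a
BIC-style two-part score, not AIXI itself, which as noted is not
computable): score each hypothesis by its misfit plus a penalty for its
size, and prefer the smallest total.

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # the data to be "generated"

def two_part_score(degree):
    k = degree + 1                                    # parameter count ~ "program length"
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((y - np.polyval(coeffs, x)) ** 2)   # how well the hypothesis reproduces the data
    return x.size * np.log(mse) + k * np.log(x.size)  # misfit term + complexity penalty

for degree in (1, 3, 9, 15):
    print(degree, round(two_part_score(degree), 1))   # too-large "programs" lose on the penalty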

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Charles Hixson

A response to:

"I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message..."

My theory is that thoughts are generated internally and forced into words via a 
babble generator.  Then the thoughts are filtered through a screen to remove 
any that don't match one's intent, that don't make sense, etc.  The value 
assigned to each expression is initially dependent on how well it expresses 
one's emotional tenor.

Therefore I would guess that all of the verbalizations that the individual 
generated which passed the first screen were hostile in nature.  From the 
remaining sample he filtered out those which didn't generate sensible-to-him 
scenarios when fed back into his world model.  This left him with a much 
reduced selection of phrases to choose from when composing his response.

In my model this happens a phrase at a time rather than a sentence at a time.  
And there is also a probabilistic element where each word has a certain 
probability of being followed by divers other words.  I often don't want to 
express the most likely probability, as by choosing a less frequently chosen 
alternative I (believe I) create the impression of a more studied, i.e. 
thoughtful, response.  But if one wishes to convey a more dynamic style then 
one would choose a more likely follower.

Note that in this scenario phrases are generated both randomly and in parallel. 
 Then they are selected for fitness for expression by passing through various 
filters.
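
A minimal sketch of that generate-and-filter picture (the corpus, the
bigram table, and the one-word tenor filter are all invented stand-ins,
not a model of anyone's actual speech): babble candidate phrases from
word-follower statistics, then keep only the ones that pass the filter.

import random

random.seed(0)

corpus = ("your argument is interesting but the data do not support it "
          "your argument is nonsense and the data do not matter").split()

# Bigram table: which words have followed each word, with repeats kept so
# that more frequent followers are chosen more often.
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

def babble(start, length=8):
    words, w = [start], start
    for _ in range(length - 1):
        w = random.choice(followers.get(w, corpus))   # probabilistic next-word choice
        words.append(w)
    return " ".join(words)

BLOCKED = {"nonsense"}                                # crude stand-in for the emotional-tenor screen

candidates = [babble("your") for _ in range(20)]      # generate many phrases "in parallel"
kept = [c for c in candidates if not BLOCKED & set(c.split())]
print(kept[:3])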

Reasonable?



Re: [agi] who is going to build the wittgenstein-ian AI filter to spot all the intellectual nonsense

2008-11-29 Thread Charles Hixson
A general approach to this that frequently works is to examine the 
definitions that you are using for ambiguity.  Then to look for 
operational tests.  If the only clear meanings lack operational tests, 
then it's probably worthless to waste computing resources on the problem 
until those problems have been cleared up.  If the level of ambiguity is 
too high (judgment call) then the first order of business is to ensure 
that you are talking about the same thing.  If you can't do that, then 
it's probably a waste of time to compute intensively about it.


Note that this works, because different people draw their boundaries in 
different places, so different people spend time on different 
questions.  It results in an approximately reasonable allocation of 
effort, which changes as knowledge accumulates.  If everyone drew the 
bounds in the same place, then it would be a lamentably narrow area 
being explored intensively, with lots of double coverage.  (There's 
already lots of double coverage.  Patents for the telephone, I believe 
it was, were filed by two people within the same week.  Or look at the 
history of the airplane.  But there's a lot LESS double coverage than if 
everyone drew the boundary in the same place.)


As for "What is consciousness?"... DEFINE YOUR TERMS.  If you define how 
you recognize consciousness, then I can have a chance of answering your 
question; otherwise you can reject any answer I give with "But that's 
not what I meant!"


Ditto for time.  Or I could slip levels and tell you that it's a word 
with four letters (etc.).


Also, many people are working intensively on the nature of time.  They 
know in detail what they mean (not that they all necessarily mean the 
same thing).  To say that they are wasting their time because questions 
about the nature of time are silly is, itself, silly.  Your question 
about the nature of time may be silly, but that's because you don't have 
a good definition with operational tests.  That says nothing about what 
the exact same words may mean when someone else says them. (E.g. [off 
the top of my head], "time is a locally monotonically increasing measure 
of state changes within the local environment" is a plausible definition 
of time.  It has some redeeming features.  It, however, doesn't admit of 
a test of why it exists.  That would need to be posed within the context 
of a larger theory which implied operational tests.)


There are linguistic tricks.  E.g., when "It's raining," who or what is 
raining?  But generally they are relatively trivial...unless you accept 
language as being an accurate model of the universe.  Or consider "Who 
is the master who makes the grass green?"  That's not a meaningless 
question in the proper context.  It's an elementary problem for the 
student. 


(don't peek)


(do you know the answer?)


It's intended to cause the student to realize that things do not have 
inherent properties; rather, such properties are caused by sensations 
interpreted by the human brain.  But other reasonable answers might be "the 
gardener, who waters and fertilizes it" or perhaps "a particular molecule that 
resonates in such a manner that the primary light that re-radiates from 
grass is in that part of the spectrum that we have labeled green."  And 
I'm certain that there are other valid answers.  (I have a non-standard 
answer to "the sound of one hand clapping", as I can, indeed, clap with 
one hand...fingers against the palm.  I think it takes large hands.)


If one writes off as senseless questions that don't make sense to one, 
well...what is the square root of -1?  The very name "imaginary" tells 
you how unreasonable most mathematicians thought that question was.  But it 
turned out to be rather valuable.  And it worked because someone made a 
series of operational tests and showed that it would work.  Up until 
then the very definition of square root prohibited using negative 
numbers.  So they agreed to change the definition.


I don't think that you can rule out any question as nonsensical provided 
that there are operational tests and unambiguous definitions.  And if 
there aren't, then you can make some.  It may not answer the question 
that you couldn't define...but if you can't sensibly ask the question, 
then it isn't much of a question (no matter HOW important it feels).



Tudor Boloni wrote:
I agree that there are many better questions to elucidate the 
tricks/pitfalls of language.  But let's list the biggest time wasters 
first, and the post showed some real time wasters from various fields 
that I found valuable to be aware of


It implies it is pointless to ask what the essence of time is, but
then proceeds to give an explanation of time that is not
pointless, and may shed light on its meaning, which is perhaps as
much of an essence as time has..

I think the post tries to show that the error is that treating time 
like an object of reality with an essence is nonsensical and a waste 
of time ;)  it seems wonderful to have an AGI 

Re: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-29 Thread Charles Hixson

Well.
The speed of light limitation seems rather secure. So I would propose 
that we have been visited by roboticized probes, rather than by 
naturally evolved creatures. And the energetic constraints make it seem 
likely that they were extremely small and infrequent...though I suppose 
that they could build larger probes locally.


My guess is that UFOs are just that. Unidentified. I suspect that many 
of them aren't even objects in any normal sense of the word. Temporary 
plasmas, etc. And others are more or less orthodox flying vehicles seen 
under unusual conditions. (I remember once being convinced that I'd seen 
one, but extended observation revealed that it was an advertising blimp 
seen with the sun behind it, and it was partially transparent. Quite 
impressive, and not at all blimp like. It even seemed to be moving 
rapidly, but that was due to the sunlight passing through an interior 
membrane that was changing in size and shape.)


It would require rather impressive evidence before I would believe in 
actual visitations by naturally evolved entities. (Though the concept of 
MacroLife does provide one reasonable scenario.) Still... I would 
consider it more plausible to assert that we lived in a virtual world 
scenario, and were being monitored within it.


In any case, I see no operational tests, and thus I don't see any cause 
for using those possibilities to alter our activities.



Ed Porter wrote:


Since there have been multiple discussions of aliens lately on this 
list, I think I should communicate a thought that I have had 
concerning them that I have not heard anyone else say --- although I 
would be very surprised if others have not thought it --- and it does 
relate to AGI --- so it is “on list.”




As we learn just how common exoplanets are, the possibility that 
aliens have visited earth seems increasingly scientifically 
believable, even for a relatively rationalist person like myself. 
There have, in fact, been many reports of UFOs from sources that 
are hard to reject out of hand. An astronaut whom NASA respected 
enough to send to the moon has publicly stated he has attended 
government briefings in which he was told there is substantial 
evidence aliens have repeatedly visited earth. Within the last year 
Drudge had a report from a Chicago TV station that said sources at the 
tower of O'Hare airport claimed multiple airline pilots reported to 
them seeing a large flying-saucer-shaped object hovering over one of 
the buildings of the airport and then disappearing.


Now, I am not saying these reports are necessarily true, but I am 
saying that --- (a) given how rapidly life evolved on earth, as soon 
as it cooled enough that there were large pools of water; (b) there 
are probably at least a million habitable planets in the Milky Way (a 
conservative estimate); and (c) if one assumes one in 1000 such 
planets will have life evolve to AGI super-intelligence --- the 
chances there are planets with AGI super-intelligence within several 
thousand light years of earth are very good. And since, at least, 
mechanical AGIs with super intelligence and the resulting levels of 
technology should be able to travel through space at one tenth to one 
thousandth the speed of light for many tens of thousands of years, it 
is not at all unlikely life and/or machine forms from such planets 
have had time to reach us --- and perhaps --- not only to reach us --- 
but also to report back to their home planet and recruit many more of 
their kind to visit us.


This becomes even more likely if one considers that some predict the 
Milky Way actually had its peak number of habitable planets billions 
of years ago, meaning that on many planets evolution of intelligent 
life is millions, or billions, of years ahead of ours, and thus that 
life/machine forms on many of the planets capable of supporting 
intelligent life are millions of years beyond their singularities. 
This would mean their development of extremely powerful 
super-intelligence and the attendant developments in technologies we 
know of --- such as nanofabrication, controlled fusion reactions, and 
quantum computing and engineering --- and technologies we do not yet 
even know of --- would be way beyond our imagining.


All of the above is nothing new, among those who are open minded about 
(a) the evidence about the commonness of exoplanets; (b) the fact that 
there are enough accounts of UFO's from reputable sources that such 
accounts cannot be dismissed out of hand as false, and (c) what the 
singularity and the development of super-intelligence would mean to a 
civilization.




But what I am suggesting that I have never heard before is that it is 
possible the aliens, if they actually have been visiting us repeatedly, 
are watching us to see when mankind achieves super-intelligence, 
because only then do we presumably have a chance of becoming their equal.


Perhaps this means that only then we can understand them. Or 

Re: [agi] Re: JAGI submission

2008-11-29 Thread Charles Hixson

Matt Mahoney wrote:

--- On Tue, 11/25/08, Eliezer Yudkowsky [EMAIL PROTECTED] wrote:

  

Shane Legg, I don't mean to be harsh, but your attempt to link
Kolmogorov complexity to intelligence is causing brain damage among
impressionable youths.

( Link debunked here:
  http://www.overcomingbias.com/2008/11/complexity-and.html
)



Perhaps this is the wrong argument to support my intuition that knowing more 
makes you smarter, as in greater expected utility over a given time period. How 
do we explain that humans are smarter than calculators, and calculators are 
smarter than rocks?

...

-- Matt Mahoney, [EMAIL PROTECTED]
  
Each particular instantiation of computing has a certain maximal 
intelligence that it can express (noting that intelligence is 
ill-defined).  More capacious stores can store more information.  Faster 
processors can process information more quickly.


However, information is not, in and of itself, intelligence.  
Information is the database on which intelligence operates.  Information 
isn't a measure of intelligence, and intelligence isn't a measure of 
information.  We have decent definitions of information.  We lack 
anything corresponding for intelligence.  It's certainly not complexity, 
though intelligence appears to require a certain amount of complexity.  
And it's not a relationship between information and complexity.


I still suspect that intelligence will turn out to relate to what we think 
of as intelligence rather as a symptom relates to a syndrome.  (N.B., not as 
a symptom relates to a disease!)  That INTELLIGENCE will turn out to be 
composed of many, many small tricks that enable one to solve a 
certain class of problems quickly...or even at all.  But that the tricks 
will have no necessary relationship to each other.  One will be 
something like alpha-beta pruning and another will be hill-climbing and 
another quick-sort, and another...and another will be a heuristic for 
classifying a problem as to what tools might help solve it...and another.
As such, I don't think that any AGI can exist.  Something more general 
than people, and certainly something that thinks more quickly than 
people and something that knows more than any person can...but not a 
truly general AI.


E.g., where would you put a map colorer for 4-color maps?  Certainly an 
AGI should be able to do it, but would you really expect it to do it 
more readily (compared to the speed of its other processes) than people 
can?  If it could, would that really bump your estimate of its 
intelligence that much?  And yet there are probably an indefinitely 
large number of such problems.  And from what we currently know, it's 
quite likely that each one would either need n^k or better steps to 
solve, or a specialized algorithm.  Or both.
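
For what it's worth, here is a sketch of that kind of specialized tool: a
brute-force backtracking four-colorer for a small made-up adjacency map.
It is exponential in the worst case, so it illustrates the "specialized
algorithm or expensive search" point rather than refuting it.

def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    # Backtracking search: assign regions one at a time, undoing any choice
    # that leaves a later region impossible to color.
    regions = list(adjacency)
    assignment = {}

    def consistent(region, color):
        return all(assignment.get(n) != color for n in adjacency[region])

    def solve(i):
        if i == len(regions):
            return True
        region = regions[i]
        for color in colors:
            if consistent(region, color):
                assignment[region] = color
                if solve(i + 1):
                    return True
                del assignment[region]        # backtrack
        return False

    return assignment if solve(0) else None

# Hypothetical five-region map (symmetric adjacency).
example = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "D", "E"},
           "D": {"B", "C", "E"}, "E": {"C", "D"}}
print(four_color(example))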






Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
In response to my message, where I said,
"What is wrong with the AI-probability group mind-set is that very few
of its proponents ever consider the problem of statistical ambiguity
and its obvious consequences,"
Abram noted,
"The AI-probability group definitely considers such problems.
There is a large body of literature on avoiding overfitting, i.e.,
finding patterns that work for more than just the data at hand."

Suppose I responded with a remark like,
"6341/6344 wrong Abram..."

A remark like this would be absurd because it lacks reference,
explanation and validity while also presenting a comically false
numerical precision for its otherwise inherent meaninglessness.

Where does the ratio 6341/6344 come from?  I did a search in ListBox
of all references to the word "overfitting" made in 2008 and found
that out of 6344 messages only 3 actually involved the discussion of
the word before Abram mentioned it today.  (I don't know how good
ListBox is for this sort of thing).

So what is wrong with my conclusion that Abram was 6341/6344 wrong?
Lots of things and they can all be described using declarative
statements.

First of all the idea that the conversations in this newsgroup
represent an adequate sampling of all ai-probability enthusiasts is
totally ridiculous.  Secondly, Abram's mention of overfitting was just
one example of how the general ai-probability community is aware of
the problem that I mentioned.  So while my statistical finding may be
tangentially relevant to the discussion, the presumption that it can
serve as a numerical evaluation of Abram's 'wrongness' in his response
is so absurd that it does not merit serious consideration.  My
skepticism then concerns the question of just how a fully
automated AGI program that relied fully on probability methods would be able
to avoid getting sucked into the vortex of such absurd mushy reasoning
if it wasn't also able to analyze the declarative inferences of its
application of statistical methods?

I believe that an AI program that is to be capable of advanced AGI has
to be capable of declarative assessment to work with any other
mathematical methods of reasoning it is programmed with.

The ability to reason about declarative knowledge does not necessarily
have to be done in text or something like that.  That is not what I
mean.  What I really mean is that an effective AI program is going to
have to be capable of some kind of referential analysis of events in
the IO data environment using methods other than probability.  But if
it is to attain higher intellectual functions it has to be done in a
creative and imaginative way.

Just as human statisticians have to be able to express and analyze the
application of their statistical methods using declarative statements
that refer to the data subject fields and the methods used, an AI
program that is designed to utilize automated probability reasoning to
attain greater general success is going to have to be able to express
and analyze its statistical assessments in terms of some kind of
declarative methods as well.

Jim Bromer




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
On Sat, Nov 29, 2008 at 1:51 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 To me the big weaknesses of modern probability theory lie  in
 **hypothesis generation** and **inference**.   Testing a hypothesis
 against data, to see if it's overfit to that data, is handled well by
 crossvalidation and related methods.

 But the problem of: given a number of hypotheses with support from a
 dataset, generating other interesting hypotheses that will also have
 support from the dataset ... that is where traditional probabilistic
 methods (though not IMO the foundational ideas of probability) fall
 short, providing only unscalable or oversimplified solutions...

 -- Ben G

Could you give me a little more detail about your thoughts on this?
Do you think the problem of increasing uncomputableness of complicated
complexity is the common thread found in all of the interesting,
useful but unscalable methods of AI?
Jim Bromer




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ben Goertzel
Whether an AI needs to explicitly manipulate declarative statements is
a deep question ... it may be that other dynamics that are in some
contexts implicitly equivalent to this sort of manipulation will
suffice

But anyway, there is no contradiction between manipulating explicit
declarative statements and using probability theory.

Some of my colleagues and I spent a bunch of time during the last few
years figuring out nice ways to combine probability theory and formal
logic.  In fact there are Progic workshops every year exploring
these sorts of themes.
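
As a toy flavor of what combining the two can look like (a sketch under an
explicit conditional-independence assumption, with invented numbers; it is
not a statement of PLN's or any progic system's actual rules): chain two
uncertain implications A->B and B->C into an estimate for A->C.

def deduce(p_b_given_a, p_c_given_b, p_b, p_c):
    # Estimate P(C|A) from P(B|A), P(C|B), P(B), P(C), assuming C is
    # independent of A once we know whether B holds, and that P(B) < 1.
    p_c_given_not_b = (p_c - p_c_given_b * p_b) / (1.0 - p_b)
    p_c_given_not_b = min(max(p_c_given_not_b, 0.0), 1.0)   # clamp numerical slack
    return p_c_given_b * p_b_given_a + p_c_given_not_b * (1.0 - p_b_given_a)

# Invented strengths for the two premises and the marginals.
print(deduce(p_b_given_a=0.9, p_c_given_b=0.8, p_b=0.3, p_c=0.35))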

So, while the mainstream of probability-focused AI theorists aren't
doing hard-core probabilistic logic, some researchers certainly are...

I've been displeased with the wimpiness of the progic subfield, and
its lack of contribution to areas like inference with nested
quantifiers, and intensional inference ... and I've tried to remedy
these shortcomings with PLN (Probabilistic Logic Networks) ...

So, I think it's correct to criticize the mainstream of
probability-focused AI theorists for not doing AGI ;-) ... but I don't
think they're overlooking basic issues like overfitting and such ... I
think they're just focusing on relatively easy problems where (unlike
if you want to do explicitly probability-theory-based AGI) you don't
need to merge probability theory with complex logical constructs...

ben





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

I intend to live forever, or die trying.
-- 

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ben Goertzel
 Could you give me a little more detail about your thoughts on this?
 Do you think the problem of increasing uncomputableness of complicated
 complexity is the common thread found in all of the interesting,
 useful but unscalable methods of AI?
 Jim Bromer

Well, I think that dealing with combinatorial explosions is, in
general, the great unsolved problem of AI. I think the OpenCog Prime
design can solve it, but this isn't proved yet...

Even relatively unambitious AI methods tend to get dumbed down further
when you try to scale them up, due to combinatorial explosion issues.
For instance, Bayes nets aren't that clever to begin with ... they
don't do that much ... but to make them scalable, one has to make them
even more limited and basically ignore combinational causes and just
look at causes between one isolated event-class and another...
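
A small sketch of that point about combinational causes (the XOR-style
data-generating rule below is a made-up toy, not a claim about any
particular Bayes net implementation): each cause looks uninformative when
scored against the effect in isolation, while the pair of causes together
determines the effect completely.

import random

random.seed(0)
samples = []
for _ in range(10000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    effect = a ^ b                        # the effect depends only on the combination
    samples.append((a, b, effect))

def prob(condition):
    hits = [s for s in samples if condition(s)]
    return len(hits) / len(samples)

p_e = prob(lambda s: s[2] == 1)
p_e_given_a = prob(lambda s: s[0] == 1 and s[2] == 1) / prob(lambda s: s[0] == 1)
p_e_given_ab = (prob(lambda s: s[0] == 1 and s[1] == 1 and s[2] == 1)
                / prob(lambda s: s[0] == 1 and s[1] == 1))

print(round(p_e, 2), round(p_e_given_a, 2), round(p_e_given_ab, 2))
# ~0.5, ~0.5 (a single isolated cause predicts nothing), 0.0 (the combination is decisive)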

And of course, all theorem provers are unscalable due to having no
scalable methods of inference tree pruning...

Evolutionary methods can't handle complex fitness functions because
they'd require overly large population sizes...

In general, the standard AI methods can't handle pattern recognition
problems requiring finding complex interdependencies among multiple
variables that are obscured among scads of other variables

The human mind seems to do this via building up intuition via drawing
analogies among multiple problems it confronts during its history.
Also of course the human mind builds internal simulations of the
world, and probes these simulations and draws analogies from problems
it solved in its inner sim world, to problems it encounters in the
outer world...

etc. etc. etc.

ben




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
On Sat, Nov 29, 2008 at 11:53 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Jim,

 YES - and I think I have another piece of your puzzle to consider...

 A longtime friend of mine, Dave,  went on to become a PhD psychologist, who
 subsequently took me on as a sort of project - to figure out why most
 people who met me then either greatly valued my friendship, or quite the
 opposite, would probably kill me if they had the safe opportunity. After
 much discussion, interviewing people in both camps, etc., he came up with
 what appears to be a key to decision making in general...

 It appears that people pigeonhole other people, concepts, situations,
 etc., into a very finite number of pigeonholes - probably just tens of
 pigeonholes for other people.


Steve:
I found that I used a similar method of categorizing people who I
talked to on these newsgroups.  I wouldn't call it pigeonholing
though. (Actually, I wouldn't call anything pigeonholing, but that is
just me.)  I would rely on a handful of generalizations that I thought
were applicable to different people who tended to exhibit some common
characteristics.  However, when I discovered that an individual who I
thought I understood had another facet to his personality or thoughts
that I hadn't seen before I often found that I had to apply another
categorical generality to my impression of him.  I soon built up
generalization categories based on different experiences with
different kinds of people, and I eventually realized that although I
often saw similar kinds of behaviors in different people, each person
seemed to be comprised of different sets (or different strengths) of
the various component characteristics that I derived to recall my
experiences with people in these groups.  So I came to similar
conclusions that you and your friend came to.

An interesting thing about talking to reactive people in these
discussion groups: I found that by eliminating more and more affect
from my comments, by refraining from personal comments, innuendos, or
meta-discussion analyses, and by increasingly emphasizing
objectivity in my comments, I could substantially reduce any hostility
directed at me.  My problem is that I do not want to remove all affect
from my conversation just to placate some unpleasant person.  But I
guess I should start using that technique again when necessary.

Jim Bromer



[agi] Seeking CYC critiques

2008-11-29 Thread Robin Hanson


What are the best available critiques of CYC as it exists now (vs. soon
after project started)?

Robin Hanson [EMAIL PROTECTED]
http://hanson.gmu.edu 
Research Associate, Future of Humanity Institute at Oxford University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-
703-993-2326 FAX: 703-993-2323
 



  

  


  

  








Re: [agi] Seeking CYC critiques

2008-11-29 Thread Stephen Reed
Hi Robin,
There are no Cyc critiques from the last few years that I know of.  I was 
employed for seven years at Cycorp, until August 2006, and my non-compete 
agreement expired a year later.

An interesting competition was held by Project Halo in which Cycorp 
participated along with two other research groups to demonstrate human-level 
competency in answering chemistry questions.  Results are here.  Although Cycorp 
performed principled deductive inference giving detailed justifications, it was 
judged to have performed worse due to the complexity of its justifications 
and due to its long running times.  The other competitors used special-purpose 
problem-solving modules whereas Cycorp used its general-purpose inference 
engine, extended for chemistry equations as needed.

My own interest is in natural language dialog systems for rapid knowledge 
formation.  I was Cycorp's first project manager for its participation in the 
DARPA Rapid Knowledge Formation project, where it performed to DARPA's 
satisfaction, but subsequently its RKF tools never lived up to Cycorp's 
expectations that subject matter experts could rapidly extend the Cyc KB 
without Cycorp ontological engineers having to intervene.  A Cycorp paper 
describing its KRAKEN system is here.

 
I would be glad to answer questions about Cycorp and Cyc technology to the best 
of my knowledge, which is growing somewhat stale at this point.

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860






