Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum

This is based purely on reading the wikipedia entry on Operator
grammar, which I find very interesting. I'm hoping someone out there
knows enough about this to answer some questions :^) 

Wikipedia says that various quantities are learnable because they can
in principle be determined by data. What is known about whether they
are efficiently learnable, e.g. (a) whether a child would acquire enough
data to learn the language and (b) whether given the data, learning
the language would be computationally feasible? (e.g. polynomial
time.)

Keep in mind that you have to learn the language well enough to 
deal with the fact that you can generate and understand (and thus
pretty much have to be able to calculate the likelihood of) a
virtually infinite number of sentences never before seen.

I presume the answer to these two questions (how much data you need
and how easy it is to learn from it) will depend on how you
parametrize the various knowledge you learn. So, for example,
take a word that takes two arguments. One way to parametrize 
the likelihood of various arguments would be with a table over
all two-word combinations, where the (i, j) entry gives the likelihood
that the ith word and the jth word are the two arguments.
But most likely, in reality, the likelihood of the jth word
will be largely pinned down conditional on the ith. So one might
imagine parametrizing these learned coherent selection tables
in some powerful way that exposes underlying structure.
If you just use lookup tables, I'm guessing learning is
computationally trivial, but data requirements are prohibitive.
On the other hand, if you posit underlying structure, you can no
doubt lower the amount of data required to be able to deal with
novel sentences, but I would expect you'd run into the standard
problem that finding the optimal structure becomes NP-hard.
At this point, a heuristic might or might not suffice, it would
be an empirical question.
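To make the contrast concrete, here is a toy sketch in Python -- the
vocabulary size, the latent dimension, and the softmax form are all
invented for illustration, nothing here is taken from operator grammar
itself:

# Toy contrast between the two parametrizations discussed above.
import numpy as np

V = 1000      # assumed vocabulary size (a real lexicon is more like 50,000)
rank = 16     # assumed size of the latent "underlying structure"

# (a) Lookup-table parametrization: one free parameter per (i, j) pair.
#     Learning is trivial (just count co-occurrences), but with V*V
#     parameters the data requirement is prohibitive.
table = np.zeros((V, V))

def table_update(i, j):
    table[i, j] += 1.0

def table_likelihood(i, j):
    total = table.sum()
    return table[i, j] / total if total else 1.0 / (V * V)

# (b) Factored ("structured") parametrization: each word gets a small
#     latent vector, and the joint likelihood of the two arguments is a
#     softmax over dot products.
U = 0.01 * np.random.randn(V, rank)   # latent vectors for first argument
W = 0.01 * np.random.randn(V, rank)   # latent vectors for second argument

def factored_likelihood(i, j):
    scores = U @ W.T                  # V x V matrix of scores
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs[i, j]

print(table_likelihood(3, 7), factored_likelihood(3, 7))

The first version has V*V free parameters and trivial learning; the
second has only 2*V*rank, so far less data would be needed, but fitting
the latent vectors is a non-convex optimization -- which is exactly the
trade-off I mean above.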

Is there empirical work with this model?

Also, I don't see how you can call a model semantic when it makes
no reference to the world. The model as described by Wikipedia
could have the capability of telling me whether a sentence is
natural or highly unlikely, but unless I misunderstand something,
there is no possibility it could tell me whether a sentence
describes a scene.

Matt --- Chuck Esterbrook [EMAIL PROTECTED] wrote:

 Any opinions on Operator Grammar vs. Link Grammar?
 
 http://en.wikipedia.org/wiki/Operator_Grammar
 
 http://en.wikipedia.org/wiki/Link_grammar
 
 Link Grammar seems to have spawned practical software, but Operator
 Grammar has some compelling ideas including coherent selection,
 information content and more. Maybe these ideas are too hard or too
 ill-defined to implement?
 
 Or, in other words, why does Link Grammar win the GoogleFight?
 
Matt 
http://www.googlefight.com/index.php?lang=en_GB&word1=%22link+grammar%22&word2=%22operator+grammar%22
 (http://tinyurl.com/yvu9xr)

Matt Link grammar has a website and online demo at
Matt http://www.link.cs.cmu.edu/link/submit-sentence-4.html

Matt But as I posted earlier, it gives the same parse for:

Matt - I ate pizza with pepperoni.  - I ate pizza with a friend.  - I
Matt ate pizza with a fork.

Matt which shows that you can't separate syntax and semantics.  Many
Matt grammars have this problem.

Matt Operator grammar seems to me to be a lot closer to the way
Matt natural language actually works.  It includes semantics.  The
Matt basic constraints (dependency, likelihood, and reduction) are
Matt all learnable.  It might have gotten less attention because its
Matt main proponent, Zellig Harris, died in 1992, just before it
Matt became feasible to test the grammar in computational models
Matt (e.g.  perplexity or text compression).  Also, none of his
Matt publications are online, but you can find reviews of his books
Matt at http://www.dmi.columbia.edu/zellig/


Matt -- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Pei Wang

On 5/22/07, Derek Zahn [EMAIL PROTECTED] wrote:


Pei,

 As part of my ongoing AGI education, I am beginning to study NARS in some
detail.


Thanks for the interest. I'll do my best to help, though since I'm on
vacation in China, I may not be able to process my emails as usual.


As has been discussed recently here, you define intelligence as:

 Intelligence is the capability of an information system to adapt to its
environment while operating with insufficient knowledge and resources.

 In later discussion about an adaptive system, you introduce the phrase it
attempts to improve its performance in carrying out the tasks.  This would
seem to be an important further specification.  Would it be accurate for my
own understanding to rephrase your definition to be:

 Intelligence is the capability of a task-performing information system to
adapt to its environment while operating with insufficient knowledge and
resources

 where

 task-performing means that the system's purpose is the performance of one
or more simultaneously active tasks where a task is defined in terms of a
goal state and a (perhaps approximate) method for measuring whether the goal
state has been achieved?  If goal state is not a good way to describe
tasks in the sense you intend, could you explain a little bit about your
definitions of carrying out the tasks and improve its performance?

 Sorry if this seems like a trivial issue, I'm just trying to understand as
clearly as possible how you define the goals for the NARS project.


It is not a trivial point at all, though I haven't had the pressure
(until now) to explain this aspect of my definition publicly.

I mostly agree with your description, though I'd rather not modify the
definition in that way, because to me "task performing" and "goal
achieving" are mostly implied by the notion of "information
system", so your description sounds redundant to me.

I touched on this issue in my first book, though I plan to reserve it for
my other book, which will be less technical and more philosophical. A
few people on this list who were associated with Webmind Inc. should
have browsed my extended abstract years ago. The relevant part of that
book puts "intelligent system" into a larger picture, within a
hierarchy roughly like the following:

1. system: things/events that should be analyzed as interrelating
parts, with internal structure and external function

 1.1 information system: systems whose structure and function can be
analyzed abstractly as goal-achieving (or task-performing), without
depending too much on the lower level description (using the terms of
physics, chemistry, biology, ...)

   1.1.1 intelligent system: information systems that are adaptive
and work with insufficient knowledge and resources

   1.1.2 instinctive system: information systems that work with
sufficient knowledge and resources

 1.2 non-information system: systems whose structure and function
cannot be analyzed abstractly, and have to be explained in terms of
physics, chemistry, biology, ...

I know the above description is brief and controversial --- the working
definition of information is no less complicated than that of
intelligence. Since you asked, I give the above position statement,
though I won't argue for it, since it is not that crucial for AGI at
the current time.

Another topic is goal ---  as you noticed, I don't follow the common
practice of specifying goal as goal state, because to me this is a
big mistake of traditional AI. In the usual sense, a state is
indicated by a COMPLETE description of the relevant part of the
domain/environment, which cannot be obtained if insufficient knowledge
and resources are assumed.

Roughly speaking, in NARS a goal is a description, which is a PARTIAL
description of the environment. Furthermore, a goal is usually
achieved/satisfied to a degree, which is not a matter of yes/no.
Since each goal in NARS is satisfied by a statement, the degree of
satisfaction is related to (though not completely reduced to) the
truth value of the statement. A more detailed and formal description
is in my book, and I'm also working on a paper focusing on this aspect
of the system. I'll post a draft when it is finished.
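Just to make the contrast with a yes/no "goal state" concrete, here is an
illustrative toy in Python -- this is not NARS code, and the mapping from
truth value to satisfaction below is an assumed simplification of the
"related to, though not completely reduced to" relation:

# Illustrative toy only -- not NARS code.
from dataclasses import dataclass

@dataclass
class TruthValue:
    frequency: float    # how often the statement held, in [0, 1]
    confidence: float   # how much evidence backs that frequency, in [0, 1]

@dataclass
class Goal:
    statement: str      # a PARTIAL description of the environment

def degree_of_satisfaction(goal: Goal, belief: TruthValue) -> float:
    """Graded satisfaction: no complete world state is required,
    only the system's current (partial, uncertain) belief in the
    statement that expresses the goal."""
    # Assumed mapping: an expectation-like combination of frequency and
    # confidence, so an unsupported statement sits near 0.5 (unknown).
    return belief.confidence * (belief.frequency - 0.5) + 0.5

g = Goal("the room is tidy")
print(degree_of_satisfaction(g, TruthValue(frequency=0.8, confidence=0.9)))  # ~0.77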

Pei



Re: [agi] Pure reason is a disease.

2007-05-23 Thread Richard Loosemore

Mark Waser wrote:

AGIs (at least those that could run on current computers)
cannot really get excited about anything. It's like when you represent
the pain intensity with a number. No matter how high the number goes,
it doesn't really hurt. Real feelings - that's the key difference
between us and them and the reason why they cannot figure out on their
own that they would rather do something else than what they were asked
to do.


So what's the difference in your hardware that makes you have real pain 
and real feelings?  Are you *absolutely positive* that real pain and 
real feelings aren't an emergent phenomenon of sufficiently complicated 
and complex feedback loops?  Are you *really sure* that a sufficiently 
sophisticated AGI won't experience pain?


I think that I can guarantee (as in, I'd be willing to bet a pretty 
large sum of money) that a sufficiently sophisticated AGI will act as if 
it experiences pain . . . . and if it acts that way, maybe we should 
just assume that it is true.


Jiri,

I agree with Mark's comments here, but would add that I think we can do 
more than just take a hands-off Turing attitude to such things as pain: 
 I believe that we can understand why a system built in the right kind 
of way *must* experience feelings of exactly the sort we experience.


I won't give the whole argument here (I presented it at the 
Consciousness conference in Tucson last year, but have not yet had time 
to write it up as a full paper).


I think it is a serious mistake for anyone to say that machines cannot 
in principle experience real feelings.  Sure, if 
they are too simple they will not, but all of our discussions, on this 
list, are not about those kinds of too-simple systems.


Having said that:  there are some conventional approaches to AI that are 
so crippled that I don't think they will ever become AGI, let alone have 
feelings.  If you were criticizing those specifically, rather than just 
AGI in general, I'm on your side!  :-;



Richard Loosemore



Re: [agi] Pure reason is a disease.

2007-05-23 Thread Lukasz Kaiser

Hi,

On 5/23/07, Mark Waser [EMAIL PROTECTED] wrote:

- Original Message -
From: Jiri Jelinek [EMAIL PROTECTED]
 On 5/20/07, Mark Waser [EMAIL PROTECTED] wrote:
 - Original Message -
 From: Jiri Jelinek [EMAIL PROTECTED]
  On 5/16/07, Mark Waser [EMAIL PROTECTED] wrote:
  - Original Message -
  From: Jiri Jelinek [EMAIL PROTECTED]


Mark and Jiri, I beg you, could you PLEASE stop top-posting?
I guess it is just a second for you to cut it, or even better, to
change the settings of your mail program to cut it, and it takes
a second for every message you send for everyone who reads
it to scroll through it, not to mention looking inside for content
just in case it was not entirely top-posted. Please, cut it!

- lk



Re: [agi] Parsing theories

2007-05-23 Thread Jean-Paul Van Belle
Check bigrams (or, more interestingly, trigrams) in computational
linguistics.
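A minimal sketch, with a made-up two-sentence corpus, of the kind of
conditional estimate a trigram model gives (illustrative only):

# How much is the next word "pinned down" once the previous two are fixed?
from collections import defaultdict

corpus = "the cat ate pizza with a fork the cat ate pizza with pepperoni".split()

trigram_counts = defaultdict(lambda: defaultdict(int))
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigram_counts[(w1, w2)][w3] += 1

def p_next(w1, w2, w3):
    context = trigram_counts[(w1, w2)]
    total = sum(context.values())
    return context[w3] / total if total else 0.0

print(p_next("ate", "pizza", "with"))   # 1.0 in this tiny corpus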
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 Eric Baum [EMAIL PROTECTED] 2007/05/23 15:36:20 

One way to parametrize 
the likelihood of various arguments would be with a table over
all two word combinations, the i,j entry gives the likelihood
that the ith word and the jth word are the two arguments.
But most likely, in reality, the likelihood of the jth word
will be much pinned down conditional on the ith. 

Is there empirical work with this model?


Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-23 Thread Jean-Paul Van Belle
Universal compassion and tolerance are the ultimate consequences of
enlightenment
which one Matt on the list equated IMHO erroneously to high-orbit
intelligence
methinx subtle humour is a much better proxy for intelligence
 
Jean-Paul 
member of the 'let Murray stay' advocacy group aka 'the write 2
doctorates, trigger 2 singularities movement'
just back from 2 weeks enlightenment-seeking in Indian ashram ;-)
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

 Benjamin Goertzel [EMAIL PROTECTED] 2007/05/20 20:38:35 
Personally, I find many of his posts highly entertaining...

If your sense of humor differs, you can always use the DEL key ;-)

-- Ben G

On 5/20/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

 Why is Murray allowed to remain on this mailing list, anyway?  As a
 warning to others?  The others don't appear to be taking the hint.


Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum

A google search on operator grammar + trigram
yields nada.

A google search on operator grammar + bigram yields nothing
interesting.

I've seen papers on statistical language parsing before,
including trigrams etc. Not so clear to me the extent to which
they've been merged with Harris's work.


Jean-Paul Check bigrams (or, more interestingly, trigrams) in
Jean-Paul computational linguistics.
 
 
Jean-Paul Department of Information Systems Email:
Jean-Paul [EMAIL PROTECTED] Phone: (+27)-(0)21-6504256
Jean-Paul Fax: (+27)-(0)21-6502280 Office: Leslie Commerce 4.21


 Eric Baum [EMAIL PROTECTED] 2007/05/23 15:36:20 

Jean-Paul One way to parametrize the likelihood of various arguments
Jean-Paul would be with a table over all two word combinations, the
Jean-Paul i,j entry gives the likelihood that the ith word and the
Jean-Paul jth word are the two arguments.  But most likely, in
Jean-Paul reality, the likelihood of the jth word will be much pinned
Jean-Paul down conditional on the ith.

Jean-Paul Is there empirical work with this model?



Re: [agi] Parsing theories

2007-05-23 Thread Mark Waser
I'll take a shot at answering some of your questions as someone who has done 
some work and research but is certainly not claiming to be an expert . . . .



Wikipedia says that various quantities are learnable because they can
in principle be determined by data. What is known about whether they
are efficiently learnable, e.g. (a) whether a child would acquire enough
data to learn the language and (b) whether given the data, learning
the language would be computationally feasible? (e.g. polynomial
time.)


Operator grammar in many respects reminds me of conceptual classification 
systems in that there has been success in processing huge amounts (corpuses, 
corpi? :-) of data and producing results -- but it's *clearly* not the way 
in which humans (i.e. human children) do it.


My belief is that if you had the proper structure-building learning 
algorithms, your operator grammar system would simply (re-)discover the 
basic parts of speech and would then successfully proceed from there.  I 
suspect that doing so is probably even computationally feasible 
(particularly if you accidentally bias it -- which would be *really* tough 
to avoid).
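As a purely illustrative sketch of what I mean by structure-building from
distribution alone -- the toy corpus and the cluster count are invented,
and real part-of-speech induction would need vastly more data:

# Toy distributional clustering: words that occur in similar contexts end
# up in the same cluster, which is roughly how part-of-speech-like classes
# could be (re-)discovered from raw text.
import numpy as np
from sklearn.cluster import KMeans

corpus = ("the cat ate the pizza . the dog ate the bone . "
          "a cat chased a dog . a dog chased a cat .").split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Context vector: counts of the words appearing immediately left and right.
vectors = np.zeros((len(vocab), 2 * len(vocab)))
for left, word, right in zip(corpus, corpus[1:], corpus[2:]):
    vectors[index[word], index[left]] += 1
    vectors[index[word], len(vocab) + index[right]] += 1

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for k in range(3):
    print(k, [w for w in vocab if labels[index[w]] == k])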


All human languages fundamentally have the same basic parts of speech.  I 
believe that operator grammar is reinventing the wheel in terms of its 
unnecessary generalization of dependency.



Is there empirical work with this model?


It depends upon what you mean.  My current project is positing an 
underlying structure of the basic parts of speech.  Does it count -- or 
would I need to (IMO foolishly ;-) discard that for it to count?



Also, I don't see how you can call a model semantic when it makes
no reference to the world.


Ah, but this is where it gets tricky.  While the model makes no reference to 
the world, it is certainly influenced by the fact that 100% of its data 
comes from the world -- which then forces the model to build itself based 
upon the world (i.e. effectively, it is building a world model) -- and I 
would certainly call that semantics.



natural or highly unlikely, but unless I misunderstand something,
there is no possibility it could tell me whether a sentence
describes a scene.


Do you mean that it couldn't perform sensory fusion or that it can't 
recognize meaning?  I would agree with the former but (as an opinion --  
because I can't definitively prove it) disagree with the latter.


   Mark


- Original Message - 
From: Eric Baum [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, May 23, 2007 9:36 AM
Subject: Re: [agi] Parsing theories






Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mark Waser

A meta-question here with some prefatory information . . . .

The reason I top-post (and when I do so, I *never* put content inside) 
is that I frequently find it *really* convenient to have the entire text 
of the previous message or two (no more) immediately available for 
reference.


On the other hand, I, too, find top-posting annoying whenever I'm reading a 
list as a digest but feel that it is offset by its usefulness.


That being said, I am more than willing to stop top-posting if even a 
sizeable minority find it frustrating (I've seen this meta-discussion on 
several other lists and seen it go about 50/50 with a very slight edge for 
allowing top-posting with a skew towards low-volume lists liking it and 
high-volume lists not).


   Mark 





RE: [agi] NARS: definition of intelligence

2007-05-23 Thread Derek Zahn
Pei Wang writes:
 
 Thanks for the interest. I'll do my best to help, though since I'm on 
 vacation in China, I may not be able to process my emails as usual.
 
Thank you for your response.  I'm planning over the course of the rest of the 
year to look in-depth at all of the AGI projects that include a significant 
implementation component (that is, those that are not just books musing about 
the nature of intelligence -- I am also reading those in parallel but there are 
so many that I don't know if anybody could have a solid understanding of all of 
them).
 
NARS is very well described so it's a good one to start with.  I am working 
from your book Rigid Flexibility which I assume is the best source.  I'm 
sorry that I wasn't able to justify the high cost of buying it new; I got it 
used from a vendor affiliated with amazon.com.
 
Unless I hit some fundamental roadblock I can easily wait to ask any questions 
(I don't want to pick nits or ask dumb things anyway) until you're back from 
your vacation.
 
One thing I'm curious about:  peeking ahead, the book sketches a rather long 
string of increasingly-ambitious implementation stages (if I remember 
correctly, up to NAL-8).  What stage is the current implementation?
 
Thanks again!
 

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-23 Thread A. T. Murray
The scholar and gentleman Jean-Paul Van Belle wrote:
 Universal compassion and tolerance are the ultimate 
 consequences of enlightenment which one Matt on the 
 list equated IMHO erroneously to high-orbit intelligence
 methinx subtle humour is a much better proxy for intelligence
 
 Jean-Paul 
 member of the 'let Murray stay' advocacy group
 aka 'the write 2 doctorates, trigger 2 singularities movement'
 just back from 2 weeks enlightenment-seeking in Indian ashram ;-)

Satyan eva jayate -- Sanskrit for "Truth alone prevails" -- 
quoted by Mahatma Mohandas Karamchand Gandhi, who also said,
"First they laugh at you, then they fear you, 
then they fight you, then you win."

By way of explanation...
The original message of this thread is also at 
http://mentifex.virtualentity.com/edcohelp.html 
as a kind of staging area for AGI Help Wanted
appeals from the SourceForge AI Mind project.

Now v.t.y. Mentifex here is preparing to ask
for Russian and German translations of AI docs.
Members of this liberal, all-ideas-welcome list
may enjoy some of the Everything2 links below.

http://www.everything2.com/index.pl?node_id=1013306
AI should be our top priority

http://www.everything2.com/index.pl?node_id=1043865
AI virus

http://www.everything2.com/index.pl?node_id=563003
aspects of American society that may be new to you

http://www.everything2.com/index.pl?node_id=1228930
the birth of artificial intelligence

http://www.everything2.com/index.pl?node_id=11298
But who codes the coders?

http://www.everything2.com/index.pl?node_id=452676
butterfly effect

http://www.everything2.com/index.pl?node_id=51480
coding standards

http://www.everything2.com/index.pl?node_id=134452
Cogito ergo sum

http://www.everything2.com/index.pl?node_id=12718
Dark Side of the Moon

http://www.everything2.com/index.pl?node_id=32693
Dr. Strangelove, or How I Learned to Stop Worrying and Love the Bomb

http://www.everything2.com/index.pl?node_id=774320
+* Excuse me, may I blow your mind?

[the failure of Mentifex is not]
http://www.everything2.com/index.pl?node_id=1521490
the failure of artificial intelligence

http://www.everything2.com/index.pl?node_id=938762
From now on, any ordinary knowledge is no longer 
going to satisfy you, I'm afraid

http://www.everything2.com/index.pl?node_id=76962
futurism

http://www.everything2.com/index.pl?node_id=55865
futurist

http://www.everything2.com/index.pl?node_id=40987
FWIW 

http://www.everything2.com/index.pl?node_id=624119
* Geeks of the world unite

http://www.everything2.com/index.pl?node_id=472395
hack reality

http://www.everything2.com/index.pl?node_id=445357
+ A Heartbreaking Work of Staggering Genius

http://www.everything2.com/index.pl?node_id=965284
A Heartbreaking Work of Staggering Hubris

http://www.everything2.com/index.pl?node_id=426116
* I am not a hacker

http://www.everything2.com/index.pl?node_id=675507
I Am Not a Lawyer

http://www.everything2.com/index.pl?node_id=113825
I am not making this up

http://www.everything2.com/index.pl?node_id=494930
I can't decide whether to change the world or just become a bitter recluse

http://www.everything2.com/index.pl?node_id=559881
** I just bought real estate in your mind

http://www.everything2.com/index.pl?node_id=670519
** I refute him thus!

http://www.everything2.com/index.pl?node_id=584208
* I speak for the Borg

http://www.everything2.com/index.pl?node_id=870562
I'm at a programming roadblock

http://www.everything2.com/index.pl?node_id=1336607
I'm sorry Dave, I'm afraid I can't do that

http://www.everything2.com/index.pl?node_id=1188429
in defense of robot domination

http://www.everything2.com/index.pl?node_id=19005
Information wants to be free

http://www.everything2.com/index.pl?node_id=606019
Information War is coming: whose side are you on?

http://www.everything2.com/index.pl?node_id=745413
Is development in AI bad?

http://www.everything2.com/index.pl?node_id=73157
Let's Play Global Thermonuclear War

One of the 
http://www.everything2.com/index.pl?node_id=1522443
limitations on artificial intelligence
is that True AI needs to be translated into more languages.

http://www.everything2.com/index.pl?node_id=61306
The Matrix

http://www.everything2.com/index.pl?node_id=525319
The Matrix is going down for a reboot in 5 minutes:
all users, please save your data and log out

http://www.everything2.com/index.pl?node_id=111373
meatspace

http://www.everything2.com/index.pl?node_id=12366
meme

http://www.everything2.com/index.pl?node_id=1401073
meme hijack

http://www.everything2.com/index.pl?node_id=48303
mission statement

http://www.everything2.com/index.pl?node_id=121864
MIT Artificial Intelligence Lab

http://www.everything2.com/index.pl?node_id=36338
noosphere

http://www.everything2.com/index.pl?node_id=177121
Omega

http://www.everything2.com/index.pl?node_id=523623
Omega Point

http://www.everything2.com/index.pl?node_id=877088
only in America

http://www.everything2.com/index.pl?node_id=45103
otaku


Re: [agi] Parsing theories

2007-05-23 Thread Mark Waser
 As I think about it, one problem is, depending on how it's
 parametrized, it's not going to build much of a world model.
 Say for example it uses trigrams. The average hs grad knows
 something like 50,000 words. So there are something like 10^14
 trigrams. It will never see enough data to build a model capturing
 much semantics, unless it builds an incredibly compact model,
 in which case-- what is the underlying structure and how
 (computationally) are you going to learn it?

Absolutely correct.  That's why I said "My belief is that if you had the proper 
structure-building learning algorithms, your operator grammar system would 
simply (re-)discover the basic parts of speech and would then successfully 
proceed from there." and why I slammed it for reinventing the wheel in terms of 
its unnecessary generalization of dependency.

 In unsupervised learning, you can learn a lot,
 say you can cluster the world into two clusters. But until you get 
 supervision, you can't learn the final few bits to distinguish good
 from bad, or whatever.

I'm afraid that I disagree completely with the latter sentence.

 Operator grammar might be very useful for
 getting a structure that could then be rapidly trained to produce
 meaning, but I don't think you can finish the job until you interact
 with sensation.

It seems as if you're now talking sensory fusion (which is a whole 'nother can 
o' worms).

Mark


Re: [agi] Parsing theories

2007-05-23 Thread Lukasz Stafiniak

On 5/23/07, Mark Waser [EMAIL PROTECTED] wrote:

systems in that there has been success in processing huge amounts (corpuses,
corpi? :-) of data and producing results -- but it's *clearly* not the way


corpora



Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

Richard Mark Waser wrote:
 AGIs (at least those that could run on current computers)
 cannot really get excited about anything. It's like when you
Richard represent
 the pain intensity with a number. No matter how high the number
Richard goes,
 it doesn't really hurt. Real feelings - that's the key difference
 between us and them and the reason why they cannot figure out on
Richard their
 own that they would rather do something else than what they were
Richard asked
 to do.
 So what's the difference in your hardware that makes you have real
 pain and real feelings?  Are you *absolutely positive* that real
 pain and real feelings aren't an emergent phenomenon of
 sufficiently complicated and complex feedback loops?  Are you
 *really sure* that a sufficiently sophisticated AGI won't
 experience pain?
 
 I think that I can guarantee (as in, I'd be willing to bet a pretty
 large sum of money) that a sufficiently sophisticated AGI will act
 as if it experiences pain . . . . and if it acts that way, maybe we
 should just assume that it is true.

Richard Jiri,

Richard I agree with Mark's comments here, but would add that I think
Richard we can do more than just take a hands-off Turing attitude to
Richard such things as pain: I believe that we can understand why a
Richard system built in the right kind of way *must* experience
Richard feelings of exactly the sort we experience.

Richard I won't give the whole argument here (I presented it at the
Richard Consciousness conference in Tucson last year, but have not
Richard yet had time to write it up as a full paper).

What is Thought? argues the same thing (Chapter 14). I'd be curious
to see if your argument is different.




Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

 AGIs (at least those that could run on current computers) cannot
 really get excited about anything. It's like when you represent the
 pain intensity with a number. No matter how high the number goes,
 it doesn't really hurt. Real feelings - that's the key difference
 between us and them and the reason why they cannot figure out on
 their own that they would rather do something else than what they
 were asked to do.

Mark So what's the difference in your hardware that makes you have
Mark real pain and real feelings?  Are you *absolutely positive* that
Mark real pain and real feelings aren't an emergent phenomenon of
Mark sufficiently complicated and complex feedback loops?  Are you
Mark *really sure* that a sufficiently sophisticated AGI won't
Mark experience pain?

Mark I think that I can guarantee (as in, I'd be willing to bet a
Mark pretty large sum of money) that a sufficiently sophisticated AGI
Mark will act as if it experiences pain . . . . and if it acts that
Mark way, maybe we should just assume that it is true.

If you accept the proposition (for which Turing gave compelling
arguments) that a computer with the right program could simulate the
workings of your brain in detail, then it follows that your feelings
are identifiable with some aspect or portion of the computation.

I claim that if feelings are identified with the decision-making
computations of a top-level module (which might reasonably
be called a homunculus), everything is
concisely explained. What you are then *unaware* of is all the many
and varied computations done in subroutines that the decision-making
module is isolated from by an abstraction boundary (this
is by far most of the computation), as well as most internal computations
of the decision-making module itself (which it will no more be
programmed to be able to report than my laptop can report its
internal transistor voltages). What you feel and can report, and
the qualitative nature of your
sensations, is then determined by the code being run as it makes
decisions. I claim that the subjective nature of every feeling is
very naturally explained in this context.
Pain, for example, is the weighing
of programmed-in negative reinforcement. (How could you possibly
modify the sensation of pain to make it any clearer that it is 
negative reinforcement?) What is Thought? ch 14
goes through about 10 sensations that a philosopher had claimed
were not plausibly explainable by a computational model, and 
argues that each has exactly the nature you'd expect evolution 
to program in.
You then can't have a zombie that behaves the way you do but
doesn't have sensations, since to behave like you do it has to
make decisions, and it is in fact the decision making computation
that is identified with sensation. (Computations that are better
preprogrammed because they don't require decision, such as pulling
away from a hot stove or driving the usual route home for the
thousandth time, are dispatched to subroutines and are unconscious.) 
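A deliberately crude stand-in for this division of labor, with invented
names and numbers -- it is not anything from What is Thought?, just a way
to show what "only the weighed negative reinforcement is visible at the
top" could look like:

def withdraw_reflex(temperature_c):
    """A 'subroutine' behind the abstraction boundary: it acts, but
    nothing about how it computes is visible to the top level."""
    return temperature_c > 60.0        # assumed threshold

class Homunculus:
    """Top-level decision maker: all it ever 'sees' is the weighed
    negative reinforcement, which on this picture *is* the pain."""
    def decide(self, candidate_actions):
        # candidate_actions: {action_name: negative_reinforcement}
        reportable_pain = dict(candidate_actions)        # what can be reported
        choice = min(candidate_actions, key=candidate_actions.get)
        return choice, reportable_pain

h = Homunculus()
print(h.decide({"keep holding the pan": 9.0, "put the pan down": 0.5}))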

This picture is subject to empirical test, through psychophysics
(and also as we increasingly understand the genetic programming that
builds much of this code.)
A good example is Ramachandran's amputee experiment. Amputees
frequently feel pain in their phantom (missing) limb. They can
feel themselves clenching their phantom hand so hard that their
phantom fingernails gouge their phantom hands, causing intense real
pain. Ramachandran hypothesized that this was caused by the mind sending
a signal to the phantom hand saying "relax", but, getting no feedback,
assuming that the hand had not relaxed, and inferring that pain should
be felt (including computing details of its nature).
He predicted that if he provided feedback telling the mind
that relaxation had occurred, the pain would go away, which he then
provided through a mirror device in which patients could place both
real and phantom limbs, relax both simultaneously, and get visual
feedback that the phantom limb had relaxed (in the mirror). Instantly
the pain vanished, confirming the prediction that the pain was
purely computational.



Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum

Mike Eric Baum: What is Thought [claims that] feelings ... are
Mike explainable by a computational model.

Mike Feelings/ emotions are generated by the brain's computations,
Mike certainly. But they are physical/ body events. Does your Turing
Mike machine have a body other than that of some kind of computer
Mike box? And does it want to dance when it hears emotionally
Mike stimulating music?

Mike And does your Turing Machine also find it hard to feel - get in
Mike touch with - feelings/ emotions? Will it like humans massively
Mike overconsume every substance in order to get rid of unpleasant
Mike emotions?

If it's running the right code.

If you find that hard to understand, it's because your understanding
mechanism has certain properties, and one of them is that it is
having trouble with this concept. I claim it's not surprising either
that evolution programmed in an understanding mechanism like that,
but I suggest it is possible to overcome in the same way that
physicists were capable of coming to understand quantum mechanics.



Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Pei Wang

On 5/23/07, Derek Zahn [EMAIL PROTECTED] wrote:


I'm planning over the course of the rest of
the year to look in-depth at all of the AGI projects that include a
significant implementation component (that is, those that are not just books
musing about the nature of intelligence -- I am also reading those in
parallel but there are so many that I don't know if anybody could have a
solid understanding of all of them).


I'm working on an introduction to AGI projects, which may help people
like you. I'll post it when I'm back in late June.


 NARS is very well described so it's a good one to start with.  I am working
from your book Rigid Flexibility which I assume is the best source.  I'm
sorry that I wasn't able to justify the high cost of buying it new; I got it
used from a vendor affiliated with amazon.com.


Yes, the book is the best source for most of the topics. Sorry for the
absurd price, which I have no way to influence.


 One thing I'm curious about:  peeking ahead, the book sketches a rather
long string of increasingly-ambitious implementation stages (if I remember
correctly, up to NAL-8).  What stage is the current implementation?


The book corresponds to NARS 4.3.0, which implements the basics of
NAL-8 (the final layer of the NAL family). The current on-line demo is
4.3.1, and I'm coding 4.3.2. I consider NAL-1 to NAL-6 to be
relatively mature, while I'm still adding details to NAL-7 and
NAL-8. For the overall engineering plan, see
http://nars.wang.googlepages.com/wang.roadmap.pdf , which is clearer
and more up-to-date than the book on this topic.

Pei



[agi] Computer explains your error by showing how it should have been done

2007-05-23 Thread Lukasz Stafiniak

For those of you interested in type-driven program synthesis:

http://www.cs.washington.edu/homes/blerner/seminal.html
(quick link: 
http://www.cs.washington.edu/homes/blerner/files/seminal-visitdays.ppt)



Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner
P.S. Eric, I haven't forgotten your question to me, and will try to address it 
in time - the answer is complex. 





Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner

Eric,

The point is simply that you can only fully simulate emotions with a body as 
well as a brain. And emotions, while identified by the conscious brain, are 
felt with the body.


I don't find it at all hard to understand - I fully agree -  that emotions 
are generated as a result of computations in the brain. I agree with cog. 
sci. that they are highly functional in helping us achieve goals.


My underlying argument, though, is that  your (or any) computational model 
of emotions,  if it does not also include a body, will be fundamentally 
flawed both physically AND computationally.




Mike Eric Baum: What is Thought [claims that] feelings ... are
Mike explainable by a computational model.

Mike Feelings/ emotions are generated by the brain's computations,
Mike certainly. But they are physical/ body events. Does your Turing
Mike machine have a body other than that of some kind of computer
Mike box? And does it want to dance when it hears emotionally
Mike stimulating music?

Mike And does your Turing Machine also find it hard to feel - get in
Mike touch with - feelings/ emotions? Will it like humans massively
Mike overconsume every substance in order to get rid of unpleasant
Mike emotions?

If it's running the right code.

If you find that hard to understand, it's because your understanding
mechanism has certain properties, and one of them is that it is
having trouble with this concept. I claim it's not surprising either
that evolution programmed in an understanding mechanism like that,
but I suggest it is possible to overcome in the same way that
physicists were capable of coming to understand quantum mechanics.



Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Shane Legg

Pei,

Yes, the book is the best source for most of the topics. Sorry for the

absurd price, which I have no way to influence.



It's $190.  Somebody is making a lot of money on each copy and
I'm sure it's not you.  To get a 400 page hard cover published at
lulu.com is more like $25.

Shane


Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Pei Wang

Shane,

Well, I actually considered Lulu and similar publishers, though as the
last option. It is much easier to publish with them, but given the
nature of NARS, such a publisher would make the book even more likely
to be classified as the work of a crackpot. :(

I continued to look for a publisher with a tough peer-review procedure,
even after the manuscript had been rejected by more than a dozen of
them. Though the price excludes most individual buyers, it may be
more likely for a research library to buy a $190 book from Springer
than a $25 book from Lulu, given the topic.

Pei

On 5/24/07, Shane Legg [EMAIL PROTECTED] wrote:

Pei,


 Yes, the book is the best source for most of the topics. Sorry for the
 absurd price, which I have no way to influence.

It's $190.  Somebody is making a lot of money on each copy and
I'm sure it's not you.  To get a 400 page hard cover published at
lulu.com is more like $25.

Shane

 




Re: [agi] Pure reason is a disease.

2007-05-23 Thread J Storrs Hall, PhD
On Wednesday 23 May 2007 06:34:29 pm Mike Tintner wrote:
 My underlying argument, though, is that  your (or any) computational model 
 of emotions,  if it does not also include a body, will be fundamentally 
 flawed both physically AND computationally.

Does everyone here know what an ICE is in the EE sense? (In-Circuit 
Emulator -- it's a gadget that plugs into a circuit and simulates a given 
chip, but has all sorts of debugging readouts on the back end that allow the 
engineer to figure out why it's screwing up.)

Now pretend that there is a body and a brain and we have removed the brain and 
plugged in a BrainICE instead. There's this fat cable running from the body 
to the ICE (just as there is in electronic debugging) that carries all the 
signals that the brain would be getting from the body.

Most of the cable's bandwidth is external sensation (and indeed most of that 
is vision). Motor control is most of the outgoing bandwidth. There is some 
extra portion of the bandwidth that can be counted as internal affective 
signals. (These are very real -- the body takes part in quite a few feedback 
loops with such mechanisms as hormone release and its attendant physiological 
effects.) Let us call these internal feedback loop closure mechanisms the 
affect effect.

Now here is 

*
Hall's Conjecture:
The computational resources necessary to simulate the affect effect are less 
than 1% of that necessary to implement the computational mechanism of the 
brain.
*

I think that people have this notion that because emotions are so unignorable 
and compelling subjectively, they must be complex. In fact the body's 
contribution, in an information-theoretic sense, is tiny -- I'm sure I way 
overestimate it with the 1%.
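Back-of-envelope, with every figure an assumption of mine (the optic-nerve
number is only the commonly quoted order of magnitude), just to make
"tiny" concrete in bandwidth terms:

# Rough, assumed figures only -- not measurements.
visual_bandwidth_bps   = 10_000_000   # ~10^7 bits/s for the optic nerves (assumed)
other_senses_bps       = 1_000_000    # touch, hearing, etc. (assumed)
motor_outgoing_bps     = 1_000_000    # outgoing motor control (assumed)
affective_feedback_bps = 100          # hormonal/visceral loop closure (assumed)

total = (visual_bandwidth_bps + other_senses_bps +
         motor_outgoing_bps + affective_feedback_bps)
print(affective_feedback_bps / total)  # ~1e-5 of the cable's traffic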

Josh



Re: [agi] NARS: definition of intelligence

2007-05-23 Thread J. Andrew Rogers


On May 23, 2007, at 4:17 PM, Pei Wang wrote:

I continued to look for a publisher with tough peer-review procedure,
even after the manuscript had been rejected by more than a dozen of
them. Though the price excludes most of individual buyers, it may be
more likely for a research library to buy a $190 book from Springer
than a $25 book from Lulu, given the topic.



Most books of this type are priced so that they will turn a profit on  
library sales alone.  There are hundreds of libraries that will buy a  
copy of every single book published by a major publisher in a given  
publishing program regardless of either price or specific content.   
Because this is the business model, sales to individuals are not even  
a relevant consideration -- individual sales are pure gravy to the  
publisher.


Given that, the economics of the pricing becomes obvious:

price = (production cost * 1.2) / library buyers

or something like that.  What the market for individual sales will  
bear does not even enter the picture.  Note that the library buyers  
will buy or not buy a program based on the quality/strength of an  
individual program at a publisher; they do not simply buy all the  
books from every STM publisher for a given topic area.  Program  
quality rather than price is the deciding factor, as is in evidence  
here.
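To make the arithmetic concrete, here is the formula above with purely
hypothetical numbers -- neither the production cost nor the buyer count
is anything I know about Springer or this title:

# Hypothetical numbers only, to show how the formula behaves.
production_cost = 45_000.0   # assumed total cost of producing the title
library_buyers  = 300        # assumed guaranteed library sales
price = (production_cost * 1.2) / library_buyers
print(price)                 # 180.0 -- in the neighbourhood of a $190 list price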


Cheers,

J. Andrew Rogers
