RE: [agi] Unification by index?

2008-11-02 Thread Benjamin Johnston

 In classical logic programming, there is the concept of unification,
...
 It seems to me that by appropriate use of indexes, it should be
 possible to unify against the entire database simultaneously, or 
 at least to isolate a small fraction of it as potential matches 
 so that the individual unification algorithm need not be run against
 every expression in the database.

Hi Russell,

I have not looked into the structure myself, but I recall hearing once that
a data structure called a unification tree is helpful here. A quick
web-search wasn't immediately helpful though: the best resources appear to
be behind subscription services, so it looks like you'll have to visit your
local university's library to read the full papers.

Some other thoughts:

The Prolog clause database effectively has this same problem. It solves it
simply by indexing on the functor of the outermost term and the first
argument of that term. This may be enough for your problem. As Donald Knuth
puts it, "premature optimization is the root of all evil."
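
For what it's worth, here's a minimal sketch of that first-argument
indexing scheme (in Python rather than Prolog; the tuple term
representation and all the names are assumptions made for illustration,
not how any particular engine stores terms):

# A minimal sketch of Prolog-style first-argument indexing.
# Terms are tuples ('functor', arg1, ...); variables are capitalized
# strings such as 'X'.

from collections import defaultdict

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def clause_key(term):
    """(functor, arity, principal functor of first arg; None if variable)."""
    functor, *args = term
    if not args or is_var(args[0]):
        return (functor, len(args), None)
    head = args[0]
    return (functor, len(args), head[0] if isinstance(head, tuple) else head)

class ClauseIndex:
    def __init__(self):
        self.buckets = defaultdict(list)

    def add(self, clause):
        self.buckets[clause_key(clause)].append(clause)

    def candidates(self, query):
        """A superset of the clauses that can unify with the query;
        full unification must still be run on each candidate."""
        functor, *args = query
        f_a = (functor, len(args))
        key = clause_key(query)
        if key[2] is None:  # variable first arg in the query matches anything
            return [c for (f, a, _), cs in self.buckets.items()
                    if (f, a) == f_a for c in cs]
        return (self.buckets.get(key, []) +
                self.buckets.get(f_a + (None,), []))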

Even if you can't get your hands on a description of a unification tree, I
wouldn't imagine it would be too difficult to build an appropriate
tree-structured index (especially given that the unification algorithm
itself operates over trees). If your data doesn't change much, you can
probably search efficiently, even in the case of unbound variables in the
query term, by indexing your data in several places (i.e., indexing with
some arguments ignored).
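
And a rough sketch of what such a tree-structured index might look like,
reusing the toy representation above; real discrimination/unification
trees in the literature are considerably more refined:

# A tree-structured index over top-level argument symbols. The key path
# for a term is (functor, arity) followed by one symbol per argument:
# its principal functor, or '*' for a variable. Lookup walks the tree,
# following the '*' branch alongside the exact branch, so variables on
# either side are handled conservatively.

def arg_symbol(arg):
    if is_var(arg):                 # is_var() as in the previous sketch
        return '*'
    return arg[0] if isinstance(arg, tuple) else arg

def key_path(term):
    functor, *args = term
    return [(functor, len(args))] + [arg_symbol(a) for a in args]

class IndexTree:
    def __init__(self):
        self.children = {}
        self.terms = []             # terms stored at the end of a path

    def add(self, term):
        node = self
        for sym in key_path(term):
            node = node.children.setdefault(sym, IndexTree())
        node.terms.append(term)

    def candidates(self, query):
        found = []
        def walk(node, path):
            if not path:
                found.extend(node.terms)
                return
            sym, rest = path[0], path[1:]
            # a query variable matches every branch; a concrete symbol
            # matches its own branch plus '*' (stored variables)
            branches = list(node.children) if sym == '*' else \
                       [b for b in (sym, '*') if b in node.children]
            for b in branches:
                walk(node.children[b], rest)
        walk(self, key_path(query))
        return found                # still a superset: unify each candidate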

Hope that is helpful,

-Benjamin Johnston






RE: [agi] the universe is computable [Was: Occam's Razor and its abuse]

2008-11-02 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  You can't compute the universe within this universe
  because the computation
  would have to include itself.
 
 Exactly. That is why our model of physics must be probabilistic
 (quantum mechanics).
 

I'd venture to say that ANY computation is an estimation unless the
computation is itself. To compute the universe you could estimate it, but
that computation is an estimation unless the computation is the universe.
Thus the universe itself IS an exact computation, just as a chair, for
example, is an exact computation existing uniquely as itself. Any other
computation of that chair is an estimation.

IOW, a computation is itself unless it is an approximation of something
else, in which case it is somewhere between partially exact and a partially
exact anti-representation. A computation mimicking another, identical
computation would be only partially exact, taking time and space into
account.

Though there may be some subatomic symmetric simultaneity that violates
what I'm saying above - not sure.

Also, it's early in the morning and I'm actually just blabbing here, so
this all may be relatively inexact :)

John





RE: [agi] Cloud Intelligence

2008-11-02 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote:
 
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
   Cloud computing is compatible with my proposal for distributed AGI.
 It's just not big enough. I would need 10^10 processors, each 10^3 to
 10^6 times more powerful than a PC.
  
 
  The only thing we have that comes close to those numbers is
  insect brains. Maybe something can be biogenetically engineered :)
  Somehow wire billions of insect brains together, modified in such a
  way that they are peer 2 peer, and emerge a greater intelligence :)
 
 Or molecular computing. The Earth has about 10^37 bits of data encoded
 in DNA*. Evolution executes a parallel algorithm that runs at 10^33
 operations per second**. This far exceeds the 10^25 bits of memory and
 10^27 OPS needed to simulate all the human brains on Earth as neural
 networks***.
 
 *Human DNA has 6 x 10^9 base pairs (diploid count) at 2 bits each ~
 10^10 bits. The human body has ~ 10^14 cells = 10^24 bits. There are ~
 10^10 humans ~ 10^34 bits. Humans make up 0.1% of the biomass ~ 10^37
 bits.
 
 **Cell replication ranges from 20 minutes in bacteria to ~ 1 year in
 human tissue. Assume 10^-4 replications per second on average ~ 10^33
 OPS. The figure would be much higher if you include RNA and protein
 synthesis.
 
 ***Assume 10^15 synapses per brain at 1 bit each and 10 ms resolution
 times 10^10 humans.
 


I agree on the molecular computing. The resources are there. I'm not sure,
though, how one would go about calculating the OPS of evolution's parallel
algorithm; it would be different from just the magnitude of cell
reproduction.
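
For reference, the quoted orders of magnitude can be reproduced with a few
lines of arithmetic (a sketch that simply takes the stated assumptions at
face value; none of these are measured values):

# Back-of-envelope reproduction of the quoted estimates.

genome_bits = 6e9 * 2             # diploid base pairs at 2 bits ~ 1e10 bits
human_bits  = genome_bits * 1e14  # ~1e14 cells per body       -> ~1e24 bits
all_humans  = human_bits * 1e10   # ~1e10 people               -> ~1e34 bits
biosphere   = all_humans / 1e-3   # humans ~0.1% of biomass    -> ~1e37 bits

evolution_ops = biosphere * 1e-4  # bit-copies/s at 1e-4 replications/s
                                  #                            -> ~1e33 OPS

brains_bits = 1e15 * 1 * 1e10     # synapses x 1 bit x people  -> 1e25 bits
brains_ops  = 1e15 * 100 * 1e10   # synapses x 100 Hz x people -> 1e27 OPS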

Still, though, I don't agree with your initial numbers estimate for AGI. A
bit high, perhaps? Your numbers could probably be trimmed down based on
refined assumptions.

John





RE: [agi] Cloud Intelligence

2008-11-02 Thread Matt Mahoney
--- On Sun, 11/2/08, John G. Rose [EMAIL PROTECTED] wrote:

 Still, though, I don't agree with your initial
 numbers estimate for AGI. A bit high, perhaps?
 Your numbers could probably be trimmed down
 based on refined assumptions.

True, we can't explain why the human brain needs 10^15 synapses to store 10^9 
bits of long-term memory (Landauer's estimate). Typical neural networks store 
0.15 to 0.25 bits per synapse.

I estimate a language model with 10^9 bits of complexity could be implemented 
using 10^9 to 10^10 synapses. However, time complexity is hard to estimate. A 
naive implementation would need around 10^18 to 10^19 operations to train on 1 
GB of text, though this could be sped up significantly if only a small 
fraction of neurons is active at any time.

Just looking at the speed/memory/accuracy tradeoffs of various models at 
http://cs.fit.edu/~mmahoney/compression/text.html (the 2 graphs below the main 
table), it seems that memory is more of a limitation than CPU speed. A 
real-time language model would be allowed 10-20 years.
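
One plausible reading of the 10^18 figure, assuming a naive dense model
that touches every synapse once per input byte (both the reading and the
snippet are illustrative assumptions, not the author's stated derivation):

synapses  = 1e9                   # low end of the 1e9..1e10 range above
text      = 1e9                   # bytes in 1 GB of training text
naive_ops = synapses * text       # ~1e18 (up to ~1e19 at the high end)

# Landauer's 1e9 bits spread over 1e15 synapses gives the puzzle above:
bits_per_synapse = 1e9 / 1e15     # 1e-6, far below the typical 0.15-0.25

print(f"{naive_ops:.0e} ops; {bits_per_synapse:.0e} bits/synapse in the brain")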

-- Matt Mahoney, [EMAIL PROTECTED]





[agi] Ben and Cassio quoted in Huffington Post Article

2008-11-02 Thread Ed Porter
Congratulations to two contributors to this list, Cassio Pennachin and Ben
Goertzel, for being quoted in an article on Huffington Post, entitled "Man
Versus Machine," about the role of computers in the recent financial crisis.
The article is at
http://www.huffingtonpost.com/2008/11/02/man-versus-machine_n_140115.html

I thought Cassio's remarks were some of the most interesting in the article.
I am guessing all the financial AI he and Ben did at Webmind made them
pretty knowledgeable about finance.

Ed Porter







[agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Nathan Cook
This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
chatbot programmed to have an 'evil' intentionality, from Scientific
American, may be of some interest to this list. Reading the researcher's
personal and laboratory websites (http://www.rpi.edu/~brings/ ,
http://www.cogsci.rpi.edu/research/rair/projects.php), it seems clear to me
that the program is more than an Eliza clone.

-- 
Nathan Cook





Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 7:17 AM, Nathan Cook [EMAIL PROTECTED] wrote:
 This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
 chatbot programmed to have an 'evil' intentionality, from Scientific
 American, may be of some interest to this list. Reading the researcher's
 personal and laboratory websites (http://www.rpi.edu/~brings/ ,
 http://www.cogsci.rpi.edu/research/rair/projects.php), it seems clear to me
 that the program is more than an Eliza clone.

"I've noticed lately that the paranoid fear of computers becoming
intelligent and taking over the world has almost entirely disappeared
from the common culture. Near as I can tell, this coincides with the
release of MS-DOS." (Larry DeLuca)

"If you have any trouble sounding condescending, find a Unix user to
show you how it's done." (Scott Adams)

:)

Trent




Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Bob Mottram
http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov




Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Mark Waser

 I've noticed lately that the paranoid fear of computers becoming
 intelligent and taking over the world has almost entirely disappeared
 from the common culture.

Is this sarcasm, irony, or are you that unaware of current popular culture 
(e.g. Terminator Chronicles on TV, a new Terminator movie in the works, I, 
Robot, etc.)?







Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 6:56 AM, Mark Waser [EMAIL PROTECTED] wrote:
 Is this sarcasm, irony, or are you that unaware of current popular culture
 (e.g. Terminator Chronicles on TV, a new Terminator movie in the works, I,
 Robot, etc.)?

The quote is from the early '80s, pre-Terminator hysteria.

Trent




Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 7:50 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov

Is it just me or is that mov broken?

The slides don't update, the audio is clipping, etc.

Interesting that they're using Piaget tasks in a virtual environment :)

Trent




Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Ben Goertzel
Hi,

I know Selmer Bringsjord (the leader of this project) and his work fairly
well.

He's an interesting guy, and I'm afraid of misrepresenting his views somehow
in a brief summary.  But I'll try.

First, an interesting point is that Selmer does not believe strong AI is
possible on traditional digital computers.  Possibly related to this is that
he is a serious Christian theological thinker.

Second, his approach to AI is very strongly logic-based, and his approach to
implementing morality is based on deontic logic, which is an attempt to
formalize explicit rules defining the structure of good and evil.

So, yes, his stuff is not ELIZA-like; it's based on a fairly sophisticated
crisp-logic-theorem-prover back end, and a well-thought-out cognitive
architecture.
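
To make "deontic logic" a little more concrete, here is a toy
forward-chaining sketch of crisp rules with a prohibition operator F. It
only illustrates the flavor of rule-based morality; it is not modeled on
Bringsjord's actual formalism or prover, and all the facts and rules are
invented for the example:

# Facts are tuples; ('F', act) marks an act as forbidden. Rules are
# applied by naive forward chaining until a fixed point is reached.

facts = {("harms", "deceiving", "victim"),
         ("did", "E", "deceiving")}

rules = [
    # an act that harms someone is forbidden: harms(A, _) => F(A)
    lambda fs: {("F", f[1]) for f in fs if f[0] == "harms"},
    # performing a forbidden act makes the agent count as "evil"
    lambda fs: {("evil", f[1]) for f in fs
                if f[0] == "did" and ("F", f[2]) in fs},
]

def forward_chain(facts, rules):
    derived = set(facts)
    while True:
        new = set().union(*(rule(derived) for rule in rules)) - derived
        if not new:
            return derived
        derived |= new

print(("evil", "E") in forward_chain(facts, rules))   # True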

On the other hand, my own strong feeling is that this kind of
crisp-logic-theorem-proving based approach to AI is never going to achieve
any kind of broad, deep or interesting general intelligence ... even though
it may do things that on the surface, in toy domains, give that appearance
due to their clever underlying logical formalism.  I stress that Selmer is a
very deep-thinking and insightful and creative guy, but nonetheless, I think
the basic approach is far too limited and ultimately wrongheaded where AGI
is concerned.

My view is that these AI systems of his are not acting evil in any
significant way -- rather, they are formulaically enacting formal structures
that some humans created in order to capture some abstract properties of
evil.  But without the grounding in perception, action and reflective
pattern-recognition, there is no evil there ... any more than a sketch of
rat poison is actually poisonous...

-- Ben G





On Sun, Nov 2, 2008 at 4:17 PM, Nathan Cook [EMAIL PROTECTED] wrote:

 This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
 chatbot programmed to have an 'evil' intentionality, from Scientific
 American, may be of some interest to this list. Reading the researcher's
 personal and laboratory websites (http://www.rpi.edu/~brings/,
 http://www.cogsci.rpi.edu/research/rair/projects.php), it seems clear to
 me that the program is more than an Eliza clone.

 --
 Nathan Cook




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] Ben and Cassio quoted in Huffington Post Article

2008-11-02 Thread Ben Goertzel
Cassio has an MBA as well as being an AI guy ... and yah, we've done a lot
of computational finance together.

Of course, the reporter left out the more interesting things I said to him
in our discussion ... and, the same is probably the case for most of the
other interviews he did.  It would be nice to read the full transcripts of
his discussions with the various folks he interviewed.  But he picked and
chose selected quotes from each person to make a sort of patchwork, and I
think this was a reasonable choice to make.  The one message that came
through loud and clear from all the interviews was that even though AI and
other advanced computer methods played a role in the crisis, they were being
explicitly used as tools by people, and the real problems came from the way
humans knowingly used the tools in irresponsible ways (too much leverage ...
pretending they understood the risks better than they did ... etc.).

-- Ben G

On Sun, Nov 2, 2008 at 11:08 AM, Ed Porter [EMAIL PROTECTED] wrote:

  Congratulations to two contributors to this list, Cassio Pennachin and
 Ben Goertzel, for being quoted in an article on Huffington Post, entitled
 "Man Versus Machine," about the role of computers in the recent financial
 crisis.  The article is at
 http://www.huffingtonpost.com/2008/11/02/man-versus-machine_n_140115.html

 I thought Cassio's remarks were some of the most interesting in the
 article.  I am guessing all the financial AI he and Ben did at Webmind made
 them pretty knowledgeable about finance.

 Ed Porter





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 1:22 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 So, yes, his stuff is not ELIZA-like; it's based on a fairly sophisticated
 crisp-logic-theorem-prover back end, and a well-thought-out cognitive
 architecture.


From what I saw in the presentation, it looks like this is an entirely
engineered system, complete from NL-to-temporal-logic conversion down
to theorem proving.  If the system does any second-order learning then
he didn't mention it in that presentation, and you'd expect him to do
so if it does.

The part I found most amusing was the applications slide.  It was
sort of a "I don't need to tell you that there are lots of
applications" slide, which is not something one can comment on without
knowing the capabilities of the system.  But from what I saw, I'm
pretty sure its use in MMOs would be only slightly better than
scripting, and would require a LOT more processing power.

On the other hand, if it can do significant learning then I can
imagine it would do well in the applications he listed.

Trent




Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Ben Goertzel
In terms of MMOs, I suppose you could think of Selmer's approach as allowing
scripting in a highly customized variant of Prolog ... which might not be a
bad thing, but is different from creating learning systems...

-- Ben G

On Sun, Nov 2, 2008 at 10:51 PM, Trent Waddington 
[EMAIL PROTECTED] wrote:

 On Mon, Nov 3, 2008 at 1:22 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
  So, yes, his stuff is not ELIZA-like; it's based on a fairly
  sophisticated crisp-logic-theorem-prover back end, and a well-thought-out
  cognitive architecture.
 

 From what I saw in the presentation, it looks like this is an entirely
 engineered system, complete from NL-to-temporal-logic conversion down
 to theorem proving.  If the system does any second-order learning then
 he didn't mention it in that presentation, and you'd expect him to do
 so if it does.

 The part I found most amusing was the applications slide.  It was
 sort of a "I don't need to tell you that there are lots of
 applications" slide, which is not something one can comment on without
 knowing the capabilities of the system.  But from what I saw, I'm
 pretty sure its use in MMOs would be only slightly better than
 scripting, and would require a LOT more processing power.

 On the other hand, if it can do significant learning then I can
 imagine it would do well in the applications he listed.

 Trent






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Steve Richfield
Ben,

On 11/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 First, an interesting point is that Selmer does not believe strong AI is
 possible on traditional digital computers.  Possibly related to this is that
 he is a serious Christian theological thinker.


Taking off my AGI hat and putting on my Simulated Christian hat for a
moment...

Man was thrown out of the Garden of Eden for partaking of the fruit of the
tree of knowledge of good and evil. You know that tree at the center of the
Garden of Eden? If you read the Book of Genesis in the Bible, you'll see
that it was NOT an apple tree, but rather the tree of knowledge of good and
evil. There is an important lesson here which Selmer obviously failed to get
if/when he read Genesis. If his work succeeds, by his own beliefs, he will
cast man out of the potential utopia that AGI might bring, probably by
poisoning AGIs with his ideas. By both Christian and secular reasoning,
Selmer may be one of the most dangerous people in the world. There is a
snake in every garden, and in this case, it is Selmer.

BTW, the Koran corrects this, stating that it was an apple tree in the
Garden of Eden - effectively denying this lesson.


 Second, his approach to AI is very strongly logic-based, and his approach
 to implementing morality is based on deontic logic, which is an attempt to
 formalize explicit rules defining the structure of good and evil.


Egad.

Steve Richfield





Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 4:50 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Taking off my AGI hat and putting on my Simulated Christian hat for a
 moment...

Must you?

Trent

