Re: [agi]

2008-03-13 Thread Eric B. Ramsay
So Ben, based on what you are saying, you fully expect them to fail their 
Turing test?

Eric B. Ramsay

Ben Goertzel [EMAIL PROTECTED] wrote: I know Selmer and his group pretty 
well...

It is well-done stuff, but it is purely hard-coded, knowledge-based logical
inference -- there is no real learning there...

It's not so hard to get impressive-looking functionality in toy demo tasks by
hard-coding rules and using a decent logic engine.

Others have failed at this, so his achievement is worthwhile and means his logic
engine and formalism are better than most ... but still ... IMO, this
is not a very likely
path to AGI ...
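
To illustrate (a deliberately toy sketch of my own, not Selmer's system or
anything like it, with all facts and names made up): a handful of hand-written
rules plus a naive forward-chaining loop already produces convincing-looking
"inference" in a narrow demo domain, and nothing in it ever learns anything.

# Hypothetical toy example, not RPI code: hard-coded facts and rules plus
# a naive forward-chaining engine.
facts = {("man", "socrates"), ("teacher_of", "socrates", "plato")}

rules = [
    # (premises, conclusion) pairs, every one supplied by the programmer
    ({("man", "socrates")}, ("mortal", "socrates")),
    ({("teacher_of", "socrates", "plato")}, ("student_of", "plato", "socrates")),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises are already known facts, until nothing changes."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(set(facts), rules))

The apparent competence all comes from whoever typed in the rules; change the
domain and it evaporates.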

-- Ben

On Thu, Mar 13, 2008 at 10:30 AM, Ed Porter  wrote:
 Here is an article about RPI's attempt to pass a slightly modified version
  of the Turing test using supercomputers to power their Rascals AI
  algorithm.

  http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=206903246&printable=true

  The one thing I didn't understand was that they said their Rascals AI
  algorithm used a theorem-proving architecture.  I would assume that would
  mean it was based on binary logic, and thus would not be sufficiently
  flexible to model many human thought processes, which are almost certainly
  more neural-net-like, and thus much more probabilistic.

  Does anybody have any opinions on that?
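
  To make the contrast I have in mind concrete, here is a purely made-up
  sketch (nothing from the actual Rascals code; all numbers are invented): a
  binary theorem prover either derives a conclusion or it does not, while a
  probabilistic treatment of the same inference carries a graded degree of
  belief.

# Hypothetical illustration only.
# Binary logic: modus ponens over hard truth values.
premise_holds = True
rule_holds = True
conclusion = premise_holds and rule_holds   # strictly True or False

# Probabilistic analogue: beliefs are degrees between 0 and 1.
p_premise = 0.9    # belief that the premise holds
p_rule = 0.8       # belief that the premise implies the conclusion
p_conclusion = p_premise * p_rule   # P(premise and rule both hold), assuming
                                    # independence; graded rather than yes/no

print(conclusion, p_conclusion)     # True 0.72 (approximately)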

  Ed Porter





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Causality challenge

2008-03-07 Thread Eric B. Ramsay
Are any of the AI folks here competing in this challenge?

http://www.causality.inf.ethz.ch/challenge.php

Eric B. Ramsay




Re: [agi] Circular definitions of intelligence

2007-04-26 Thread Eric B. Ramsay
Several emails ago, both Ben and Richard said they were no longer going to
continue this argument, yet here they are - still arguing. Will the definition
of intelligence be able to accommodate this behavior by these gentlemen?

Benjamin Goertzel [EMAIL PROTECTED] wrote:  

   -  When you try to cash out that compression function, I claim, you
will end up in a situation where the system's real world behavior 
depends on exactly which 'patterns' it chooses to go hunting for, and
how it deploys them.  The devil is in the details that you do not
specify here, so any decision about whether this formalism really is 
coextensive with commonsense intelligence is pure speculation.

I don't really understand your response...

What I said, in less formal terms, is:

1) intelligence is defined as the ability to optimize complex functions 

2) complexity of a function is defined as having lots of patterns
in its graph

3) to make 2 operational, you need to specify it as having lots of
patterns in its graph, according to pattern-recognizer S 

Which step does your response pertain to?  The patterns hunted by
the system whose intelligence is being defined (in 1), or the patterns
hunted by the system S assessing the intelligence (in 3) ??
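
Roughly, in symbols (this is only a schematic way of writing 1-3, with the
details -- the function space, the weighting, and the recognizer S --
deliberately left open):

  Int(A) ~ E_f [ opt(A, f) * cplx_S(f) ]
  cplx_S(f) = amount of pattern that recognizer S finds in the graph of f

where Int, opt and cplx_S are just shorthand here: opt(A, f) measures how well
system A optimizes the function f, and the expectation (or weighted sum) runs
over the space of candidate functions.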

Ben G 


  

RE: [agi] How should an AGI ponder about mathematics

2007-04-24 Thread Eric B. Ramsay
The more problematic issue is what happens if you non-destructively up-load
your mind? What do you do with the original, which still considers itself you?
The up-load also considers itself you and may suggest a bullet.

Matt Mahoney [EMAIL PROTECTED] wrote:  
--- John G. Rose wrote:

 A baby AGI has an immense advantage. It's starting (life?) after billions of
 years of evolution and thousands of years of civilization. A 5 YO child
 can't float all languages, all science, all mathematics, all recorded
 history, all encyclopedias, etc. in sub-millisecond RAM and be able to
 interconnect to almost any type of electronics. There are a lot of
 comparisons of a 5YO with an AGI but I wonder about those... are we just
 anthropomorphizing AGI by coming up with a tabula rasa feel-good AGI that
 needs to learn like a cute human baby? Our brains are good, I mean they are
 us, but aren't they just biological blobs of goop that are half-assed excuses
 for intelligence? I mean, why are AGIs coming about anyway? Is it because
 our brains are awesome and fulfill all of our needs? No. We need to be
 uploaded, otherwise we die.

I thought the reason for building an AGI was so we would have a utopian
society where machines do all the work. Uploading raises troubling questions.
How far can the copied mind stray from the original before you die? How do
you distinguish between consciousness (sense of self) and the programmed
belief in consciousness, free will, and fear of death that all animals possess
because it confers a survival advantage? What happens if you reprogram your
uploaded mind not to have these beliefs? Would it then be OK to turn it off?



-- Matt Mahoney, [EMAIL PROTECTED]


Re: [agi] How should an AGI ponder about mathematics

2007-04-24 Thread Eric B. Ramsay
Your twin example is not a good choice. The upload will consider itself to have
a claim on the contents of your life - financial resources, for example.

Eugen Leitl [EMAIL PROTECTED] wrote:  On Tue, Apr 24, 2007 at 07:09:22AM 
-0700, Eric B. Ramsay wrote:

 The more problematic issue is what happens if you non-destructively
 up-load your mind? What do you do with the original which still

It's a theoretical problem for any of us on this list. Nondestructive
scans require medical nanotechnology.

 considers itself you? The up-load also considers itself you and may
 suggest a bullet.

How is that different from identical twins? I hope you're not suggesting
suicide to your twin brother.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE


Re: [tt] [agi] Definition of 'Singularity' and 'Mind'

2007-04-18 Thread Eric B. Ramsay
Actually, Richard, these are the things you imagine you would like to do given
your current level of intelligence. I suspect very much that the moment you
went superintelligent, there would be a paradigm change in what you consider
fun.
  Eric

Richard Loosemore [EMAIL PROTECTED] wrote:
  Eugen Leitl wrote:
 On Wed, Apr 18, 2007 at 03:54:50AM -0400, Randall wrote:
 
 I can't for the life of me imagine why anyone who had seen the 
 elephant would choose to go back to being Mundane.
 
 The question is also whether they could, if they wanted to.
 A neanderthal wouldn't function well in today's society,
 and anything lesser would run a good chance of becoming roadkill.
 
 If I could flip a switch and increase my _g_ by two orders of 
 magnitude, I'd never flip that switch back. Why would anybody?
 
 I wouldn't. But I wouldn't max out the knob immediately, either.
 I would just go for a slow, sustainable growth, at least as long as
 nobody else is rushing ahead.
 

[META COMMENT. Is it my imagination, or have some funny things been 
happening to the AGI and/or Singularity lists recently... e.g. 
delivery of messages as if they were offlist?]

I think you are looking at the possibilities through far too narrow a prism.

Consider. Would it be interesting to find out what it is like to be, say, a 
tiger? A whale? A dolphin? I can think of ways to temporarily get 
transferred into the form of any reasonably high-level animal, then come 
back to being human again later, with at least some memories of what it was 
like to have been in that state.

In a future in which all these things are possible, why would people not 
be interested in having this kind of fun?

Now imagine the possibility of becoming superintelligent. That could 
get kind of heavy after a while. I do not necessarily think that I want 
to know about all of the science in human history, for example, to such 
a deep extent that it would be as if I had been teaching it for 
centuries, and was bored with every last bit of it. Would you?

I would want to have fun. And the big part of having fun would be 
finding out new stuff.

So, yes, I would want to become superintelligent occasionally, but it 
seems to me that the more intelligent I become, the more I know about 
complex problems I cannot fix, and the more that frustrates me. That's 
not fun after a while. Sometimes it would be nice to go back to just 
being a kid for a while.

Then there is the possibility of recreating historical situations. I 
would like to be able to be one of the people who was around when none 
of modern science existed, just so I could try to discover that stuff 
when it was new. To do that I would have to reduce my current knowledge 
by putting it on ice for a while.

And on and on: I can think of vast numbers of reasons not to do the 
boring thing of just trying to get into a high-intelligence brain.

It's not the destination, folks, it's the journey.




Richard Loosemore


[agi] Low I.Q. AGI

2007-04-15 Thread Eric B. Ramsay
There is an easy assumption among most writers on this board that once the AGI 
exists, its route to becoming a singularity is a sure thing. Why is that? In 
humans there is a wide range of smartness in the population. People face 
intellectual thresholds that they cannot cross because they just do not have 
enough of this smartness thing. Although as a physicist I understand General 
Relativity, I really doubt that, had it been left up to me, it would ever have 
been discovered - no matter how much time I was given. Do neuroscientists know 
where this talent difference comes from in terms of brain structure? Where in 
the designs for other AGIs (Ben's, for example) is the smartness of the AGI 
designed in? I can see how an awareness may bubble up from a design, but this 
doesn't mean a system smart enough to move itself towards being a singularity. 
Even if you feed the system all the information in the world, it would know a 
lot but not be any smarter, or even know how to make itself smarter. How many 
years of training will we give a brand new AGI before we decide it's retarded?


Re: [agi] A Course on Foundations of Theoretical Psychology...

2007-04-13 Thread Eric B. Ramsay
I would certainly be interested. Ask Ben if you can use the Novamente pavilion 
in Second Life and conduct the workshop there (or maybe the IBM pavilion, which 
is actually better set up). More people could attend this way and keep costs 
down.
   
  Eric B. Ramsay

Richard Loosemore [EMAIL PROTECTED] wrote:
  


I wonder...

How many people on this list would actually go to the trouble, if they 
could, of signing up for a truly comprehensive course in the foundations 
of AI/CogPsy/Neuroscience, which would give them a grounding in all of 
these fields and put them in a position where they had a unified picture 
of what kind of skills would be needed to build an AGI?

I am sure the people who already have established careers would not be 
interested, but what of the people who are burning with passion to get 
some real progress to happen in AGI, and don't know where to put 
their energies?

What if I organised a summer school to do such a thing?

This is just my spontaneous first thought about the idea. If there was 
enough initial interest, I would be happy to scope it out more thoroughly.





Richard Loosemore
