RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 
 --- John G. Rose [EMAIL PROTECTED] wrote:
 
  
   There is no way to know if we are living in a nested simulation, or even
   in a single simulation.  However, there is a mathematical model: enumerate
   all Turing machines to find one that simulates a universe with intelligent
   life.

  What if that nest of simulations loops around somehow? What was that idea
  where there is this new advanced microscope that can see smaller than ever
  before, and you look into it and see an image of yourself looking into it...

 The simulations can't loop because the simulator needs at least as much
 memory as the machine being simulated.

 

You're making assumptions when you say that. Outside of a particular
simulation we don't know the rules. If this universe is simulated, the
simulator's reality could be drastically, unimaginably different from the
laws of this universe. There could also be data buses between simulations,
the simulations could intersect, or a simulation may break the constraints
of its containing simulation somehow and tunnel out.

John


---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=98631122-712fa4
Powered by Listbox: http://www.listbox.com


RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  The simulations can't loop because the simulator needs at least as much
  memory as the machine being simulated.

 You're making assumptions when you say that. Outside of a particular
 simulation we don't know the rules. If this universe is simulated, the
 simulator's reality could be drastically, unimaginably different from the
 laws of this universe. There could also be data buses between simulations,
 the simulations could intersect, or a simulation may break the constraints
 of its containing simulation somehow and tunnel out.

I am assuming finite memory.  For the universe we observe, the Bekenstein
bound of the Hubble radius is 2 pi^2 T^2 c^5 / (h G ln 2) ≈ 2.91 x 10^122
bits (T = age of the universe = 13.7 billion years, c = speed of light, h =
Planck's constant, G = gravitational constant).  There is not enough material
in the universe to build a larger memory.  However, a universe up the
hierarchy might be simulated by a Turing machine with infinite memory, or by
a more powerful machine such as one with real-valued registers.  In that case
the restriction does not apply.  For example, a real-valued function can
contain nested copies of itself infinitely deep.
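[As a sanity check on the arithmetic, the bound can be evaluated directly. A minimal sketch; the constants are rounded assumed values, and the ln 2 factor converts the nat-valued bound to bits, which is what makes the total come out near 2.9 x 10^122:]

```python
import math

# Rounded physical constants (assumed approximate values).
c = 2.998e8           # speed of light, m/s
h = 6.626e-34         # Planck's constant, J*s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
T = 13.7e9 * 3.156e7  # age of the universe in seconds (~4.32e17 s)

# Bekenstein bound of the Hubble radius, in bits (ln 2 converts nats to bits).
bits = 2 * math.pi**2 * T**2 * c**5 / (h * G * math.log(2))
print(f"{bits:.2e}")  # on the order of 2.9e122
```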


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:
 How do you resolve disagreements? 

This is a problem for all large databases and multiuser AI systems.  In my
design, messages are identified by source (not necessarily a person) and a
timestamp.  The network economy rewards those sources that provide the most
useful (correct) information. There is an incentive to produce reputation
managers which rank other sources and forward messages from highly ranked
sources, because those managers themselves become highly ranked.

Google handles this problem with its PageRank algorithm, although I believe
that better (not perfect) solutions are possible in a distributed,
competitive environment.  I believe these solutions will be deployed early
and will be the subject of intense research, because the problem is so
large.  The network I described is vulnerable to spammers and hackers
deliberately injecting false or forged information.  The protocol can only
do so much; I designed it to minimize these risks.  Thus, there is no
procedure to delete or alter messages once they are posted.  Message
recipients are responsible for verifying the identity and timestamps of
senders and for filtering spam and malicious messages, at risk of having
their own reputations lowered if they fail.
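[The ranking idea above, that a source becomes highly ranked when other highly ranked sources endorse it, is the same principle behind PageRank. A toy power-iteration sketch, with a made-up four-node link graph and the conventional 0.85 damping factor; illustrative only, not Google's production algorithm:]

```python
# Toy PageRank by power iteration. The link graph, damping factor, and
# iteration count are illustrative; real deployments use sparse matrix
# math at web scale.
def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = rank[u] / len(outs)
                for v in outs:
                    new[v] += damping * share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# A source endorsed by well-ranked sources ends up well ranked itself.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "c" collects the most endorsements
```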


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Matt Mahoney wrote:

--- Mike Tintner [EMAIL PROTECTED] wrote:


My point was how do you test the *truth* of items of knowledge. Google tests
the *popularity* of items. Not the same thing at all. And it won't work.


It does work because the truth is popular.  Look at prediction markets.  Look
at Wikipedia.  It is well known that groups make better decisions as a whole
than the individuals in those groups (e.g. democracies vs. dictatorships).
Combining knowledge from independent sources and testing their reliability is
a well-known machine learning technique, which I use in the PAQ data
compression series.  I understand the majority can sometimes be wrong, but
the truth eventually comes out in a marketplace that rewards truth.
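[The source-combining technique mentioned in the quoted paragraph can be illustrated by the classic weighted-majority algorithm. A simplified sketch: PAQ itself mixes bit-level predictor outputs, and the sources and penalty factor below are invented for illustration:]

```python
# Weighted-majority voting: combine independent sources and track their
# reliability by shrinking the weight of any source that was wrong.
def weighted_majority(predictions, truth, weights, beta=0.5):
    """predictions: list of 0/1 votes, one per source.  Returns the
    combined vote and updates weights in place, penalizing errors."""
    yes = sum(w for p, w in zip(predictions, weights) if p == 1)
    no = sum(w for p, w in zip(predictions, weights) if p == 0)
    vote = 1 if yes >= no else 0
    for i, p in enumerate(predictions):
        if p != truth:
            weights[i] *= beta  # unreliable sources lose influence
    return vote

weights = [1.0, 1.0, 1.0]
# Source 2 is always wrong; after four rounds its weight is negligible.
for truth in [1, 0, 1, 1]:
    votes = [truth, truth, 1 - truth]
    weighted_majority(votes, truth, weights)
print(weights)  # -> [1.0, 1.0, 0.0625]
```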

Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
or don't understand it.


Some of us have read it, and it has nothing whatsoever to do with 
Artificial Intelligence.  It is a labor-intensive search engine, nothing 
more.


I have no idea why you would call it an AI or an AGI.  It is not
autonomous, contains no thinking mechanisms, nothing.  Even as a
labor-intensive search engine there is no guarantee it would work, because
the conflict resolution issues are all complexity-governed.


I am astonished that you would so blatantly call it something that it is 
not.




Richard Loosemore



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

   Of course what I imagine emerging from the Internet bears little
   resemblance to Novamente.  It is simply too big to invest in directly,
   but it will present many opportunities.

 But the emergence of superhuman AGIs, like a Novamente may eventually
 become, will both dramatically alter the nature of, and dramatically
 reduce the cost of, global brains such as you envision...

Yes, like the difference between writing a web browser and defining the HTTP
protocol, each costing a tiny fraction of the value of the Internet but with a
huge impact on its outcome.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
  or don't understand it.
 
 Some of us have read it, and it has nothing whatsoever to do with 
 Artificial Intelligence.  It is a labor-intensive search engine, nothing 
 more.
 
 I have no idea why you would call it an AI or an AGI.  It is not
 autonomous, contains no thinking mechanisms, nothing.  Even as a
 labor-intensive search engine there is no guarantee it would work, because
 the conflict resolution issues are all complexity-governed.
 
 I am astonished that you would so blatantly call it something that it is 
 not.

It is not now.  I think it will be in 30 years.  If I were to describe the
Internet to you in 1978, I think you would scoff too.  We were supposed to
have flying cars and robotic butlers by now.  How could Google make $145
billion by building an index of something that didn't even exist?

Just what do you want out of AGI?  Something that thinks like a person or
something that does what you ask it to?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Samantha Atkins


On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:


Matt Mahoney writes:

 Just what do you want out of AGI? Something that thinks like a person or
 something that does what you ask it to?


The "or" is interesting.  If it really thinks like a person, and at least
at human level, then I doubt very much it will do what you ask any more
often than people do.


What I want out of AGI is something that thinks a lot better, deeper,
faster, and richer than human beings do.  I would prefer it to be a
colleague, but I doubt it would find me very interesting for long.





I think this is an excellent question, one I do not have a clear answer
to myself, even for my own use.


Imagine we have an AGI.  What exactly does it do?  What *should* it do?


"It does whatever we tell it" is not good enough.  What would we tell it
to do?


Beware the wish-granting genie conundrum.

And no wigged-out sci-fi allowed; you can't say "invent molecular
nanotechnology and build me a Dyson sphere" -- first, because such a
vision is completely unhelpful in guiding how to get there, and second
because there's no reason to think a currently-envisionable AGI would be
millions of times smarter than all of humanity put together.




It doesn't need to be.  If it could simply pull together all relevant
research more efficiently, and had greater capacity to consider more facets
at once, then it could suggest new directions and form new integrations
that humans would not see, and thus be more likely to arrive at solutions
than all current researchers.






Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
or don't understand it.

Some of us have read it, and it has nothing whatsoever to do with
Artificial Intelligence.  It is a labor-intensive search engine, nothing
more.


I have no idea why you would call it an AI or an AGI.  It is not
autonomous, contains no thinking mechanisms, nothing.  Even as a
labor-intensive search engine there is no guarantee it would work, because
the conflict resolution issues are all complexity-governed.


I am astonished that you would so blatantly call it something that it is 
not.


It is not now.  I think it will be in 30 years.  If I were to describe the
Internet to you in 1978, I think you would scoff too.  We were supposed to
have flying cars and robotic butlers by now.  How could Google make $145
billion by building an index of something that didn't even exist?

Just what do you want out of AGI?  Something that thinks like a person or
something that does what you ask it to?


Either will do:  your suggestion achieves neither.

If I ask your non-AGI the following question: "How can I build an AGI
that can think at a speed that is 1000 times faster than the speed of
human thought?" it will say:


   "Hi, my name is Ben and I just picked up your question.  I would
    love to give you the answer, but you have to send $20 million
    and give me a few years."

That is not the answer I would expect of an AGI.  A real AGI would do 
original research to solve the problem, and solve it *itself*.


Isn't this, like, just too obvious for words?  ;-)



Richard Loosemore




RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
I asked: Imagine we have an AGI.  What exactly does it do?  What *should* it
do?

Note that I think I roughly understand Matt's vision for this: roughly, it is
Google, and it will gradually get better at answering questions and taking
commands as more capable systems are linked into the network.  When and
whether it passes the AGI threshold is a rather arbitrary and unimportant
issue; it just gets more capable of answering questions and taking orders.

I find that a very interesting and clear vision.  I'm wondering if there are
others.
 



RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Richard Loosemore: "I am not sure I understand.  There is every reason to
think that a currently-envisionable AGI would be millions of times smarter
than all of humanity put together.  Simply build a human-level AGI, then get
it to bootstrap to a level of, say, a thousand times human speed (easy
enough: we are not asking for better thinking processes, just faster
implementation), then ask it to compact itself enough that we can afford to
build and run a few billion of these systems in parallel..."

This viewpoint assumes that human intelligence is essentially trivial; I see
no evidence for this, and tend to assume that a properly-programmed Game Boy
is not going to pass the Turing test.  I realize that people on this list
tend to be more optimistic on this subject, so I do accept your answer as
one viewpoint.  It is surely a minority view, though, and my question only
makes sense if you assume significant limitations in the capability of
near-term hardware.
 



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Derek Zahn wrote:

I asked:
  Imagine we have an AGI.  What exactly does it do?  What *should* it do?
 
Note that I think I roughly understand Matt's vision for this: roughly,
it is Google, and it will gradually get better at answering questions
and taking commands as more capable systems are linked into the
network.  When and whether it passes the AGI threshold is a rather
arbitrary and unimportant issue; it just gets more capable of answering
questions and taking orders.
 
I find that a very interesting and clear vision.  I'm wondering if there 
are others.


Surely not!

This line of argument looks like a new version of the same story that
occurred in the very early days of science fiction.  People looked at
the newly-forming telephone system and thought that maybe, if it just
got big enough, it might become... intelligent.


Their reasoning was... well, there wasn't any reasoning behind the
idea.  It was just a mystical "maybe lots of this will somehow add up to
more than the sum of the parts," without any justification for why the
whole should be more than the sum of the parts.


In exactly the same way, there is absolutely no reason to believe that
Google will somehow reach a threshold and (magically) become
intelligent.  Why would that happen?


If they deliberately set out to build an AGI somewhere, and then hooked
that up to Google, that would be a different matter entirely.  But that is
not what is being suggested here.






Richard Loosemore.



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore:

  I am not sure I understand.
 
  There is every reason to think that a currently-envisionable AGI would
  be millions of times smarter than all of humanity put together.
 
  Simply build a human-level AGI, then get it to bootstrap to a level of,
  say, a thousand times human speed (easy enough: we are not asking for
  better thinking processes, just faster implementation), then ask it to
  compact itself enough that we can afford to build and run a few billion
  of these systems in parallel
 
This viewpoint assumes that human intelligence is essentially trivial; I
see no evidence for this, and tend to assume that a properly-programmed
Game Boy is not going to pass the Turing test.  I realize that people on
this list tend to be more optimistic on this subject, so I do accept your
answer as one viewpoint.  It is surely a minority view, though, and my
question only makes sense if you assume significant limitations in the
capability of near-term hardware.


But if you want to make a meaningful statement about limitations, would
it not be prudent to start from a clear understanding of how the size of
the task can be measured, and how those measurements relate to the
available resources?  If there is no information at all, we could not
make a statement either way.


Without knowing how to bake a cake, or what the contents of your pantry
are, I don't think you can state that "we simply do not have what it
takes to bake a cake in the near future."


I am only saying that I see no particular limitations, given the things
that I know about how to build an AGI.  That is the best I can do.




Richard Loosemore



RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Samantha Atkins writes:

 Beware the wish-granting genie conundrum.

Yeah, you put it better than I did; I'm not asking what wishes we'd ask a
genie to grant, I'm wondering specifically what we want from the machines
that Ben and Richard and Matt and so on are thinking about and building.

Simple things like "robot, clean my house" are valuable because they focus
clearly on what capabilities the robot probably has to have to achieve it.
Similarly, "pass the Turing test" is valuable; we can wonder specifically
about what the machine would have to do to achieve that goal.  "Do science"
is a bit too vague for me, though.
 



RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Richard Loosemore: "I am only saying that I see no particular limitations,
given the things that I know about how to build an AGI.  That is the best I
can do."

Sorry to flood everybody's mailbox today; I will make this my last message.

I'm not looking to impose a viewpoint on anybody; you have communicated
yours, and from your perspective a question like "what should an AGI do?"
(or "what is AGI for?") is not particularly meaningful -- I gather that,
from your perspective, once your many-year, multibillion-dollar research
program is finished, the result will be almost magically powerful, so
speaking of its uses or capabilities in terms that could help us figure out
what we are building is not useful.  Others have different viewpoints,
either published (Kurzweil, Moravec, etc., and even some pessimists perhaps
:) ) or privately held, about the cognitive capabilities of near-term
computing power, that lead to different conclusions.  You argue forcefully
for your position, but it isn't the only one.
 
 



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  Just what do you want out of AGI?  Something that thinks like a person or
  something that does what you ask it to?
 
 Either will do:  your suggestion achieves neither.
 
 If I ask your non-AGI the following question: "How can I build an AGI
 that can think at a speed that is 1000 times faster than the speed of
 human thought?" it will say:

 "Hi, my name is Ben and I just picked up your question.  I would
  love to give you the answer, but you have to send $20 million
  and give me a few years."
 
 That is not the answer I would expect of an AGI.  A real AGI would do 
 original research to solve the problem, and solve it *itself*.
 
 Isn't this, like, just too obvious for words?  ;-)

Your question is not well formed.  Computers can already think 1000 times
faster than humans for things like arithmetic.  Does your AGI also need to
know how to feed your dog?  Or should it guess and build it anyway?  I would
think such a system would be dangerous.

I expect a competitive message passing network to improve over time.  Early
versions will work like an interactive search engine.  You may get web pages
or an answer from another human in real time, and you may later receive
responses to your persistent query.  If your question can be matched to an
expert in some domain that happens to be on the net, then it gets routed
there.  Google already does this.  For example, if you type an address, it
gives you a map and offers driving directions.  If you ask it "how many
teaspoons in a cubic parsec?" it will compute the answer (try it).  It won't
answer every question, but with 1000 times more computing power than Google, I
expect there will be many more domain experts.
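[The cubic-parsec question is ordinary unit arithmetic, which is why it is tractable for a calculator-style domain expert. A back-of-the-envelope sketch, assuming the US teaspoon (about 4.93 mL) and the standard parsec; this illustrates the computation, not Google's actual calculator:]

```python
# Illustrative unit conversion; constants are rounded assumed values.
PARSEC_M = 3.0857e16      # meters per parsec
TEASPOON_M3 = 4.92892e-6  # cubic meters per US teaspoon

teaspoons = PARSEC_M ** 3 / TEASPOON_M3
print(f"{teaspoons:.2e}")  # roughly 6e54 teaspoons per cubic parsec
```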

I expect as hardware gets more powerful, peers will get better at things like
recognizing people in images, writing programs, and doing original research. 
I don't claim that I can solve these problems.  I do claim that there is an
incentive to provide these services and that the problems are not intractable
given powerful hardware, and therefore the services will be provided.  Two
things make the problem easier.  First, peers will have access to a vast
knowledge source that does not exist today.  Second, peers can specialize in
a narrow domain, e.g. only recognizing one particular person in images, or
writing software or doing research in some obscure, specialized field.

Is this labor intensive?  Yes.  A $1 quadrillion system won't just build
itself.  People will build it because they will get back more value than they
put in.


-- Matt Mahoney, [EMAIL PROTECTED]
