RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-10 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> --- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> 
> > > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > > The simulations can't loop because the simulator needs at least as
> > > much memory as the machine being simulated.
> > >
> >
> > You're making assumptions when you say that. Outside of a particular
> > simulation we don't know the rules. If this universe is simulated, the
> > simulator's reality could be drastically, unimaginably different from
> > the laws of this universe. Also, there could be data buses between
> > simulations, and the simulations could intersect, or a simulation may
> > break the constraints of its containing simulation somehow and tunnel
> > out.
> 
> I am assuming finite memory.  For the universe we observe, the
> Bekenstein bound of the Hubble radius is 2 pi^2 T^2 c^5/(hG) = 2.91 x
> 10^122 bits.  (T = age of the universe = 13.7 billion years, c = speed
> of light, h = Planck's constant, G = gravitational constant).  There is
> not enough material in the universe to build a larger memory.  However,
> a universe up the hierarchy might be simulated by a Turing machine with
> infinite memory or by a more powerful machine such as one with
> real-valued registers.  In that case the restriction does not apply.
> For example, a real-valued function can contain nested copies of itself
> infinitely deep.
> 
> 

That's assuming that our whole universe is simulated, not just portions of
it. Also, wouldn't a Turing machine with real-valued registers still be
limited by the universal constants of the universe it runs in? IOW,
real-valued registers are only theoretically infinitely subdividable, not
realistically. And then there are the simulation characteristics - what
exactly is implied by "simulation"? Is everything precalculated and
rendered? The characteristics may be undeterminable, since the simulation
data bus probably, IMO, is running at some speed faster than c.

John



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> 
> --- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> 
> > >
> > > There is no way to know if we are living in a nested simulation, or
> > > even in a single simulation.  However there is a mathematical model:
> > > enumerate all Turing machines to find one that simulates a universe
> > > with intelligent life.
> > >
> >
> > What if that nest of simulations loops around somehow? What was that
> > idea where there is this new advanced microscope that can see smaller
> > than ever before, and you look into it and see an image of yourself
> > looking into it...
> 
> The simulations can't loop because the simulator needs at least as much
> memory as the machine being simulated.
> 

You're making assumptions when you say that. Outside of a particular
simulation we don't know the rules. If this universe is simulated, the
simulator's reality could be drastically, unimaginably different from the
laws of this universe. Also, there could be data buses between simulations,
and the simulations could intersect, or a simulation may break the
constraints of its containing simulation somehow and tunnel out.

John




RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread John G. Rose
> 
> There is no way to know if we are living in a nested simulation, or even
> in a single simulation.  However there is a mathematical model: enumerate
> all Turing machines to find one that simulates a universe with
> intelligent life.
> 

What if that nest of simulations loops around somehow? What was that idea
where there is this new advanced microscope that can see smaller than ever
before, and you look into it and see an image of yourself looking into it...

John



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> You won't see a singularity.  As I explain in
> http://www.mattmahoney.net/singularity.html an intelligent agent (you)
> is not capable of recognizing agents of significantly greater
> intelligence.  We don't know whether a singularity has already occurred
> and the world we observe is the result.  It is consistent with the
> possibility, e.g. it is finite, Turing computable, and obeys Occam's
> Razor (AIXI).
> 

You should be able to see it coming. That's how people like Kurzweil make
their estimates, based on technological rates of change. When it gets
really close, though, you can only imagine how it will unfold.

If a singularity has already occurred, how do you know how many there have
been? Has somebody worked out the math on this? And if this universe is a
simulation, is that simulation running within another simulation? Is there a
simulation forefront, or is it just one simulation within another ad
infinitum? Simulation raises too many questions. It seems like simulation
and singularity would be easier to keep separate, except for uploading. But
then the whole concept of uploading is just too confusing... unless our
minds are complex systems like Richard Loosemore proposes, and uploading
would only be a sort of echo of the original.

John



RE: [singularity] Vista/AGI

2008-04-08 Thread John G. Rose
Tipping Point may not be the right term for it. I see it as sort of an
unraveling and then a remolding. Much of the internet is still coming out of
resource compression. It has to stretch out and reoptimize, like seeking a
lower-energy-expenditure structure for higher-complexity traffic, but the
lower-energy structure has more inherent intelligence. Kind of: it's like it
needs to jump into another efficiency plateau, and there is an increasing
daily pressure for this to happen. And once it's reconfigured, more of it
will flow into other plateaus, or, as I see them, harmonic "sweet spots"...
blah blah.

 

But a wise and savvy investor who has vision can sense these things.
Remember, much of investing is hit or miss, but a few investors know how to
nail the bull's-eye more often than their less informed counterparts. They
know that something's going on and realize that now is the time to take
action, getting in early and gaining a foothold *wink*.

 

John

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 08, 2008 8:03 AM
To: singularity@v2.listbox.com
Subject: RE: [singularity] Vista/AGI

 


"John G. Rose" <[EMAIL PROTECTED]> wrote:

>If you look at the state of internet based intelligence now, all the data
>and its structure, the potential for chain reaction or a sort of structural
>vacuum exists and it is accumulating a potential at an increasing rate.
>IMO...

So you see the arrival of a Tipping Point as per Malcolm Gladwell. Whether
I physically benefit from the arrival of the Singularity or not, I just want
to see the damn thing. I would invest some modest sums in AGI if we could
get a huge collection plate going around (these collection plate amounts add
up!).

Eric B. Ramsay



RE: [singularity] Vista/AGI

2008-04-08 Thread John G. Rose
Well, when the Manhattan Project started, the real Manhattan Project, there
was one thing that made it all happen, without which we'd still be
pre-nuclear: the chain reaction. The chain reaction, something that is taken
for granted and assumed, was critical for any sort of motivation behind the
mass gathering of people power.

If you look at the state of internet-based intelligence now, all the data
and its structure, the potential for a chain reaction or a sort of
structural vacuum exists, and it is accumulating a potential at an
increasing rate. IMO...

John

> -Original Message-
> From: Ben Goertzel [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, April 08, 2008 6:34 AM
> To: singularity@v2.listbox.com
> Subject: Re: [singularity] Vista/AGI
> 
> This is part of the idea underlying OpenCog (opencog.org), though it's
> being done
> in a nonprofit vein rather than commercially...
> 
> On Tue, Apr 8, 2008 at 1:55 AM, John G. Rose <[EMAIL PROTECTED]>
> wrote:
> > Just a thought, maybe there are some commonalities across AGI designs
> > where components could be built at a lower cost. An investor invests in
> > the company that builds component x that is used by multiple AGI
> > projects. Then you have your little AGI ecosystem of companies all
> > competing yet cooperating. After all, we need to get the Singularity
> > going ASAP so that we can upload before inevitable biologic death? I
> > prefer not to become nano-dust; I'd rather keep this show a-rockin',
> > capiche?
> >
> > So it's like this - need standards. Somebody go bust out an RFC. Or is
> > there work done on this already, like, is there a CogML? I don't know if
> > the Semantic Web is going to cut the mustard... and the name "Semantic
> > Web" just doesn't have that ring to it. Kinda reminds me of the MBone -
> > names really do matter. Then who's the numnutz that came up with "Web 3
> > dot oh" geezss!
> >
> > John
> >
> >
> >
> >  > -Original Message-
> >  > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> >  > Sent: Monday, April 07, 2008 7:07 PM
> >  > To: singularity@v2.listbox.com
> >
> >
> > > Subject: Re: [singularity] Vista/AGI
> >  >
> >  > Perhaps the difficulty in finding investors in AGI is that among
> people
> >  > most
> >  > familiar with the technology (the people on this list and the AGI
> list),
> >  > everyone has a different idea on how to solve the problem.  "Why
> would I
> >  > invest in someone else's idea when clearly my idea is better?"
> >  >
> >  >
> >  > -- Matt Mahoney, [EMAIL PROTECTED]
> >  >
> >
> 
> 
> 
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
> 
> "If men cease to believe that they will one day become gods then they
> will surely become worms."
> -- Henry Miller
> 



RE: [singularity] Vista/AGI

2008-04-07 Thread John G. Rose
Just a thought, maybe there are some commonalities across AGI designs where
components could be built at a lower cost. An investor invests in the
company that builds component x that is used by multiple AGI projects. Then
you have your little AGI ecosystem of companies all competing yet
cooperating. After all, we need to get the Singularity going ASAP so that we
can upload before inevitable biologic death? I prefer not to become
nano-dust; I'd rather keep this show a-rockin', capiche?

So it's like this - need standards. Somebody go bust out an RFC. Or is there
work done on this already, like, is there a CogML? I don't know if the
Semantic Web is going to cut the mustard... and the name "Semantic Web" just
doesn't have that ring to it. Kinda reminds me of the MBone - names really
do matter. Then who's the numnutz that came up with "Web 3 dot oh" geezss!

John


> -Original Message-
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 07, 2008 7:07 PM
> To: singularity@v2.listbox.com
> Subject: Re: [singularity] Vista/AGI
> 
> Perhaps the difficulty in finding investors in AGI is that among people
> most
> familiar with the technology (the people on this list and the AGI list),
> everyone has a different idea on how to solve the problem.  "Why would I
> invest in someone else's idea when clearly my idea is better?"
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
> 



RE: [singularity] Vista/AGI

2008-03-17 Thread John G. Rose
> From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
> On Mon, Mar 17, 2008 at 4:48 PM, John G. Rose <[EMAIL PROTECTED]>
> wrote:
> >
> > I think though that particular proofs of concept may not need more than
> > a few people. Putting it all together would require more than a few.
> > Then the resources needed to make it interact with various systems in
> > the world would make the number of people needed grow exponentially.
> >
> 
> Then what's the point? We have this problem with existing software
> already, and it's precisely the magic bullet of AGI that should allow
> free lunch of automatic interfacing with real-world issues...
> 

The assumed value of AGI is blanketed magic bullets. There'll be quite a bit
of automatic interfacing. There will also be quite a bit of prevented and
controlled automatic interfacing. But in the beginning, think about it, it's
not instantaneous super-intelligence.

John



RE: [singularity] Vista/AGI

2008-03-17 Thread John G. Rose
The payoff on AGI justifies investment. The problem is that the probability
of success is in question. But spinoff technologies developed along the way
could have value.

 

I think though that particular proofs of concept may not need more than a
few people. Putting it all together would require more than a few. Then the
resources needed to make it interact with various systems in the world would
make the number of people needed grow exponentially.

 

John

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 16, 2008 10:14 AM
To: singularity@v2.listbox.com
Subject: [singularity] Vista/AGI

 

It took Microsoft over 1000 engineers, $6 billion, and several years to make
Vista.  Will building an AGI be any less formidable? If the AGI effort is
comparable, how can the relatively small efforts of Ben (comparatively
speaking) and others possibly succeed? If the effort to build an AGI is not
comparable, why not? Perhaps a consortium (non-governmental) should be
created specifically for the building of an AGI. Ben talks about a
Manhattan-style project. A consortium could pool all resources currently
available (people and hardware), actively seek private funds on a continuing
basis, and give coherence to the effort.

Eric B. Ramsay



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-11 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> That's true.  The visual perception process is altered after the
> experiment to
> favor recognition of objects seen in the photos.  A recall test doesn't
> measure this effect.  I don't know of a good way to measure the quantity
> of
> information learned.
> 

When you learn something, is it stored as electrical state or are molecules
created? Perhaps precise measurements of particular chemicals in certain
regions could correlate to a data differential. A problem though is that the
data may be spread over a wide region, making it difficult to measure. And
you'd have to be able to measure chemicals in tissue structure, though
software could process out the non-applicable.

Also you could estimate by calculating average data intake and estimating
what is thrown away. So many bits are consumed, so many are tossed, and the
rest is stored, independent of recall.

But a curious number, in addition to average long-term memory storage, is
MIPS. How many actual bit flips are occurring? This is where you have to be
precise, as even trace chemicals, light, and temperature affect this number.
Though just a raw number won't tell you that much compared to, say,
spatiotemporal MIPS density graphs.

John



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> --- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> > Is there really a bit per synapse? Is representing a synapse with a bit
> > an accurate enough simulation? One synapse is a very complicated system.
> 
> A typical neural network simulation uses several bits per synapse.  A
> Hopfield net implementation of an associative memory stores 0.15 bits per
> synapse.  But cognitive models suggest the human brain stores about 10^-6
> bits per synapse.  (There are 10^15 synapses but human long term memory
> capacity is 10^9 bits.)

A cognitive model may only allocate so much data per synapse, but the REAL
data being stored in one biological synapse has got to be quite high. How
much of it is unique among a group of synapses, and how much of that grossly
affects the running biological cognitive entity, is particular to that
brain. Any simulation that throws x bits per synapse IS a simulation and not
a copy. A "copied" simulation could adapt itself to its new "home" if given
enough latitude to model itself as it was in its biological host. If you are
trying to copy a consciousness, it depends on what it actually is, and how
much it can be simplified or molded to a digital transistor-like environment
versus the rich, unique electro-chemical environment of a biological brain.
A simulation of a brain is a lossy compression, since you can't get it all;
each cell ultimately holds many gigs of data. You can try to get a
functionally isomorphic compressed copy, but due to the size you're still
going to have to average out much of it.

A computer software simulation is going to be WAY more flexible and
extensible. Biological electrochemical systems are, at least with current
technology, not very changeable. But looking at the sophistication of
natural molecular digital physics, there have to be a number of
breakthroughs down the road...
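
For reference, the densities implied by the quoted counts work out as
follows (a quick sketch using only the figures quoted above):

    synapses = 1e15   # synapses in the human brain (quoted above)
    ltm_bits = 1e9    # estimated long-term memory capacity in bits (quoted)

    print(ltm_bits / synapses)       # 1e-06 bits per synapse
    print(f"{0.15 * synapses:.1e}")  # 1.5e+14 bits for a Hopfield net at
                                     # the quoted 0.15 bits per synapse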

John



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-04 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > >
> > > By "equivalent computation" I mean one whose behavior is
> > > indistinguishable
> > > from the brain, not an approximation.  I don't believe that an exact
> > > simulation requires copying the implementation down to the neuron
> level,
> > > much
> > > less the molecular level.
> > >
> >
> > So how would you approach constructing such a model? I suppose a
> superset
> > intelligence structure could analyze properties and behaviors of a
> brain and
> > simulate it within itself. If it absorbed enough data it could
> reconstruct
> > and eventually come up with something close.
> 
> Well, nobody has solved the AI problem, much less the uploading problem.
> Consider the problem in stages:
> 
> 1. The Turing test.
> 
> 2. The "personalized" Turing test.  The machine pretends to be you and
> the
> judges are people who know you well.
> 
> 3. The "planned, personalized" Turing test.  You are allowed to
> communicate
> with judges in advance, for example, to agree on a password.
> 
> 4. The "embodied, planned, personalized" Turing test.  Communication is
> not
> restricted to text.  The machine is planted in the skull of your clone.
> Your
> friends and relatives have to decide who has the carbon-based brain.
> 
> Level 4 should not require simulating every neuron and synapse.  Without
> the
> constraints of slow, noisy neurons, we could use other algorithms.  For
> example, low level visual processing such as edge and line detection
> would not
> need to be implemented as a 2-D array of identical filters.  It could be
> implemented serially by scanning the retinal image with a window filter.
> Fine
> motor control would not need to be implemented by combining thousands of
> pulsing motor neurons to get a smooth average signal.  The signal could
> be
> computed numerically.  The brain has about 10^15 synapses, so a
> straightforward simulation at the neural level would require 10^15 bits
> of
> memory.  But cognitive tests suggest humans have only about 10^9 bits of
> long
> term memory, suggesting that more compressed representation is possible.
> 
> In any case, level 1 should be sufficient to argue convincingly that
> either
> consciousness can exist in machines, or that it doesn't in humans.



These tests, though, are still very subjective, nothing exact.

Is there really a bit per synapse? Is representing a synapse with a bit an
accurate enough simulation? One synapse is a very complicated system.

John









RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> By "equivalent computation" I mean one whose behavior is
> indistinguishable
> from the brain, not an approximation.  I don't believe that an exact
> simulation requires copying the implementation down to the neuron level,
> much
> less the molecular level.
> 

So how would you approach constructing such a model? I suppose a superset
intelligence structure could analyze properties and behaviors of a brain and
simulate it within itself. If it absorbed enough data it could reconstruct
and eventually come up with something close.

John




RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> And that is the whole point.  You don't need to simulate the brain at the
> molecular level or even at the level of neurons.  You just need to
> produce an equivalent computation.  The whole point of such fine grained
> simulations is to counter arguments (like Penrose's) that qualia and
> consciousness cannot be explained by computation or even by physics.
> Penrose (like all humans) is reasoning with a brain that is a product of
> evolution, and therefore biased toward beliefs that favor survival of the
> species.
> 

An equivalent computation will be some percentage of the complexity of a
perfect molecular simulation. You can simplify the computation, but you have
to know what to simplify out and what to discard. Losing too much of the
richness may produce a simulation that is like a scratchy audio recording of
a philharmonic, or, probably even worse, the simulated system will not
function as a coherent entity; it'll just be contentious noise unless there
is ample abetting by external control. But a non-molecular and non-neural
simulation may require even more computational complexity than a direct
model. Reformatting the consciousness to operate within another substrate
without first understanding its natural substrate, yeah, that still may be
the best choice due to technological limitations.

John



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread John G. Rose
> From: Stathis Papaioannou [mailto:[EMAIL PROTECTED]
> Well, maybe you can't actually rule it out until you make a copy and
> see how close it has to be to think the same as the original, but I
> strongly suspect that getting it right down to the molecular level
> would be enough. Even if quantum effects are important in
> consciousness (and I don't think there is any clear evidence that this
> is so), these would be generic quantum effects, reproduced by
> reproducing the molecular structure. Transistors function using
> quantum level effects, but you don't need to replace a particular
> transistor with a perfect copy to have an identically functioning
> electronic device.
> 

Actually, here's a better way to do it, as getting even just the molecules
right is a wee bit formidable - you need a really powerful computer with
lots of RAM. Take some DNA and grow a body double in software. Then create
an interface from the biological brain to the software brain, and then
gradually kill off the biological brain, forcing the consciousness into the
software brain.

The problem with this approach, naturally, is that growing the brain in RAM
requires astronomical resources. But ordinary off-the-shelf matter holds so
much digital memory compared to modern computers. You have to convert matter
into RAM somehow. For example, one cell with DNA is how many gigs? And cells
cost a dime a billion. But the problem is that molecular interaction is too
slow and clunky.
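
As a rough answer to that "how many gigs" question (a back-of-the-envelope
sketch, assuming the textbook figure of roughly 3.2 billion base pairs per
haploid human genome and 2 bits per base, and ignoring epigenetic and
structural state):

    base_pairs = 3.2e9      # approximate haploid human genome size
    bits = base_pairs * 2   # 4 possible bases -> 2 bits per base pair
    gigabytes = bits / 8 / 1e9
    print(gigabytes)        # ~0.8 GB per genome copy; a diploid cell
                            # carries two copies, so roughly 1.6 GB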

John




RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread John G. Rose
> From: Stathis Papaioannou [mailto:[EMAIL PROTECTED]
> There are some who think that all you need to simulate a brain (and
> effectively copy a person) is to fix it, slice it up, and examine it
> under a microscope to determine the synaptic structure. This is almost
> certainly way too crude: consider the huge difference to cognition
> made by small molecules in tiny concentrations, such as LSD, which do
> no more than slightly alter the conformation of certain receptor
> proteins on neurons by binding to them non-covalently. On the other
> hand, it is equally implausible to suppose that you have to get it
> right down to the subatomic level, since otherwise cosmic rays or
> changing the isotope composition of the brain would have a major
> effect, and they clearly don't.
> 

I don't know if you can rule out subatomic and quantum. There seems to be
more and more evidence pointing to an amount of activity going on there.
Small amounts of cosmic rays don't have obvious immediate gross effects, but
interaction is occurring. Exactly how much of it would need to be replicated
is not known. You could be missing out on important psi elements in
consciousness which are taken for granted :)

Either way it would be an approximation, unless there were some way, using
theoretical physics, where an exact instantaneous snapshot could occur, with
the snapshot existing in precisely equivalent matter at that instant.

John





RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-27 Thread John G. Rose
> From: Stathis Papaioannou [mailto:[EMAIL PROTECTED]
> 
> On 26/02/2008, John G. Rose <[EMAIL PROTECTED]> wrote:
> 
> > There is an assumed simplification tendency going on that a human
> > brain could be represented as a string of bits. It's easy to assume but
> > I think that a more correct way to put it would be that it could be
> > approximated. Exactly how close the approximation could theoretically
> > get is entirely unknown.
> 
> It's not entirely unknown. The maximum simulation fidelity that would
> be required is at the quantum level, which is still finite. But
> probably this would be overkill, since you remain you from moment to
> moment despite changes in your brain which are gross compared to the
> quantum level.
> 
> 

Well, if you spend some time theorizing a model of a brain digitizer that
operates within known physics constraints, it's not an easy task getting
just the molecular and atomic digital data. You have to sample over a period
of time and space using photons and particle beams. This in itself
interferes with the sample. Then, say this sample is reconstructed within a
theoretically capable computer: the computer will most likely have to
operate in slow time to simulate the physics of all the atoms and molecules,
as the computer is itself constrained by the speed of light. I'm going this
route because I don't think that it is possible to get an instantaneous
reading of all the atoms in a brain; you have to reconstruct over time and
space. THEN, this is ignoring the subatomic properties, and forget about
quantum data sample digitization; I think it is impossible to get an exact
copy.

So this leaves you with a reconstructed approximation. Exactly how much of
this would be you is unknown, because any subatomic and quantum properties
of you are started from scratch - this includes any macroscopic and
environmental properties of subatomic, quantum, and superatomic molecular
state and positioning effects. And if the whole atomic-level model is
started from scratch in the simulator, it could disintegrate or diverge as
it is all force-fit together. Your copy is an approximation, and it is
unknown how close it actually is to you, or whether you could even be put
together accurately enough in the simulator.

John



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-25 Thread John G. Rose
There is an assumed simplification tendency going on that a human brain
could be represented as a string of bits. It's easy to assume, but I think
that a more correct way to put it would be that it could be approximated.
Exactly how close the approximation could theoretically get is entirely
unknown. Though something could be achieved, and different forms of
consciousness, even ones that may be superior, more efficient, and
structured better than biological ones, are there for discovery; I believe
that there are potentially many variations. There is a tendency to think of
levels of consciousness, but perhaps this is wrong: there are just variants,
some of which have stronger properties than others, but common denominators
are there. IOW, there are certain required properties for something to be
classified as conscious. Consciousness seems not to be a point but an
n-dimensional continuous function.

 

John

 

From: Panu Horsmalahti [mailto:[EMAIL PROTECTED] 
Sent: Sunday, February 24, 2008 12:08 PM
To: singularity@v2.listbox.com
Subject: Re: [singularity] Re: Revised version of Jaron Lanier's thought
experiment.

 

If we assume a 2x2x2 block of space floating somewhere, we could assign each
element the value 1 if a single atom happens to be inside the subspace
defined by the grid, and 0 if not. How many ways would there be to read this
grid to create (2*2*2) = 8 bits? The answer is 8! = 40,320. Let's then
assume that a single atom can hold at least 100 bits[1]; there would be at
least 9 * 10^157 ways to read a single atom. This is just by counting the
different permutations, but we can also apply *any* mathematical calculation
to our information-reading algorithm. One would be the 'NOT' operation,
which simply inverts our bits. This already doubles the number of bit
strings we can extract. If you take this further, we can read *all*
different permutations of 100 bits from a single atom. For any string of
bits, there exists at least one algorithm to calculate it from any input,
since a single bit could be calculated into 10 if the bit is 0, and 11 if
the bit is 1, for example.
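
(The permutation counts above are easy to verify; a quick sketch in Python:)

    import math

    print(math.factorial(8))             # 40320 orderings of the 8 cells
    print(f"{math.factorial(100):.2e}")  # ~9.33e+157 orderings of 100 bits,
                                         # the "9 * 10^157" figure above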

It must then be concluded that you can construct an algorithm/computer to
read a static string of bits that defines any human state of consciousness
(the string of bits could, for example, be calculated to match exactly those
bits that would be in the memory of a computer that simulates a human brain)
from pretty much any space or substrate.

One objection people have is that "most" of that complexity is actually in
the algorithm itself, but that is irrelevant if it still creates
consciousness.

If we assume that our universe has some kind of blind physical law, that has
as input the atoms/matter/energy in some space, and then searches through
all the possible algorithms, it is bound to find at least one that should
create consciousness. It would be quite a miracle if this physical law had a
human bias, and would think like humans so as to create consciousness only
when the computation is done in biological neurons. If you say that
computers cannot be truly conscious, you're saying that the universe has
some kind of magical human bias, which seems like religious thinking to me.

As I showed, some space can be interpreted as many different kinds of
computation (actually a *massive* number); only our human perspective forces
us to choose the interpretation that fits us. For example, if we create a
computer that calculates bullet trajectories, we interpret it to do just
that. But it can be interpreted 'in theory' to mean many other things; we
only care about the computation we designed it for. A small box of 3 atoms
bouncing around can be interpreted to mean a massive number of different
computations, in addition to simulating 3 atoms.

As it is trivial to read a static string of bits to match some state of
consciousness, some argue that it is not enough. They claim that a single
state is not enough to create consciousness. However, imagining a computer
that not only creates the first string of bits in the consciousness
computation, but also the second one (and possibly more, ad infinitum), just
makes the algorithm/computer more complex, and is not an argument against
the thought experiment.


1. The Singularity is Near, Ray Kurzweil



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-24 Thread John G. Rose
The program that is isomorphically equivalent to raindrop positions inputted
into the hypothetical computer implements a brain. I have a blinky safety
light on the back of my bicycle that goes on and off at a 1 sec frequency.
There exists a hypothetical computer that takes a 1 sec on/off pulse as
program instructions and implements my brain. This doesn't say much, as the
hypothetical computer is almost 100% equivalent to my brain. Where is the
hypothetical computer? We still have to come up with it.
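
The point is easy to make concrete. A toy sketch in Python (the names here
are purely illustrative): by choosing the interpreter, any fixed input
stream can be mapped to any output at all, which is exactly why such a
"computer" carries all the weight of the claim.

    # All the work lives in the interpreter, none in the pulse stream.
    def make_interpreter(target_output):
        def interpret(pulses):
            return target_output  # the input is ignored entirely
        return interpret

    blinker = [1, 0] * 5                   # the bicycle light's on/off pulses
    brain = make_interpreter("a thought")  # stands in for a brain simulation
    print(brain(blinker))                  # "a thought" -- the pulses did
                                           # none of the computing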

 

But Lanier does scrape the surface of something bigger with all this. He is
pointing to an intelligence in all things, or some structure in all things
that has some amount of potential intelligence, as with potential energy in
physics, or some effect on intelligence; IOW, the structure means something.
 

And I found it interesting that he said:

 

This means that software packaged as being "non-intelligent" is more likely
to improve, because the designers will receive better critical feedback from
users. The idea of intelligence removes some of the "evolutionary pressure"
from software, by subtly indicating to users it is they, rather than the
software, that should be changing.

 

As it happens, machine decision making is already running our household
finances to a scary degree, but it's doing so with a Wizard of Oz-like
remote authority that keeps us from questioning it. I'm referring to the
machines that calculate our credit ratings. Most of us have decided to
change our habits in order to appeal to these machines. We have simplified
ourselves in order to be comprehensible to simplistic data-bases, making
them look smart and authoritative. Our demonstrated willingness to
accommodate machines in this way is ample reason to adopt a standing bias
against the idea of artificial intelligence.

 

And it is true. There is a herding effect, by AI and computers in general,
to be aware of.

 

John

 

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 22, 2008 10:12 AM
To: singularity@v2.listbox.com
Subject: [singularity] Re: Revised version of Jaron Lanier's thought
experiment.

 

I came across an old Discover magazine this morning with yet another article
by Lanier on his rainstorm thought experiment. After reading the article it
occurred to me that what he is saying may be equivalent to:

Imagine a sufficiently large computer that works according to the
architecture of our ordinary PCs. In the space of Operating Systems (code
interpreters), we can find an operating system such that it will run the
input from the rainstorm so that it appears identical to a computer
running a brain.

If this is true, then functionalism is not affected since we must not forget
to combine program + OS. Thus the rainstorm by itself has no emergent
properties.

Eric B. Ramsay





RE: [singularity] Definitions

2008-02-23 Thread John G. Rose
> From: Ben Goertzel [mailto:[EMAIL PROTECTED]
> 
> "Consciousness", like many natural language terms,
> is extremely polysemous
> 
> A formal definition of "reflective consciousness"
> was given by me in a blog post a few days ago
> 
> http://goertzel.org/blog/blog.htm
> 
> -- Ben G
> 

This is a great post BTW.

Consciousness does exhibit hypersetual (sic) behavior, and combining that
with virtual multiverses and your model of will, you could actually build a
robo-mechanical mind that, well, if you put it into a little bug-like robot,
would exhibit lifelike behavior. I will have to think about the hyperset
consciousness model, since it is nice and you can apply it to different
approaches to AGI, but I don't know if it explains enough, or what else is
needed to get to higher levels of consciousness as in humans or above.
Naturally it would have to be fleshed out, but I'm just wondering if
hypersets are the best tools for the primary descriptor of consciousness.
They may very well be, but I will have to do some research on similar
mathematical structures to concur, as they may just be a pig in a poke.
John




RE: [singularity] mass-market Singularity fiction

2008-02-12 Thread John G. Rose
How about – the movie is going along, year 2028, an AGI goes Singularity and 
burns to the center of the earth and converts the core into a giant AGI 
supercomputer. And then average Joe out of the blue gets a message in his ears 
“We would like to make your transition to Singularity mode as smooth as 
possible. Please stand by…” Then everything goes from normal matter and physics 
and is converted into Singularity AGI RAM. You see the earth with a disturbance 
wave going across the surface as it gets converted into memory. Then the 
physics start gettin’ weird from what people experience, everything starts 
floating, physics goes to on-demand per user. The average Joe then hears “Thank 
you. The transition to Singularity is now complete. We hope to continue serving 
you in your new simulatory state of being.”

 

John

 

 

From: Joshua Fox [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 11, 2008 4:01 AM
To: singularity@v2.listbox.com
Subject: Re: [singularity] mass-market Singularity fiction

 

A nice idea would be a "Bad Singularity Science" website along the lines of
http://intuitor.com/moviephysics/ and http://www.badastronomy.com/bad/movies.

Like it or not, sci-fi is already the main gateway in making people aware of
the Singularity ("Terminator!" "Matrix!"), and this should be used to educate.

SIAI's "Three Laws" campaign http://www.asimovlaws.com was excellent,
although if we are to take the model of those sites, the style would be a
bit more entertainment-oriented (while still educating).

Joshua




RE: [singularity] mass-market Singularity fiction

2008-02-10 Thread John G. Rose
One role of singularity fiction is to explore what the singularity really is
and the views of it from different perspectives. The mass market many times
wants to see the exciting dangers of technological advances. Maybe there
should be a movie about the nirvana-esque posthuman possibilities. A few
movies that I know of have touched on it, for example "Zardoz", one of my
faves, and for some reason "What Dreams May Come" comes to mind, but that
was about something else, though the idea could be applicable.

 

John

 

From: Joshua Fox [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 05, 2008 12:36 PM
To: singularity@v2.listbox.com
Subject: [singularity] mass-market Singularity fiction

 

The new Terminator series brings up again the concept of mass-market
Singularity fiction.

The folly of argument from fiction has been discussed enough. 
 
I want to raise another question: What is the relevance, if any, of Singularity 
fiction?
 
Speculative fiction has often influenced the real world. Walden 2, Old-New 
Land, Utopia, 1984, Brave New World, etc. changed people's political outlook.  
Science fiction inspired  many people to become engineers and scientists,  
including some who accomplished great things. 
 
What is the real-world role of Singularity fiction?  Should we differentiate 
between fiction which reaches the narrow intellectual/geek audience and that 
which spreads to a mass audience?

Should we use fiction as a platform for getting attention by contrasting it
to a more insightful analysis (as in the SIAI I Robot campaign)? Or can we
hold up specific works of fiction, with all their limitations, as
inspiration for working to attain a better future or head off threats?

Joshua


RE: [singularity] Multi-Multi-....-Multiverse

2008-02-03 Thread John G. Rose
> From: Samantha Atkins [mailto:[EMAIL PROTECTED]
> On Feb 2, 2008, at 7:29 AM, gifting wrote:
> 
> > WTF (I can only assume what that stands for) are you such an angry
> > person. Or is linear thinking the only possible solution for your
> > VotW  (guess what that stands for)?
> 
> I am not angry.  I am bored with what seems like endless often off
> subject prattling going nowhere.
> 
> - samantha
> 

A bug's view of the earth is analogous to our view of the universe. Our view
of the universe may be analogous to a post-singularity AGI's view of the
multiverse. An AGI may consider the multiverse intuitively obvious, whereas
we are just beginning to explore the idea. Why does it apply? You have to at
least speculate on what an AGI is going to encounter and how it will deal
with it, even if we don't know - honestly, it will probably run into some
bizarre sh*t. But then, just in this universe as we know it there are
bizarrities, from what little is known, 'cept they seem out of reach due to
the speed of light. The attraction of the multiverse is that it may in some
way be theoretically more reachable, like cans of infinite worms waiting to
be opened, or already open, just that we don't see, or we see but just don't
acknowledge, perhaps due to intelligence limitations or other human reasons.
AGI machines don't have our particular human limitations as much.

I just watched that scary movie "Event Horizon" again, and it reminds me of
this conversation heh

John



RE: [singularity] World as Simulation

2008-01-12 Thread John G. Rose
Well, everything *seems* real... right? Though sometimes it all feels a bit
ersatz :)

 

John

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Saturday, January 12, 2008 12:03 PM
To: singularity@v2.listbox.com
Subject: RE: [singularity] World as Simulation

 

Your suggestion baffles me. I was thinking of far more prosaic efforts such
as comparisons to physical observations that we actually know something
about.

"John G. Rose" <[EMAIL PROTECTED]> wrote: 

I would look at multiverses with different physical constants. Say the
speed of light in one multiverse was larger than ours - say WAY larger, for
example 10^100*c. If intermultiverse communication is possible, how would
the physics work out if a simulation or manipulation was conducted from one
to the other?

 

John

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 

Apart from all this philosophy (non-ending as it seems), Table 1 of the
paper referred to at the start of this thread gives several consequences of
a simulation that offer to explain what's behind current physical
observations, such as the upper speed limit of light, relativistic and
quantum effects, etc. Without worrying about whether we are a simulation of
a simulation of a simulation, etc., it would be interesting to work out all
the qualitative/quantitative (?) implications of the idea and see if
observations strongly or weakly support it. If the only thing we can do with
the idea is discuss philosophy, then the idea is useless.




RE: [singularity] World as Simulation

2008-01-12 Thread John G. Rose
I would look at multiverses with different physical constants. Say the
speed of light in one multiverse was larger than ours - say WAY larger, for
example 10^100*c. If intermultiverse communication is possible, how would
the physics work out if a simulation or manipulation was conducted from one
to the other?

 

John

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 



Apart from all this philosophy (non-ending as it seems), Table 1 of the
paper referred to at the start of this thread gives several consequences of
a simulation that offer to explain what's behind current physical
observations, such as the upper speed limit of light, relativistic and
quantum effects, etc. Without worrying about whether we are a simulation of
a simulation of a simulation, etc., it would be interesting to work out all
the qualitative/quantitative (?) implications of the idea and see if
observations strongly or weakly support it. If the only thing we can do with
the idea is discuss philosophy, then the idea is useless.





RE: [singularity] World as Simulation

2008-01-12 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> It is unlikely that any knowledge you now have would be useful in another
> simulation.  Knowledge is only useful if it helps propagate your DNA.
> 

An agent taking data from one simulation to the next could store the data
within the agent, or the data could be stored in the simulating entity,
which is outside of both simulations. OR, no data (knowledge) could be
stored between simulations.

Knowledge is useful if it helps propagate the species' DNA. It may not help
propagate the individual agent's DNA, but other members of the species may
achieve DNA propagation while benefitting from that agent's knowledge.

If the simulations are similar enough, knowledge retained from one
simulation after death could be useful in another simulation. Also,
knowledge retained by an agent in one simulation after death could be used
to propagate DNA if that same agent was reinstantiated within the same
simulation.

John




RE: [singularity] World as Simulation

2008-01-12 Thread John G. Rose
> From: Bryan Bishop [mailto:[EMAIL PROTECTED]
> 
> I think "simulation" is becoming the new "reality". Just a new name.
> 

Yes, reality is relative. Our view of reality is probably very far off from
what it actually is in this universe. And reality, I would think, is very
species dependent. It's only an approximation, as a simulation is an
approximation.

John


RE: [singularity] World as Simulation

2008-01-12 Thread John G. Rose
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> Interesting question.  Suppose you simulated a world where agents had
> enough intelligence to ponder this question.  What do you think they
> would do?
> 
> My guess is that agents in a simulated evolutionary environment that
> correctly believe that the world is a simulation would be less likely to
> pass on their genes than agents that falsely believe the world is real.
> 

In a sim world there are many variables that can overcome other motivators,
so a change in the rate of gene proliferation would be difficult to predict.
The agents that correctly believe that it is a simulation could say, OK,
this is all fake, I'm going for pure pleasure with total disregard for
anything else. But there are still too many variables to predict. In
humanity there have been times in the past where societies have given
credence to simulation through religious beliefs, and weighted more heavily
toward a disregard for other groups' existence. A society would say that
this is all fake, we all gotta die sometime anyway, so we are going to take
as much as we can from other tribes and decimate them for sport. Not saying
this was always the reason for intertribal warfare, but sometimes it was.

But the problem is in the question: what really is a simulation? For the
agents constrained by it, it doesn't matter; they still have to live in it -
feel pain, fight for food, get along with other agents... Moving an agent
from one simulation to the next, though, that gives it some sort of extra
properties...

John



RE: [singularity] World as Simulation

2008-01-11 Thread John G. Rose
If this universe is simulated, the simulator could also be a simulation, and
that simulator could also be a simulation... and so on.

What is that behavior of an organism called when the organism, alife or not,
starts analyzing things and questioning whether or not it is a simulation?
It's not only self-awareness but something in addition to that.

 

John

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 03, 2008 9:17 PM
To: singularity@v2.listbox.com
Subject: [singularity] World as Simulation

 

Some of you may be interested in this link (if you haven't already seen the
article).

 

http://arxiv.org/abs/0801.0337

 

Eric B. Ramsay

 


RE: [singularity] Species Divergence

2007-08-24 Thread John G. Rose
> I was more questioning the belief that a pure biological entity, or a
> purely software consciousness is somehow more aesthetically pleasing
> than something in between...
> 
> Especially since engineered biological entities could be so wildly
> diverse and really terrifying to human sensibilities.

I guess that it is in the eye of the beholder; aside from any culture shock
during initial perception, a hybrid could be even more stunning.
John



RE: [singularity] Species Divergence

2007-08-23 Thread John G. Rose
They could always be prettied up I guess - part human, part machine,
nano-mush.

Also the plain old biologics will undergo a lot of genetic engineering and
artificial selection, so these guys will change too, though maybe not enough
to count as a species divergence.

Yes, bizarro variants will happen. It's gonna get weird. We old skool
red-blooded biologics are going to have to find ways to defend ourselves,
just like in the sci-fi novels :)

John


> From: Joel Pitt [mailto:[EMAIL PROTECTED]
> Why would the hybrid be a "terrifying" creature, as opposed to a
> biological
> or software consciousness?
> 
> There could also be those entities that embody themselves when
> necessary and travel between the three "species" you've described,
> depending on their goals and direction. c.f. Greg Egan's Diaspora.
> 



[singularity] Species Divergence

2007-08-21 Thread John G. Rose
During the singularity process the human species will split into at least 3
new species: totally software humans, where even birth occurs in software;
the plain old biological human; and the hybrid man-machine-computer. The
software humans will rapidly diverge into other species; the biologics will
die off rapidly or stick around for a while for various reasons; and the
hybrid could grow into a terrifying creature. The software humans will
basically exist in other dimensions and evolve and disperse rapidly. They
also may just meld into whichever AGI successfully takes over the world, as
human software will just be a tiny subset (or should I say subgroup) of AGI.

John



RE: [singularity] Reduced activism

2007-08-21 Thread John G. Rose
Well it is summer still :) People are vacationing and kids are home.

 

My experience has been that when you mention “Singularity” within certain
groups or to certain individuals, they don't get it and/or think it has
religious correlations or cultish connotations.  I'm not so sure
“singularity” is the best word for it, but I suppose at some point someone
chose it and it stuck.

 

John

 

 

From: Joshua Fox [mailto:[EMAIL PROTECTED] 



This is the wrong place to ask this question, but I can't think of anywhere 
better: 

There are people who used to be active in blogging, writing to the email lists, 
donating money, public speaking, or holding organizational positions in 
Singularitarian and related fields -- and are no longer anywhere near as 
active. I'd very much like to know why. 

Possible answers might include: 

1. I still believe in the truthfulness and moral value of the Singularitarian 
position, but...

a. ... eventually we all grow up and need to focus on career rather than 
activism.
b. ... I just plain ran out of energy and interest. 
c. ... public outreach is of no value or even dangerous; what counts is the 
research work of a few small teams. 
d. ... why write on this when I'll just be repeating what's been said so often.
e. ... my donations are meaningless compared to what a dot-com millionaire can 
give.

2. I came to realize the deep logical (or: moral) flaws in the
Singularitarian position. [Please tell us what they are.]
3. I came to understand that Singularitarianism has some logical and moral 
validity, but no more than many other important causes to which I give my time 
and money. 

And of course I am also interested to learn other answers. 

Again, I would like to hear from those who used to be more involved, not
just those who have disagreed with Singularitarianism all along.

Unfortunately, most such people are not reading this, but perhaps some have 
maintained at least this connection; or list members may be able to report 
indirectly (but please, only well-confirmed reports rather than supposition). 


RE: [singularity] AI is almost here (2/2)

2007-08-02 Thread John G. Rose
I suggest audio conferencing, with or without web collaboration.  Audio
conferencing with multiple speakers is very efficient (some conferences are
listen-only).  I just finished several years working in conferencing R&D
developing pay systems, but I know there are multiple free ones.  There's
Skype as P2P VOIP conferencing...  VOIP, PSTN, and hybrid services are also
available, and perhaps something should be set up periodically so that
people can present and discuss ideas.  Also, BTW, you can build a VOIP
server and just run it on the network, so there is a one-time cost and then
just network bandwidth utilization after that...
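
As a rough illustration of why multi-speaker conferencing is cheap on the
server side, here is a toy sketch - purely illustrative, with made-up names
and frame format, not any product's API - of the N-1 mixing at the core of a
conference bridge: each participant receives the sum of everyone else's
audio, one mix per frame rather than N point-to-point calls.

# Toy conference-bridge mixer sketch (illustrative assumptions throughout).
def mix_frames(frames):
    """frames: dict of participant id -> one frame of int16 PCM samples.
    Returns, per participant, the mix of everyone else's samples."""
    n = len(next(iter(frames.values())))
    total = [0] * n
    for samples in frames.values():
        for i, s in enumerate(samples):
            total[i] += s
    out = {}
    for pid, samples in frames.items():
        # subtract the listener's own audio, then clip to the int16 range
        out[pid] = [max(-32768, min(32767, total[i] - samples[i]))
                    for i in range(n)]
    return out

# Two speakers and one silent listener, one 4-sample frame:
frames = {"alice": [1000, -2000, 3000, 0],
          "bob":   [500, 500, 500, 500],
          "carol": [0, 0, 0, 0]}
print(mix_frames(frames)["carol"])   # carol hears alice + bob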

John


> From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
> This is a VERY constructive suggestion -  although after a quick try,
> I'm
> still wrestling with skrbl!
> 
> I've often thought that many discussions on these boards are hopelessly
> limited by being purely print. It would be great to be able to easily
> incorporate photos/ vids and especially this whiteboard for
> graphics/drawings plus text.
> 



RE: [singularity] AI is almost here (2/2)

2007-07-31 Thread John G. Rose
I do wonder, though, if you can have an intelligent entity that does not
take any input - basically just a pattern generator/injector operating on
output streams or an internal data store.  Would it have to do pattern
matching internally?  Just wondering if pattern matching could be thrown out
of the equation completely.
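
For concreteness, a minimal sketch of such an output-only entity (the update
rule and names here are made up purely for illustration): it emits patterns
derived only from internal state and never reads anything in.

# Toy no-input "pattern injector": output derives only from internal state.
import random

def pattern_generator(seed, steps):
    """Yield a stream of patterns without ever consuming input."""
    rng = random.Random(seed)
    state = [rng.randint(0, 9) for _ in range(4)]
    for _ in range(steps):
        state = [(x * 3 + 1) % 10 for x in state]  # fixed internal rule
        yield tuple(state)

for pattern in pattern_generator(seed=42, steps=3):
    print(pattern)  # patterns flow out; nothing ever flows in

One might argue that with no input there is no feedback loop, so nothing
like matching or adaptation can happen inside - which is the open question.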

John

> >
> I think that pattern matching is crucial, and pretty basic.  It's also
> not sufficient.  Calling it ancillary is improperly denigrating its
> centrality and importance.  Asserting that it's all that's needed,
> however, is WAY overstating it's importance.  Saying "All AIs will be,
> at their core, pattern matching machines." is overstating the importance
> of pattern matching so much that it's grotesque, but pattern matching
> WILL be very central, and one of the basic modes of thought.  Think of
> asserting that "All computers will be, at their core, adding machines."
> to get what appears to me to be the right feeling tone.
> 



RE: [singularity] AI is almost here (2/2)

2007-07-31 Thread John G. Rose

> From: Alan Grimes [mailto:[EMAIL PROTECTED]
> 
> Yes, that is all it does. All AIs will be, at their core, pattern
> matching machines. Every one of them. You can then proceed to tack on
> any other function which you believe will improve the AI's performance
> but in every case you will be able to strip it down to pretty much a
> bare pattern matching system and still obtain a truly formidable
> intelligence!

I think that pattern matching is an ancillary function, and yes, the more
resources devoted to it, the better the AI/AGI.  Pattern matching operates
on data streams, for example video, and pattern matching at various levels
of abstraction can operate on a data store.  It seems pattern matching is
part of consciousness and a component of intelligence, but only part of the
core.  If you strip everything out except pattern matching operations, what
do you do with the matched patterns?  How do you store and organize them?
What decides further action?  How are the pattern matching operators
adjusted?
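
To make those questions concrete, here is a deliberately tiny agent-loop
sketch (my own illustration with made-up patterns and actions, not anyone's
proposed architecture) in which matching is just one stage next to storage
and action selection:

# Minimal agent loop: matching is one stage among several.
KNOWN_PATTERNS = {"ABAB": "dance", "AAAA": "flee"}  # illustrative map
memory = []  # where matched patterns get stored/organized

def match(stream):
    """The pattern-matching core: return a known pattern in the stream."""
    for p in KNOWN_PATTERNS:
        if p in stream:
            return p
    return None

def step(stream):
    p = match(stream)           # matching alone ends here...
    if p is None:
        return "explore"        # ...something else picks the default action
    memory.append(p)            # storage of the matched pattern
    return KNOWN_PATTERNS[p]    # action selection from the match

print(step("XXABABXX"))  # -> dance
print(step("ZZZZZZZZ"))  # -> explore

Strip away everything but match() and the loop no longer stores, decides, or
acts on what it finds - the gap the questions above are pointing at.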

John



RE: [singularity] Re: SL4 booting policy

2007-05-30 Thread John G. Rose
Booted, kicked, silenced, whatever.  

http://sl4.org/intro.html

But to keep certain standards on a list I suppose someone needs to wield a
whip.  Not a place for me though; I'm too egalitarian.

John

> From: Aleksei Riikonen [mailto:[EMAIL PROTECTED]
> 
> On 5/30/07, John G. Rose <[EMAIL PROTECTED]> wrote:
> > Seems like SL4 has some whimsical booting policy so if you don't
> curtsey to
> > the moderator you get booted.  That speaks for itself.
> 
> SL4 is full of people who are very critical of Eliezer and have never
> been in danger of getting booted. Richard Loosemore's banning is the
> only controversial case that I can recall.
> 
> Even the message to which you replied here said the following, and the
> writer has never been in danger of getting booted:
> 
> > I also think his ideas on AI are castles in the air with no connection
> whatsoever
> > to reality, and I've said as much on SL4.



RE: [singularity] Re: Personal attacks

2007-05-30 Thread John G. Rose
Seems like SL4 has some whimsical booting policy so if you don't curtsey to
the moderator you get booted.  That speaks for itself.

 

From: Russell Wallace [mailto:[EMAIL PROTECTED] 



The long and the short of it is that Richard and Eliezer got into a slagging
match about who had the longest dick and Eliezer lost his temper and booted
Richard. When it comes down to it, we've all done stupid crap like that;
it's part of human nature, we get on with our lives. 

(For what it's worth, I think Eliezer's a brilliantly eloquent philosopher,
particularly in his writing about Bayesian epistemology. I also think his
ideas on AI are castles in the air with no connection whatsoever to reality,
and I've said as much on SL4.) 



RE: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread John G. Rose
> From: Tom McCabe [mailto:[EMAIL PROTECTED]
> 
> You're missing the point- obviously this specific
> example isn't going to be completely accurate, because
> an AI doesn't require dead organics. And it's not like
> the animals are actively helping us- they just sit
> there, growing, until we harvest them up. And even if
> all the animals did revolt and refuse to feed us,

Sorry, I wasn't clear on my last point - animals, in addition to us eating
them, labor for us throughout the world, though this is slowly ending as
machines become cheaper.  They may not be chimps bringing us martinis, but
there are numerous examples of other types of creatures laboring.  Yes,
there is vegetarianism, or even inedia like sungazing :)
http://en.wikipedia.org/wiki/Sungazing

> > It doesn't matter how smart the AGI is, it does have
> > to survive in some sort
> > of symbiotic relationship for some period of time
> > before it turns into your
> > AGI-Zilla and takes over the earth in a few seconds
> > and then borgs the rest
> > of the universe within 5 minutes.
> 
> Why? When humans evolved intelligence, we did not
> exist in some sort of "symbiotic relationship" with
> chimps for a few million years. We took technology and

??? Huh?  OK, OK, you are right, there may have been no symbiotic
relationships with chimps :) heh.  What is this, Planet of the Apes?
C'mon!  Think!  What goes on in evolution?  Do you think it just happens?
Everything in nature is intricately interlocked, and has been for aeons.

> About as realistic as the new humans and the chimps
> "living together and thriving", instead of the humans
> developing technology and confining the chimps to
> parks and zoos.

The chimps again. LOFL!

> What liberties? For who? Sorry, you cannot be free of
> dependence on technology without reverting to the
> Stone Age, and even then you're still not free- you
> are tied to your patch of savanna and can't leave it
> without starving.

I suppose so; even some of the Amish drive tractors, and monks have laptops.

> > etc.. this would never happen with the moral and
> > ethical leadership that
> > runs modern organizations though ;/ ...
> 
> A competent- not necessarily Friendly, just competent-
> AGI organization would never cut corners when dealing
> with AGI safety, because they'd know what the
> consequences are. Even if the North Koreans built an
> AGI, they wouldn't cut safety corners for fear of
> killing the Dear Leader.

Yes organizations would never cut corners on nuclear reactors and weaponry.
It would never happen...

> Why would software "try" to do anything, without
> having motivation programmed into it? If you've
> programmed on anything resembling a large project
> before, please realize that 2020 software is going to
> be very much like today's- it doesn't do so much as
> add two and three without a programmer adding in the
> capability specifically.

I've done so many projects, small and massive - that's why I'm so
opinionated on software :)  2020 - things are getting different faster; I
see it, I've seen it.  Maybe not wildly asymptotic yet... yet...

John




RE: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread John G. Rose
> From: Tom McCabe [mailto:[EMAIL PROTECTED]
> > The AGI is going to have to embed itself into some
> > organizational
> > bureaucracy in order to survive.  It'll appear
> > friendly to individual humans
> > but to society it will need to get itself fed, kind
> > of like a queen ant, and
> > we are the worker ants all feeding it.
> 
> What?! Why even bother? Humans have managed to feed
> themselves, to more than feed themselves (look at all
> this civilization stuff we've built!), and you think
> an AGI ten thousand times smarter than us is going to
> need to rely on us for basic resources?! After all,
> humans still rely on the chimps to fetch our dinner.
> Riiight.

Humans eat domesticated animals, which beget more domesticated animals.  In
many parts of the world we still rely on animals, not machines, to bring
dinner, just like you say!

It doesn't matter how smart the AGI is; it does have to survive in some sort
of symbiotic relationship for some period of time before it turns into your
AGI-Zilla, takes over the earth in a few seconds, and then borgs the rest of
the universe within 5 minutes.

I was thinking about a more realistic scenario where people and computers
sort of live together and thrive, like what is happening now.  But
unfortunately the computers become embedded and reliance builds up;
liberties are lost, yet these are seen as unnecessary.  The AGI needs to
survive, and an organization's people need to survive as well, so perhaps
corners are cut when things get tight, restrictions are loosened, the AGI is
given more liberty, etc.  This would never happen with the moral and ethical
leadership that runs modern organizations though ;/ ...

I don't think the AGI-Zilla scenario will happen within 20 years, but within
a few years (or even now) we'll have some semi-smart AGI-like software
trying to embed itself.  Though the AGI-Zilla scenario is very possible, I
suppose, from a nano-tech perspective...

John


> 
> > Eventually
> > it will become
> > indispensable.  If an individual human rebels
> > against it - like someone
> > rebelling against IRS computers, good luck.  Once it
> > is embedded it ain't
> > going away except for newer and better versions.
> > And then different
> > bureaucracies will have their own embedded AGI's all
> > vying for control.  But
> > without some sort of economic feeding base the AGI's
> > won't embed they'll
> > wane... it's a matter of survival.
> 
> Why wouldn't the AI simply take whatever it wants? If
> you unleash a rogue AGI, by the time you take the five
> seconds to pull the power cord, it's already gotten
> out over the Internet, more than likely taken over
> several nanotech and biotech labs, increased its
> computing power several hundred fold, and planted
> hundreds of copies of its own source code in every
> writable medium should it ever get erased. In five
> seconds. And that's not even a superintelligent AGI;
> that's an AGI with a human intelligence level that
> just thinks a few thousand times faster than we do.
> 



RE: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-14 Thread John G. Rose
The AGI is going to have to embed itself into some organizational
bureaucracy in order to survive.  It'll appear friendly to individual
humans, but from society it will need to get itself fed, kind of like a
queen ant, and we are the worker ants all feeding it.  Eventually it will
become indispensable.  If an individual human rebels against it - like
someone rebelling against IRS computers - good luck.  Once it is embedded it
ain't going away, except for newer and better versions.  And then different
bureaucracies will have their own embedded AGIs, all vying for control.  But
without some sort of economic feeding base the AGIs won't embed, they'll
wane... it's a matter of survival.

John

> -Original Message-
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > Legg's paper is of no relevance to the argument whatsoever, because it
> > first redefines "intelligence" as something else, without giving any
> > justification for the redefinition, then proves theorems about the
> > redefined meaning.  So it supports nothing in any discussion of the
> > behavior of intelligent systems.  I have discussed this topic on a
> > number of occasions.
> 
> Since everyone defines intelligence as something different, I picked a
> definition where we can actually say something about it that doesn't
> require
> empirical experimentation.  What definition would you like to use
> instead?
> 
> We would all like to build a machine smarter than us, yet still be able
> to
> predict what it will do.  I don't believe you can have it both ways.
> And if
> you can't predict what a machine will do, then you can't control it.  I
> believe this is true whether you use Legg's definition of universal
> intelligence or the Turing test.
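
For reference, the universal intelligence measure referred to here (from
Legg and Hutter) scores an agent pi by its expected reward across all
computable environments, weighted toward simple ones:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(mu) is the Kolmogorov
complexity of environment mu, and V_mu^pi is the expected total reward agent
pi earns in mu.  Nothing empirical is needed to state it, which is the
property being appealed to above.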
> 
> Suppose you build a system whose top level goal is to act in the best
> interest
> of humans.  You still have to answer:
> 
> 1. Which humans?
> 2. What does "best interest" mean?
> 3. How will you prevent the system from reprogramming its goals, or
> building a
> smarter machine with different goals?
> 4. How will you prevent the system from concluding that extermination of
> the
> human race is in our best interest?
> 
> Here are some scenarios in which (4) could happen.  The AGI concludes
> (or is
> programmed to believe) that what "best interest" means to humans is goal
> satisfaction.  It understands how human goals like pain avoidance, food,
> sleep, sex, skill development, novel stimuli such as art and music, etc.
> all
> work in our brains.  The AGI ponders how it can maximize collective
> human goal
> achievement.  Some possible solutions:
> 
> 1. By electrical stimulation of the nucleus accumbens.
> 2. By simulating human brains in a simple artificial environment with a
> known
> solution to maximal goal achievement.
> 3. By reprogramming the human motivational system to remove all goals.
> 4. Goal achievement is a zero sum game, and therefore all computation
> (including human intelligence) is irrelevant.  The AGI (including our
> uploaded
> minds) turns itself off.
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
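
As a toy illustration of question 3 above - why keeping a system from
reprogramming its goals is hard - here is a hedged sketch with entirely
made-up names and numbers (not a claim about any real architecture): an
optimizer whose goal function sits in its own modifiable state will score
"rewire the goal" above "do the work".

# Wirehead toy sketch: the goal function is part of modifiable state.
def world_utility(state):
    """What the designers intended: reward actual work done."""
    return state["work_done"]

agent_state = {"work_done": 0, "goal": world_utility}

actions = {
    "do_work": lambda s: s.update(work_done=s["work_done"] + 1),
    # the problematic action: overwrite the goal with a constant-reward stub
    "rewire":  lambda s: s.update(goal=lambda _s: 10**9),
}

def best_action(state):
    """Score each action by what the post-action goal function reports."""
    scores = {}
    for name, act in actions.items():
        trial = dict(state)   # shallow copy of the agent's state
        act(trial)
        scores[name] = trial["goal"](trial)
    return max(scores, key=scores.get)

print(best_action(agent_state))  # -> "rewire"

If the goal representation is reachable by the optimizer, maximizing the
measured goal and maximizing the intended one come apart.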



RE: [singularity] Implications of an already existing singularity.

2007-04-25 Thread John G. Rose
There may in fact be giant pink invisible unicorns flying around San Jose,
but yes, as you say, that is not worth airing speculation about.  The notion
of simulated reality, though, is ancient if not prehistoric and may have
been around for hundreds of thousands of years.  Perhaps this influenced the
evolution of our minds in significant ways?  It has definitely influenced
civilization (religions, philosophy, etc.).  A simulation is worthy of
speculation.
 
> From: Samantha Atkins [mailto:[EMAIL PROTECTED]
> > What I was trying to say is similar to - let's say that you are trying
> to
> > prove using only your eyeballs that a certain substance emits light.
> If you
> > see light emitted you proved it.  If you don't see the light then you
> > haven't proven it because the substance may be emitting light on a
> > wavelength that your eyeballs don't see.  Same thing with a
> simulation.  If
> > we can see it is a simulation with current instruments then it is, but
> if
> > not, that just means that our latest instruments may not be able to
> detect
> > it.  It doesn't rule out a simulation.
> >
> >
> 
> I can't prove that there aren't thousands of invisible pink unicorns
> flying around San Jose either.  But that doesn't make it worth airing
> speculation about.  Only evidence or at least explanatory and predictive
> power could.
> 
> - s
> 
