Re: Was:Singularity - Re: Intelligence

2010-04-17 Thread John Mikes
On Apr 15, 11:21 pm, Brent Meeker  wrote:
> I agree with the above and pushing the idea further has led me to the
> conclusion that intelligence is only relative to an environment. If you
> consider Hume's argument that induction cannot be justified - yet it is
> the basis of all our beliefs - you are led to wonder whether humans have
> "general intelligence".  Don't we really just have intelligence in this
> particular world with it's regularities and "natural kinds"?  Our
> "general intelligence" allows us to see and manipulate objects - but not
> quantum fields or space-time.

Hi, and I agree even more.
Without really proposing to define 'intelligence' (from the Latin *inter-lego*:
to read between the words, not fixed on the ONE meaning of them we are used
to),
I come to the word 'we' (several times asked on this list: who does it
refer to?)
and consider it the bunch of self-reflective gods in discussion and in
concert
(in Bruno's words: ASSUMED) - only within THIS environment (World,
Universe),
without actual assumptions of teleportation or of being copied (?) into
other 'environments',
of which we have no knowledge (not even imagination).
We are PRODUCTS (whatever that may mean) of *this* very environment -
we can think (and be 'intelligent') only *within* this environment.

As Brent put it:
>"...this particular world with its regularities and "natural kinds"..."<

into which I would also include our (human) ideas of "quantum fields or
space-time",
in congruence with Skeletori:
>>..."I think intelligence in the context of a particular world requires
acting within that world."...<<

And so is our logic (whichever we consider), including all "our" illogical,
impossible, or even supernatural extensions to our limitations within THIS
environment, even if called "artificial", "digital", or else.
We are "prisoners" of this world. No way to escape.
((With my wife's addition of the "ZOOKEEPERS" - 'aliens' - assumed to feed
us ideas, bases for religions and sciences, to keep us happy - or miserable.))

JohnM

On 4/16/10, Skeletori  wrote:
>
> Hi, I'm trying to move this to the intelligence thread.
>
> On Apr 15, 11:21 pm, Brent Meeker  wrote:
> > I agree with the above and pushing the idea further has led me to the
> > conclusion that intelligence is only relative to an environment. If you
> > consider Hume's argument that induction cannot be justified - yet it is
> > the basis of all our beliefs - you are led to wonder whether humans have
> > "general intelligence".  Don't we really just have intelligence in this
> > particular world with it's regularities and "natural kinds"?  Our
> > "general intelligence" allows us to see and manipulate objects - but not
> > quantum fields or space-time.
>
> Yeah, I think some no-free-lunch theorems in AI also point to this. I
> was thinking about the simple goal problem - what if we gave an AI all
> the books in the world and tell it to compress them? That could yield
> some very complex internal models... but how would it relate them to
> the real world? When humans are taught language they learn to "ground"
> the concepts at the same time.
>
> That leads me to believe that AIs will in practice need special
> training programs where they proceed from simple problems to more
> complex ones (this is called shaping), much like humans, while staying
> "grounded" from the start. It's a really interesting race: which will
> arrive first, brain digitization or strong AI? My money's on the
> former right now because I believe the engineering of the training
> programs is a big task.
>
> Anybody think strong AI is inherently much easier? I'd very much like
> to be proven wrong because I think early brain digitization will
> likely lead to digital exploitation.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Was:Singularity - Re: Intelligence

2010-04-17 Thread Bruno Marchal


On 16 Apr 2010, at 19:07, Brent Meeker wrote:



I think intelligence in the context of a particular world requires  
acting within that world.  Humans learn language starting with  
ostensive definition: (pointing) "There that's a chair.  Sit in  
it.   That's what it's for. Move it where you want to sit."  An AI  
that was given all the books in the world to learn might very well  
learn something, but it would have a different kind of intelligence  
than human because it developed and functions in a different  
context.  For an AI to develop human like intelligence I think it  
would need to be in something like a robot, something capable of  
acting in the world.



I agree with you. Unless you mean by 'world' a necessarily physical or  
material world. The 'robot' cannot distinguish a physical world (if  
that exists primarily) from a virtual world, nor from an arithmetical  
world.
But I follow you that an intelligence may have to follow a long/deep   
computational history to acquire some skills.


We *can* program a machine with the instruction (roughly described) by  
"help yourself", and such a program may succeed in developing  
intelligence, but it may take a very long time.
Once done, it can be copied, like 'nature' does all the time. In that  
way evolution can be sped up, and the embryogenesis does that by  
"simulating" the phylogenesis in part.


For a platonist, AI research can be compared to fishing. The
'intelligent entities' are already 'there', and we may isolate them by
filtering techniques (like genetic programming, virtual evolution, or more
abstract techniques). Initial intelligence can take time, but
intelligence (and consciousness) are, for the programs having them,
self-speeding-up.


Do you agree that the nature of the base environment(s) is irrelevant
for the development of intelligence?
It depends only on mathematical truths like: "from brain state A, the
history-measure of Brent's brain's relative states B in which Brent
asserts 'we need primary matter' is bigger than the history-measure
where Brent asserts 'we don't'."


The measure is mathematical, but not arithmetical, although it refers  
only to number relations. But then "experiences" are epistemological,  
not ontological.


Bruno




http://iridia.ulb.ac.be/~marchal/






Re: Was:Singularity - Re: Intelligence

2010-04-16 Thread Brent Meeker

On 4/16/2010 3:16 AM, Skeletori wrote:

Hi, I'm trying to move this to the intelligence thread.

On Apr 15, 11:21 pm, Brent Meeker  wrote:
   

I agree with the above and pushing the idea further has led me to the
conclusion that intelligence is only relative to an environment. If you
consider Hume's argument that induction cannot be justified - yet it is
the basis of all our beliefs - you are led to wonder whether humans have
"general intelligence".  Don't we really just have intelligence in this
particular world with it's regularities and "natural kinds"?  Our
"general intelligence" allows us to see and manipulate objects - but not
quantum fields or space-time.
 

Yeah, I think some no-free-lunch theorems in AI also point to this. I
was thinking about the simple goal problem - what if we gave an AI all
the books in the world and tell it to compress them? That could yield
some very complex internal models... but how would it relate them to
the real world? When humans are taught language they learn to "ground"
the concepts at the same time.
   


I think intelligence in the context of a particular world requires 
acting within that world.  Humans learn language starting with ostensive 
definition: (pointing) "There, that's a chair.  Sit in it.  That's what 
it's for. Move it where you want to sit."  An AI that was given all the 
books in the world to learn might very well learn something, but it 
would have a different kind of intelligence than human because it 
developed and functions in a different context.  For an AI to develop 
human-like intelligence I think it would need to be in something like a 
robot, something capable of acting in the world.


Brent


That leads me to believe that AIs will in practice need special
training programs where they proceed from simple problems to more
complex ones (this is called shaping), much like humans, while staying
"grounded" from the start. It's a really interesting race: which will
arrive first, brain digitization or strong AI? My money's on the
former right now because I believe the engineering of the training
programs is a big task.

Anybody think strong AI is inherently much easier? I'd very much like
to be proven wrong because I think early brain digitization will
likely lead to digital exploitation.
   






Re: Was:Singularity - Re: Intelligence

2010-04-16 Thread Skeletori
Hi, I'm trying to move this to the intelligence thread.

On Apr 15, 11:21 pm, Brent Meeker  wrote:
> I agree with the above and pushing the idea further has led me to the
> conclusion that intelligence is only relative to an environment. If you
> consider Hume's argument that induction cannot be justified - yet it is
> the basis of all our beliefs - you are led to wonder whether humans have
> "general intelligence".  Don't we really just have intelligence in this
> particular world with it's regularities and "natural kinds"?  Our
> "general intelligence" allows us to see and manipulate objects - but not
> quantum fields or space-time.

Yeah, I think some no-free-lunch theorems in AI also point to this. I
was thinking about the simple goal problem - what if we gave an AI all
the books in the world and told it to compress them? That could yield
some very complex internal models... but how would it relate them to
the real world? When humans are taught language they learn to "ground"
the concepts at the same time.

That leads me to believe that AIs will in practice need special
training programs where they proceed from simple problems to more
complex ones (this is called shaping), much like humans, while staying
"grounded" from the start. It's a really interesting race: which will
arrive first, brain digitization or strong AI? My money's on the
former right now because I believe the engineering of the training
programs is a big task.

Anybody think strong AI is inherently much easier? I'd very much like
to be proven wrong because I think early brain digitization will
likely lead to digital exploitation.
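
For what it's worth, the "compress the books" intuition is easy to poke at
with a toy (my own illustration, nothing rigorous): a general-purpose
compressor really does build a small "model" of regular text, but that model
never connects to anything outside the text itself - the grounding worry in
miniature.

```python
import random
import zlib

# Toy illustration (mine, not from the thread): compressed size as a
# crude proxy for how much regularity a body of text contains.
structured = b"the cat sat on the mat. " * 200          # 4800 bytes of pattern
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(4800))  # 4800 patternless bytes

c_structured = len(zlib.compress(structured))
c_noise = len(zlib.compress(noise))

# The regular text collapses to a small fraction of its size; the noise
# does not shrink at all. The compressor "understood" the pattern - yet
# it knows nothing about cats or mats.
print(c_structured, c_noise)
```

The gap between the two numbers is the regularity the compressor captured;
what it cannot capture, by construction, is what the words refer to.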




Re: Was:Singularity - Re: Intelligence

2010-04-15 Thread Brent Meeker

On 4/15/2010 1:06 PM, Skeletori wrote:

On Apr 9, 7:39 pm, Jason Resch  wrote:
   

You would need to design a very general fitness test for measuring
intelligence, for example the shortness and speed at which it can find
proofs for randomly generated statements in math, for example.  Or the
accuracy and efficiency at which it can predict the next element given
sequenced pattern, the level of compression it can achieve (shortest
description) given well ordered information, etc.  With this fitness test
you could evolve better intelligences with genetic programming or a genetic
algorithm.
 

Those tests are good components of a general AI... but it still feels
like building a fully independent agent would involve a lot of
engineering. If we want to achieve an intelligence explosion, or TS,
we need some way of expressing that goal to the AI. ISTM it would take
a lot of prior knowledge.

If the agent was embodied in an actual robot, it would need to be able
to reason about humans. A simple goal like "stay alive" won't do
because it might decide to turn humans into biofuel. On the other
hand, if the agent was put in a virtual world things would be easier
because its interactions could be easily restricted... but it would
need some way of performing experiments in the real world to develop
new technologies. Unless it could achieve IE through pure mathematics.

Anyway, I think humans are going to fiddle with AIs as long as they
can, because it's more economical that way. We could plug in speech
recognition, vision, natural language, etc. modules to the AI to
bootstrap it, but even that could lead to problems. If there are any
loopholes in a fitness test (or reward function, or whatever) then the
AI will take advantage of them. For example, it could learn to
position itself in such a way that its vision system wouldn't
recognize a human, and then it could kill the human for fuel.

So I'm still suspecting that what we want a general AI to do wouldn't
be general at all but something very specific and complex. Are there
simple goals for a general AI?
   


I agree with the above and pushing the idea further has led me to the 
conclusion that intelligence is only relative to an environment. If you 
consider Hume's argument that induction cannot be justified - yet it is 
the basis of all our beliefs - you are led to wonder whether humans have 
"general intelligence".  Don't we really just have intelligence in this 
particular world with its regularities and "natural kinds"?  Our 
"general intelligence" allows us to see and manipulate objects - but not 
quantum fields or space-time.


Brent




Was:Singularity - Re: Intelligence

2010-04-15 Thread Skeletori
On Apr 9, 7:39 pm, Jason Resch  wrote:
> You would need to design a very general fitness test for measuring
> intelligence, for example the shortness and speed at which it can find
> proofs for randomly generated statements in math, for example.  Or the
> accuracy and efficiency at which it can predict the next element given
> sequenced pattern, the level of compression it can achieve (shortest
> description) given well ordered information, etc.  With this fitness test
> you could evolve better intelligences with genetic programming or a genetic
> algorithm.

Those tests are good components of a general AI... but it still feels
like building a fully independent agent would involve a lot of
engineering. If we want to achieve an intelligence explosion, or TS,
we need some way of expressing that goal to the AI. ISTM it would take
a lot of prior knowledge.
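
Just to make the fitness-test idea concrete, here is a minimal toy sketch
(entirely my own illustration, not anything Jason specified): a genetic
algorithm evolving a linear rule next = a*last + b to predict the next
element of doubling sequences.

```python
import random

# Training data: each sequence follows next = 2 * last exactly,
# so the exact optimum is (a, b) = (2, 0).
SEQS = [([1, 2, 4, 8], 16), ([3, 6, 12], 24), ([5, 10], 20)]

def fitness(genome):
    # Negative squared prediction error over the sequences (higher is better).
    a, b = genome
    return -sum((a * seq[-1] + b - nxt) ** 2 for seq, nxt in SEQS)

def evolve(pop_size=60, generations=300, seed=1):
    rng = random.Random(seed)
    # Random initial population of (a, b) rules.
    pop = [(rng.uniform(-3, 3), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]          # truncation selection: keep best fifth
        children = [
            (p[0] + rng.gauss(0, 0.05), p[1] + rng.gauss(0, 0.05))  # mutate
            for p in rng.choices(elite, k=pop_size - len(elite))
        ]
        pop = elite + children                # elitism: best survive unchanged
    return max(pop, key=fitness)

a, b = evolve()  # should land near the exact rule a=2, b=0
```

Of course, a rule evolved this way "predicts sequences" only in the sense
that it scores well on the test - which is exactly the loophole problem:
whatever the fitness function leaves unmeasured, the evolved agent is free
to ignore.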

If the agent was embodied in an actual robot, it would need to be able
to reason about humans. A simple goal like "stay alive" won't do
because it might decide to turn humans into biofuel. On the other
hand, if the agent was put in a virtual world things would be easier
because its interactions could be easily restricted... but it would
need some way of performing experiments in the real world to develop
new technologies. Unless it could achieve IE through pure mathematics.

Anyway, I think humans are going to fiddle with AIs as long as they
can, because it's more economical that way. We could plug in speech
recognition, vision, natural language, etc. modules to the AI to
bootstrap it, but even that could lead to problems. If there are any
loopholes in a fitness test (or reward function, or whatever) then the
AI will take advantage of them. For example, it could learn to
position itself in such a way that its vision system wouldn't
recognize a human, and then it could kill the human for fuel.

So I'm still suspecting that what we want a general AI to do wouldn't
be general at all but something very specific and complex. Are there
simple goals for a general AI?

On Apr 9, 7:39 pm, Jason Resch  wrote:
> That kind of reminds me of the proposals in many countries to tax virtual
> property, like items in online multiplayer games.  It is rather absurd, it
> is nothing but computations going on inside some computer which lead to
> different visual output on people's monitors.  Then there are also things
> such as network neutrality, which threaten the control of the Internet.  I
> agree with you that there are dangers from the established interests fearing
> loss of control as things go forward, and it is something to watch out for,
> however I am hopeful for a few reasons.  One thing in technology's favour is
> that for the most part it changes faster than legislatures can keep up with
> it.  When Napster was shut down new peer-to-peer protocols were developed to
> replace it.  When China tries to censor what its citizens see its populace
> can turn to technologies such as Tor, or secure proxies.

Maybe I'm too paranoid... I'm assuming that on issues of great
strategic importance, like TS, they'd act decisively. Like the PATRIOT
Act was enacted less than 2 months after 9/11.

It's really hard to say what the state of the world will be in 2050 or
so. There are some trends, though. I think the race to the bottom w/rt
wages will require authoritarian solutions (economic inequality tends
to erode democratic institutions), and so will the intensifying
tensions between the major powers (people have to persuaded to accept
wars). If destructive technologies continue to outpace defensive ones
then that will mean more control, too (or we'll just blow ourselves
up).




Re: Was:Singularity - Re: Intelligence

2010-04-14 Thread Brent Meeker

On 4/14/2010 12:59 PM, John Mikes wrote:
Bruno, mea culpa. - My slip is showing. I could not eliminate entirely 
the brainwashing I got in college for the figment of 'natural 
sciences' - no matter how long ago that was.
So: please never mind if I go with Dr. Johnson's toe "that REALLY 
hurt" and cannot get over my experience of having been hungry.
There is another question (from the other side of the coin in this 
discussion???)

Why do we always mention "DIGITAL"?


I think it's because there's no good theory of "computable" for 
functions over the real numbers or for continua in general except those 
that can be approximated digitally by Turing machines.


it reminds us to that embryonic contraption of our binary computer we 
have been using over the past 1/2 century. Even if we 'assume' a 
million of them in concert. I assume a better way than using this 
unidentified - so called - *'electricity'* with its 2 poles only - 
facilitating the 'binary' effect. I wrote a sci-fi with characters 
using some driving force with 3 poles (still a measly one-step up) and 
physicists turned upside down with "objective" questions (I had no 
answers). That was at the very end of the last millennium.


There have been some digital computers built using three states instead 
of two (base three computation).  But early on John von Neumann proved 
that in terms of error rate a binary basis was optimum.  Of course 
that's purely an engineering/physics consideration for reliable 
operation in the presence of noise - it has nothing to do with 
arithmetic or the theories of  computation and Turing machines.
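
As a side note, the closely related "radix economy" calculation (a textbook
cousin of the reliability point, not von Neumann's noise analysis itself) is
easy to reproduce: representing N in base b takes about log_b(N) digits of b
states each, so the cost is proportional to b/ln(b), minimized at b = e, which
makes 3 the cheapest integer base, with 2 and 4 exactly tied just behind.

```python
import math

def radix_cost(b):
    """Radix economy: (states per digit) x (digits needed) ~ b / ln(b)."""
    return b / math.log(b)

# Base 3 minimizes the cost among integer bases; 2 and 4 are exactly tied,
# since 4/ln(4) = 4/(2 ln 2) = 2/ln(2).
costs = {b: round(radix_cost(b), 3) for b in (2, 3, 4, 5, 10)}
```

This is purely about representation cost; the engineering choice of binary
rests on the separate reliability-under-noise considerations mentioned above.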


Brent

I also hinted to the English twentyfourary computing system (41ary in 
Hungarian) called language, driven by a so far 'physically' 
_unidentified_ driving force - _mentality?_ - but applied with pretty 
good efficiency so far.  And I don't even consider this the ultimate.
I wanted to ask several times what kind of 'system' and 'driving' is 
assumed for the Loebian(?),
the Universal computing? Is there any? and this continues into a more 
involved one:
I, as a 'universal computer' (god?) am driven *by what* in my 
*computing*? What does my mentality assumedly apply beyond that 
6Wmin or so keeping the neurons biologically alive?

John M


On 4/12/10, Bruno Marchal wrote:


John,

On 12 Apr 2010, at 16:31, John Mikes wrote:


To Jason's fantasy-contest (just imagine and put it as 'reality?)
upon his



John, Jason did not *imagine* and then put as *real*. Instead, he
was *assuming* and then *deriving* consequences. You talk as if
we could ever know for sure anything objective. But sciences are
collections of beliefs/theories/assumptions/hypotheses/postulates,
and if a belief is true, we can never know it as such.

What we can know does not belong to the scientific discourse, be
it the existence of god, or of headache.
We can make theories about those non communicable knowledge.
Yet, such theories about knowledge are beliefs, not knowledge.
They may be false.




> In an uploaded state you could spend all day eating from an
unlimited buffet
> of any food you could think of (and more) and get neither full
nor fat.


Well the Romans did that. Eating all the day, even days after
days, without stopping. Just vomit after the meal!




I have a memory of the same, when I had nothing to eat, was
miserable and hungry during WWII and *'dreamed'* about delicious
food...
Not a good memory though



I think I can understand, having known many people who survived
that period. But of course Jason was not talking about a human
imagining eating, but about an uploaded human in a virtual
environment. (That is possible assuming digital mechanism).

Now, if it has been completely uploaded with a genuine virtual
body: Jason is correct when saying that he *may* have an unlimited
buffet, but is, strictly speaking wrong that he can enjoy it,
without bringing modifications and changes in its 'virtual' body
and brain, so as to be able to appreciate it without vomiting in
the virtual reality!

But this is irrelevant for Jason's point, to be sure.

Nevertheless, this points to the fact that one day we will almost
all become virtual, for the purely economical reason that virtual
food and life will be less expensive than carbon-based stuff. And
more easily spreadable in the galaxy. With some hope we can make
Earth a Carbon Museum.

This will not prove that mechanism is true. It will just be the
time to hope it to be true.

Of course, we are already *arithmetical* (by UDA). "Arithmetical"
= virtual and executed by the elementary arithmetical dovetailing,
which exists provably so in elementary arithmetic (assuming comp).
And this is refutable, given that the laws of physics become
completely derivable from number theory.

Re: Was:Singularity - Re: Intelligence

2010-04-14 Thread John Mikes
Bruno, mea culpa. - My slip is showing. I could not eliminate entirely the
brainwashing I got in college for the figment of 'natural sciences' - no
matter how long ago that was.
So: please never mind if I go with Dr. Johnson's toe "that REALLY hurt" and
cannot get over my experience of having been hungry.

There is another question (from the other side of the coin in this
discussion???)

Why do we always mention "DIGITAL"? It reminds us of that embryonic
contraption of our binary computer we have been using over the past half
century. Even if we 'assume' a million of them in concert. I assume a better
way than using this unidentified - so called - *'electricity'* with its 2
poles only - facilitating the 'binary' effect. I wrote a sci-fi
with characters using some driving force with 3 poles (still a measly
one-step up) and physicists turned upside down by "objective" questions (I
had no answers). That was at the very end of the last millennium.

I also hinted at the English twenty-four-ary computing system (41-ary in
Hungarian) called language, driven by a so far 'physically' *unidentified*
driving force - *mentality?* - but applied with pretty good efficiency so
far.  And I don't even consider this the ultimate.

I wanted to ask several times what kind of 'system' and 'driving' is assumed
for the Loebian(?),
the Universal computing? Is there any? and this continues into a more
involved one:
I, as a 'universal computer' (god?) am driven *by what* in my *computing*?
What does my mentality assumedly apply beyond that 6Wmin or so keeping the
neurons biologically alive?

John M

On 4/12/10, Bruno Marchal  wrote:
>
> John,
>
>  On 12 Apr 2010, at 16:31, John Mikes wrote:
>
>  To Jason's fantasy-contest (just imagine and put it as 'reality?) upon
> his
>
>
>
>
>
> John, Jason did not *imagine* and then put as *real*. Instead, he was
> *assuming* and then *deriving* consequences. You talk as if we could ever
> know for sure anything objective. But sciences are collections of
> beliefs/theories/assumptions/hypotheses/postulates, and if a belief is true,
> we can never know it as such.
>
>
> What we can know does not belong to the scientific discourse, be it the
> existence of god, or of headache.
> We can make theories about those non communicable knowledge.
> Yet, such theories about knowledge are beliefs, not knowledge. They may be
> false.
>
>
>
>
>
>
> *> In an uploaded state you could spend all day eating from an unlimited
> buffet
> > of any food you could think of (and more) and get neither full nor fat.
> *
> **
>
>
>
> Well the Romans did that. Eating all the day, even days after days, without
> stopping. Just vomit after the meal!
>
>
>
>
>
>  I have a memory of the same, when I had nothing to eat, was miserable and
> hungry during WWII and *'dreamed'* about delicious food...
>
> Not a good memory though
>
>
>
>
>
> I think I can understand, having known many people who survived that
> period. But of course Jason was not talking about a human imagining eating,
> but about an uploaded human in a virtual environment. (That is possible
> assuming digital mechanism).
>
>
> Now, if it has been completely uploaded with a genuine virtual body: Jason
> is correct when saying that he *may* have an unlimited buffet, but is,
> strictly speaking, wrong that he can enjoy it, without bringing modifications
> and changes in its 'virtual' body and brain, so as to be able to appreciate
> it without vomiting in the virtual reality!
>
>
> But this is irrelevant for Jason's point, to be sure.
>
>
> Nevertheless, this points to the fact that one day we will almost all
> become virtual, for the purely economical reason that virtual food and life
> will be less expensive than carbon-based stuff. And more easily spreadable
> in the galaxy. With some hope we can make Earth a Carbon Museum.
>
>
> This will not prove that mechanism is true. It will just be the time to
> hope it to be true.
>
>
> Of course, we are already *arithmetical *(by UDA). "Arithmetical" =
> virtual and executed by the elementary arithmetical dovetailing, which
> exists provably so in elementary arithmetic (assuming comp). And this is
> refutable, given that the laws of physics become completely derivable from
> number theory.
>
>
> If you want, roughly speaking:  sciences = sharable and correctable third
> person beliefs. Religion = non communicable personal knowledge.
> Now, *in* the theory "mechanism", you can *prove *many theorems about the
> relations between beliefs and knowledge, and between science and religion.
> But proving a proposition concerning reality does not make it true. It
> makes it only a theorem in a theory, which we can never know to be true.
> Even if that very theory gives the correct mass of the Higgs boson with one
> billion correct decimals, this will not make it possible to *know* the
> mass of the boson, *as such*. We can know it only in the serendipitously
> Theaetetical sense, that we believe in a theory/machine, and it happen

Re: Was:Singularity - Re: Intelligence

2010-04-14 Thread Bruno Marchal


On 12 Apr 2010, at 20:24, Brent Meeker wrote:


On 4/12/2010 6:26 AM, Jason Resch wrote:




On Sun, Apr 11, 2010 at 5:13 PM, silky   
wrote:
On Mon, Apr 12, 2010 at 5:50 AM, Jason Resch   
wrote:


[...]

> In an uploaded state you could spend all day eating from an  
unlimited buffet
> of any food you could think of (and more) and get neither full  
nor fat.  In
> the end it is just firings of your neurons (artificial or  
otherwise) and if
> uploaded, that would be all there is to you, there would be no  
metabolism,
> and no additional resources would be sacrificed to provide the  
experience of

> eating that food.

Potentially an interesting question, though, is would it still mean
anything, if there were no consequences?



I think there are still consequences of your actions, I don't  
imagine uploading would be an entirely solitary experience, you  
would still interact with others and create new relationships.   
There would be few external consequences seen from outside the  
computer, but I don't think that diminishes the goings on within.   
Much like someone could point to a dreaming person and say it  
doesn't matter if he is having a nice dream or a terrible  
nightmare, I think it still matters (to person who is dreaming).


Jason


I think, according to Bruno, this is where we already are - being  
generated digitally by the UD.  :-)


Yes, and from UDA + the last conversation, it seems you are willing to
think it follows from mechanism - unless you think that consciousness
here and now might depend on inactive pieces of physical stuff there
and later, and you have to believe also that a movie of a brain's
activity is conscious in real time (despite no computations being done
at all, and "real time" making no sense (cf. MGA3)), or to believe
that all consciousness supervenes on nothing at all, etc.


To be sure none of us (first person) is ever generated by the UD, only  
our infinitely many third person computational states are generated,  
most of them relatively equivalent (locally), but different (globally,  
they appear at different UD-time steps). The first person view is  
relatively indeterminate on all those third person states/histories,  
and that is why eventually we have to justify the physical laws by  
that indeterminacy. UDA is really a reduction of the mind body problem  
to a pure mathematical body problem. AUDA provides a beginning of  
answer, and Quantum Mechanics provides an embryo of confirmation. Only
the future will refute it, or not.


This provides a many-worlds, or many-dreams, interpretation of  
elementary arithmetic (or combinatory logic, etc.). It extends  
Everett's embedding of the physicist in physics to an embedding of the  
mathematician in mathematics, or the arithmetician in arithmetic.


Bruno

http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Was:Singularity - Re: Intelligence

2010-04-12 Thread Brent Meeker

On 4/12/2010 6:26 AM, Jason Resch wrote:



On Sun, Apr 11, 2010 at 5:13 PM, silky wrote:


On Mon, Apr 12, 2010 at 5:50 AM, Jason Resch wrote:

[...]

> In an uploaded state you could spend all day eating from an
unlimited buffet
> of any food you could think of (and more) and get neither full
nor fat.  In
> the end it is just firings of your neurons (artificial or
otherwise) and if
> uploaded, that would be all there is to you, there would be no
metabolism,
> and no additional resources would be sacrificed to provide the
experience of
> eating that food.

Potentially an interesting question, though, is would it still mean
anything, if there were no consequences?



I think there are still consequences of your actions; I don't imagine 
uploading would be an entirely solitary experience: you would still 
interact with others and create new relationships.  There would be few 
external consequences visible from outside the computer, but I don't 
think that diminishes the goings-on within.  Much as someone could 
point to a dreaming person and say it doesn't matter whether he is having a 
nice dream or a terrible nightmare, I think it still matters (to the 
person who is dreaming).


Jason


I think, according to Bruno, this is where we already are - being 
generated digitally by the UD.  :-)


Brent




Re: Was:Singularity - Re: Intelligence

2010-04-12 Thread Bruno Marchal

John,

On 12 Apr 2010, at 16:31, John Mikes wrote:

To Jason's fantasy-contest (just imagine and put it as 'reality?)  
upon his



John, Jason did not imagine and then posit it as real. Instead, he was  
*assuming* and then *deriving* consequences. You talk as if we could  
ever know anything objective for sure. But the sciences are collections  
of beliefs/theories/assumptions/hypotheses/postulates, and if a belief  
is true, we can never know it as such.


What we can know does not belong to the scientific discourse, be it  
the existence of god, or of a headache.

We can make theories about that non-communicable knowledge.
Yet such theories about knowledge are beliefs, not knowledge. They  
may be false.






> In an uploaded state you could spend all day eating from an  
unlimited buffet
> of any food you could think of (and more) and get neither full nor  
fat.




Well, the Romans did that: eating all day, even day after day,  
without stopping. Just vomit after the meal!




I have a memory of the same: when I had nothing to eat and was  
miserable and hungry during WWII, I 'dreamed' about delicious food...


Not a good memory though



I think I can understand, having known many people who survived that  
period. But of course Jason was not talking about a human imagining  
eating, but about an uploaded human in a virtual environment. (That is  
possible assuming digital mechanism.)


Now, if he has been completely uploaded with a genuine virtual body,  
Jason is correct when saying that he *may* have an unlimited buffet,  
but is, strictly speaking, wrong that he can enjoy it without  
modifications and changes to his 'virtual' body and brain, so as to be  
able to appreciate it without vomiting in the virtual reality!


But this is irrelevant to Jason's point, to be sure.

Nevertheless, this points to the fact that one day almost all of us  
will become virtual, for the purely economic reason that virtual food  
and life will be less expensive than carbon-based stuff. And more  
easily spreadable in the galaxy. With some hope we can make Earth a  
Carbon Museum.


This will not prove that mechanism is true. It will just be the time  
to hope it to be true.


Of course, we are already arithmetical (by UDA). "Arithmetical" =  
virtual and executed by the elementary arithmetical dovetailing, which  
provably exists in elementary arithmetic (assuming comp). And this  
is refutable, given that the laws of physics become completely  
derivable from number theory.
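(For readers unfamiliar with the term: "dovetailing" means interleaving the execution of an unbounded family of programs so that every program eventually gets arbitrarily many steps. The sketch below is only an illustrative toy, not Bruno's actual Universal Dovetailer; the function names and the example program family are invented for the illustration.)

```python
from itertools import count

def dovetail(programs, stages):
    """Toy dovetailer: at stage n, start program n, then run one step
    of every program started so far.  A single sequential process thus
    gives unbounded runtime to infinitely many programs."""
    gen = iter(programs)
    running = []   # generators started so far
    trace = []     # (program index, yielded value) pairs, in execution order
    for n in range(stages):
        running.append(next(gen))            # start program number n
        for i, prog in enumerate(running):   # one step of each started program
            trace.append((i, next(prog)))
    return trace

def counter(i):
    """Example 'program' number i: counts upward from i forever."""
    v = i
    while True:
        yield v
        v += 1

trace = dovetail((counter(i) for i in count()), stages=4)
# After 4 stages, program 0 has run 4 steps and program 3 only 1,
# yet no program is ever starved as the stages continue.
```

The real UD dovetails over all programs of a universal machine; the only point the sketch preserves is that interleaving lets one sequential process execute infinitely many programs "in parallel".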


If you want, roughly speaking: science = sharable and correctable  
third-person beliefs; religion = non-communicable personal knowledge.
Now, *in* the theory "mechanism", you can prove many theorems about  
the relations between beliefs and knowledge, and between science and  
religion.
But proving a proposition concerning reality does not make it true. It  
makes it only a theorem in a theory, which we can never know to be  
true. Even if that very theory gives the correct mass of the Higgs  
boson to one billion correct decimals, this will not make it possible  
to know the mass of the boson as such. We can know it only in the  
serendipitously Theaetetical sense: we believe in a theory/machine,  
and it happens that it is correct/self-referentially correct.



Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Was:Singularity - Re: Intelligence

2010-04-12 Thread John Mikes
To Jason's fantasy-contest (just imagine and put it as 'reality?) upon his

*> In an uploaded state you could spend all day eating from an unlimited
buffet
> of any food you could think of (and more) and get neither full nor fat. *
**
I have a memory of the same: when I had nothing to eat and was miserable
and hungry during WWII, I *'dreamed'* about delicious food...

Not a good memory though

John Mikes





On 4/11/10, silky  wrote:
>
> On Mon, Apr 12, 2010 at 5:50 AM, Jason Resch  wrote:
>
> [...]
>
> > In an uploaded state you could spend all day eating from an unlimited
> buffet
> > of any food you could think of (and more) and get neither full nor
> fat.  In
> > the end it is just firings of your neurons (artificial or otherwise) and
> if
> > uploaded, that would be all there is to you, there would be no
> metabolism,
> > and no additional resources would be sacrificed to provide the experience
> of
> > eating that food.
>
> Potentially an interesting question, though, is would it still mean
> anything, if there were no consequences?
>
>
> > Jason
>
> --
> silky
>
> http://www.programmingbranch.com/
>




Re: Was:Singularity - Re: Intelligence

2010-04-12 Thread Jason Resch
On Sun, Apr 11, 2010 at 5:13 PM, silky  wrote:

> On Mon, Apr 12, 2010 at 5:50 AM, Jason Resch  wrote:
>
> [...]
>
> > In an uploaded state you could spend all day eating from an unlimited
> buffet
> > of any food you could think of (and more) and get neither full nor fat.
> In
> > the end it is just firings of your neurons (artificial or otherwise) and
> if
> > uploaded, that would be all there is to you, there would be no
> metabolism,
> > and no additional resources would be sacrificed to provide the experience
> of
> > eating that food.
>
> Potentially an interesting question, though, is would it still mean
> anything, if there were no consequences?
>
>
>
I think there are still consequences of your actions; I don't imagine
uploading would be an entirely solitary experience: you would still interact
with others and create new relationships.  There would be few external
consequences visible from outside the computer, but I don't think that
diminishes the goings-on within.  Much as someone could point to a
dreaming person and say it doesn't matter whether he is having a nice dream
or a terrible nightmare, I think it still matters (to the person who is dreaming).

Jason




Re: Was:Singularity - Re: Intelligence

2010-04-11 Thread silky
On Mon, Apr 12, 2010 at 5:50 AM, Jason Resch  wrote:

[...]

> In an uploaded state you could spend all day eating from an unlimited buffet
> of any food you could think of (and more) and get neither full nor fat.  In
> the end it is just firings of your neurons (artificial or otherwise) and if
> uploaded, that would be all there is to you, there would be no metabolism,
> and no additional resources would be sacrificed to provide the experience of
> eating that food.

Potentially an interesting question, though, is would it still mean
anything, if there were no consequences?


> Jason

-- 
silky

  http://www.programmingbranch.com/




Re: Was:Singularity - Re: Intelligence

2010-04-11 Thread Jason Resch
On Sun, Apr 11, 2010 at 8:40 AM, John Mikes  wrote:

> On Fri, Apr 9, 2010 at 9:40 AM, Skeletori  wrote:
> > My hope and wish is that by this time, wealth and the economy as we know
> it
> > will be obsolete.  In a virtual world, where anyone can do or experience
> > anything, and everyone is immortal and perfectly healthy, the only
> commodity
> > would be the creativity to generate new ideas and experiences.  (I highly
>
> > recommend reading this page to see what such an existence could be:
> http://frombob.to/you/aconvers.html ; this one is also interesting:
> http://www.marshallbrain.com/discard1.htm).  If anyone can in the comfort
> > of their own virtual house experience drinking a soda, what need would
> there
> > be for Pepsi or Coke to exist as companies?
>
> Before bankrupting big companies, we may take a look at ourselves
> (humanity?) in the situation of being immortal, healthy
> with unlimited creativity (in facto). Does it include sex?
>

Sure, I think so.  The Marshall Brain piece addresses that topic in a few of
his chapters.  You need not worry about disease or unwanted pregnancy, and
you could look however you like.


> Should we include 'having babies' (the ultimate happiness)? In which case
> humanity would proliferate at an even higher level than now, all of them
> enjoying sex and proliferation?
>

The problem with reproduction, as mentioned on the frombob.to website above,
is that even at an extremely slow rate of producing new minds, growth is
still exponential.  And exponential growth means all resources in the
universe would be exhausted in a very short period of time (perhaps
as little as a million years if intelligent life arises in every galaxy),
precluding new life elsewhere from evolving.  It would be almost as selfish
and as unjust for one civilization to take all the resources before life
could evolve somewhere as it would be to come and take those resources
after it had evolved; therefore I disagree with Kurzweil that intelligent
matter will spread at the speed of light in all directions, consuming
everything in its wake.  Consider how many eons all humans on Earth could
live, powering our computations using Jupiter as a fuel source.
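The exponential-growth claim is easy to check with a back-of-envelope sketch. All the numbers below are illustrative assumptions for the sketch (a 1,000-year doubling time and the standard rough estimate of ~10^80 atoms in the observable universe), not figures from the thread:

```python
# Sketch: how fast does even very slow exponential growth exhaust a
# finite resource pool?  Both constants are assumptions for illustration.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # common rough estimate
DOUBLING_TIME_YEARS = 1_000            # assume minds double once per millennium

minds = 1
years = 0
while minds < ATOMS_IN_OBSERVABLE_UNIVERSE:
    minds *= 2                         # one doubling
    years += DOUBLING_TIME_YEARS

# 266 doublings suffice (2**266 > 10**80), i.e. only 266,000 years:
# far less than a million years, even at one doubling per millennium.
```

The point of the sketch is that the doubling time barely matters: multiplying it by ten only multiplies the exhaustion time by ten, which is still negligible on cosmic scales.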

However, reproduction and child raising could still be experienced in game
worlds, where all the participants are consenting individuals who have
uploaded.  During the experience there would be both parents and children,
and after the game ends you would have some new very close bonds with some
of the minds you met and knew within that game.  Spending 70 years on a game
would be nothing when you could live for trillions of years.



> Or should we include a 'mind-only' restriction and shrink away the
> sex-related part of life and eliminate the sex-related organs? Would it be
> worth the survival?  similarly: if our mentality can produce 'everything',
> how about food to enjoy? are we eliminating as well our metabolism - not to
> get unlimitedly fat?
>

In an uploaded state you could spend all day eating from an unlimited buffet
of any food you could think of (and more) and get neither full nor fat.  In
the end it is just firings of your neurons (artificial or otherwise) and if
uploaded, that would be all there is to you, there would be no metabolism,
and no additional resources would be sacrificed to provide the experience of
eating that food.


> Thinking in wider domains of the suggested utopia brings up points beyond
> nixing the Pepsi or Coke stocks.
>
> I'd rather limit my unlimited capabilities and have a beer.
>
>

Not a bad choice :-)

Jason




Was:Singularity - Re: Intelligence

2010-04-11 Thread John Mikes
On Fri, Apr 9, 2010 at 9:40 AM, Skeletori  wrote:
> My hope and wish is that by this time, wealth and the economy as we know
it
> will be obsolete.  In a virtual world, where anyone can do or experience
> anything, and everyone is immortal and perfectly healthy, the only
commodity
> would be the creativity to generate new ideas and experiences.  (I highly

> recommend reading this page to see what such an existence could be:
> http://frombob.to/you/aconvers.html ; this one is also interesting:
> http://www.marshallbrain.com/discard1.htm).  If anyone can in the comfort
> of their own virtual house experience drinking a soda, what need would
there
> be for Pepsi or Coke to exist as companies?

Before bankrupting big companies, we may take a look at ourselves
(humanity?) in the situation of being immortal, healthy
with unlimited creativity (in facto). Does it include sex? should we include
'having babies' (the ultimate happiness)? In which case humanity would
proliferate at an even higher level than now, all of them enjoying sex and
proliferation? Or should we include a 'mind-only' restriction and shrink
away the sex-related part of life and eliminate the sex-related
organs? Would it be worth the survival?  Similarly: if our mentality can
produce 'everything', how about food to enjoy? Are we eliminating our
metabolism as well, so as not to get unlimitedly fat? Thinking in wider
domains of the suggested utopia brings up points beyond nixing the Pepsi or
Coke stocks.

I'd rather limit my unlimited capabilities and have a beer.

John Mikes
