Re: Simple proof that our intelligence transcends that of computers

2012-09-13 Thread benjayk


Bruno Marchal wrote:
 
 
 Some embeddings that could be represented by these number relations could
 prove utter nonsense. For example, if you interpret 166568 to mean "!="
 or "^6" instead of "=", the whole proof is nonsense.
 
 Sure, and if I interpret the soap for a pope, I can be in trouble.  
Right, but that's exactly what Gödel is doing. "11132" does not mean "="
any more than "soap" means "pope", except if artificially defined. But even
then the meaning/proof is in the decoding, not in "11132" or "soap".
If we just take Gödel to make a statement about what encodings together with
decodings can express, he is right; we can encode "pope" with "soap" as well,
but this shows something about our encodings, not about what we use to do
it.


Bruno Marchal wrote:
 
 That is why we fix a non ambiguous embedding once and for all.
How, using only arithmetic?


Bruno Marchal wrote:
 

 Thus Gödel's proof necessarily needs a meta-level,
 
 Yes. The point is that the metalevel can be embedded non-ambiguously  
 in a faithful manner in arithmetic.
 It is the heart of theoretical computer science. You really should  
 study the subject.
You should stop studying and actually start to question the
validity of what you are studying ;)
Sorry, I just had to say that, now that you have made that remark numerous times.
It is like saying "You should really study the Bible to understand why
Christianity is right."
Studying the Bible in detail will not reveal the flaw unless you are willing
to question it (and then studying it becomes relatively superfluous).


Bruno Marchal wrote:
 
 

 I don't see how any explanation of Gödel could even adress the  
 problem.
 
 You created a problem which is not there.
Nope. You try to talk away a problem that is there.



Bruno Marchal wrote:
 
 It
 seems to be very fundamental to the idea of the proof itself, not  
 the proof
 as such. Maybe you can explain how to solve it?

 But please don't say that we can embed the process of assigning Gödel
 numbers in arithmetic itself.
 
 ?
 
 a number like s(s(0)) can have its description, be 2^'s' * 3^(... ,  
 which will give a very big number, s(s(s(s(s(s(s(s(s(s(s(s...  
 (s(s(s(0...))). That correspondence will be defined in  
 terms of addition, multiplication, logical symbols, and equality.
I don't see what your reply has to do with my remark. In fact, it just
demonstrates that you ignore it. How do you do this embedding without a
meta-language (like the one you just used by saying 'have its description' -
there is no such axiom in arithmetic)?
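To make the coding Bruno describes concrete, here is a minimal sketch of a prime-power Gödel numbering in Python. The symbol table below is an arbitrary illustrative choice, not Gödel's own assignment; the point it demonstrates is that once the table is fixed, decoding is unique by the fundamental theorem of arithmetic.

```python
# Minimal sketch of a prime-power Goedel coding (illustrative only;
# the symbol-to-number table is an arbitrary choice for this example).

def primes():
    """Yield primes 2, 3, 5, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

SYMBOLS = {'0': 1, 's': 2, '(': 3, ')': 4, '=': 5}  # arbitrary code table
DECODE = {v: k for k, v in SYMBOLS.items()}

def godel_number(expr):
    """Encode a symbol string as the product p_i ** code(symbol_i)."""
    g = 1
    for p, ch in zip(primes(), expr):
        g *= p ** SYMBOLS[ch]
    return g

def godel_decode(g):
    """Recover the string by reading off each prime's exponent in turn."""
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(DECODE[e])
    return ''.join(out)

n = godel_number('s(0)=s(0)')
assert godel_decode(n) == 's(0)=s(0)'  # decoding is unique once the table is fixed
```

The round trip works for any string over the table's symbols; a different table would yield a different (but equally unambiguous) coding, which is exactly the relativity being debated above.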


Bruno Marchal wrote:
 
 This would need another non-unique embedding
 of syntax, hence leading to the same problem (just worse).
 
 Not at all. You confuse the embedding and its description of the  
 embedding, and the description of the description, but you get this  
 trivially by using the Gödel number of a Gödel number.
Maybe actually show how I am wrong rather than just saying that I confuse
everything?


Bruno Marchal wrote:
 

 For more detail and further points about Gödel you may take a look  
 at this
 website: http://jamesrmeyer.com/godel_flaw.html
 
 
 And now you refer to a site pretending having found a flaw in Gödel's  
 proof. (sigh).
 You could tell me at the start that you believe Gödel was wrong.
I tried to be fair and admit that Gödel did prove something (about what
numbers can express together with a meta-level).
If you believe that Gödel proved something about arithmetics as separate
axiomatic systems, then the site clearly shows numerous critical flaws. It
is not pretending anything. It is clearly pointing out where the flaws lie
(and similar flaws in other related proofs). I haven't even seen any real
attempt to show how he is wrong. All responses amount to little more than
denial, argument from authority, or obfuscation.

The main reason people don't see the flaw is that they abstract so
much that they abstract away the error (but also the meaning of the proof),
and because they are dogmatic about authorities being right.
That's why studying will not help much. It just creates more abstraction,
further hiding the error.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34427624.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Why the Church-Turing thesis?

2012-09-12 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2012/9/11 Quentin Anciaux allco...@gmail.com
 


 2012/9/11 benjayk benjamin.jaku...@googlemail.com



 Quentin Anciaux-2 wrote:
 
  2012/9/11 benjayk benjamin.jaku...@googlemail.com
 
 
 
  Quentin Anciaux-2 wrote:
  
   2012/9/11 benjayk benjamin.jaku...@googlemail.com
  
  
  
   Quentin Anciaux-2 wrote:
   
2012/9/10 benjayk benjamin.jaku...@googlemail.com
   
   
   
  No program can determine its hardware.  This is a
 consequence
  of
   the
  Church
  Turing thesis.  The particular machine at the lowest level
 has
  no
 bearing
  (from the program's perspective).
 If that is true, we can show that CT must be false, because
 we
  *can*
 define
 a meta-program that has access to (part of) its own
 hardware
   (which
 still
 is intuitively computable - we can even implement it on a
  computer).

   
It's false, the program *can't* know that the hardware it has
  access
   to
is
the *real* hardware and not a simulated hardware. The program
 has
  only
access to hardware through IO, and it can't tell (as never
 ever)
  from
that
interface if what's outside is the *real* outside or simulated
   outside.
\quote
Yes that is true. If anything it is true because the hardware
 is
  not
   even
clearly determined at the base level (quantum uncertainty).
I should have expressed myself more accurately and written 
   hardware
   
or
relative 'hardware'. We can define a (meta-)programs that
 have
   access
to
their hardware in the sense of knowing what they are running
 on
relative
to some notion of hardware. They cannot be emulated using
  universal
turing
machines
   
   
Then it's not a program if it can't run on a universal turing
  machine.
   
   The funny thing is, it *can* run on a universal turing machine.
 Just
  that
   it
   may lose relative correctness if we do that.
  
  
   Then you must be wrong... I don't understand your point. If it's a
  program
   it has access to the outside through IO, hence it is impossible
 for a
   program to differentiate real outside from simulated outside...
 It's
  a
   simple fact, so either you're wrong or what you're describing is
 not
 a
   program, not an algorithm and not a computation.
  OK, it depends on what you mean by program. If you presume that a
  program
  can't access its hardware,
 
 
  I *do not presume it*... it's a *fact*.
 
 
 Well, I presented a model of a program that can do that (on some level,
 not
 on the level of physical hardware), and is a program in the most
 fundamental
 way (doing step-by-step execution of instructions).
 All you need is a program hierarchy where some programs have access to
 programs that are below them in the hierarchy (which are the hardware
 though not the *hardware*).


 What's your point ? How the simulated hardware would fail ? It's
 impossible, so until you're clearer (your point is totally fuzzy), I
 stick
 to you must be wrong.

 
 So either you assume some kind of oracle device, in this case, the thing
 you describe is no more a program, but a program + an oracle, the oracle
 obviously is not simulable on a turing machine, or an infinite regress of
 level.
 
 
The simulated hardware can't fail in the model, just like a turing machine
can't fail. Of course in reality it can fail, but that is beside the point.

You are right, my explanation is not that clear, because it is quite a
subtle thing.

Maybe I shouldn't have used the word hardware. The point is just that we
can define (meta-)programs that have access to some aspect of the programs
that are below them in the program hierarchy (normal programs), an aspect that
can't be accessed by those programs themselves. They can't be emulated in
general, because sometimes the emulating program will necessarily emulate the
wrong level (because it can't correctly emulate that the meta-program is
accessing what it is *actually* doing on the most fundamental level).
They still are programs in the most fundamental sense.

They don't require oracles or anything else that is hard to actually use;
they just have to know something about the hierarchy and the programs
involved (which programs or kinds of programs are above or below them) and
have access to the states of other programs. Both are perfectly implementable
on a normal computer. They can even be implemented on a turing machine, but
not in general. They can also be simulated on turing machines, just not
necessarily correctly (the turing machine may incorrectly ignore which level
it is operating on relative to the meta-program).

We can still argue that these aren't programs in every sense, but I think
what is executable on a normal computer can be rightfully called a program.
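For what it's worth, the level problem being debated here can be sketched as a toy model (purely illustrative; the names `program`, `real_hardware`, and `simulated_hardware` are invented for this sketch, not anyone's formal argument): a program that queries its environment only through an interface gets identical answers whether it runs directly or under any number of faithful simulation layers.

```python
# Toy model of the level problem discussed above (illustrative only).
# A "program" can only query its environment through an interface; a
# simulating layer can forward those queries unchanged, so the program's
# observations are identical at every level.

def program(env):
    """A program that tries to learn what it runs on via its IO interface."""
    return env("what-hardware?")

def real_hardware(query):
    return "x86" if query == "what-hardware?" else None

def simulated_hardware(inner):
    """A simulation layer that forwards queries to the layer below."""
    def env(query):
        return inner(query)  # faithful forwarding: nothing observable changes
    return env

direct = program(real_hardware)
nested = program(simulated_hardware(simulated_hardware(real_hardware)))
assert direct == nested == "x86"  # the program cannot tell the levels apart
```

The disagreement in the thread is precisely over whether a "meta-program" could be given anything more than such an interface; this sketch only shows why, through the interface alone, the levels are indistinguishable.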

benjayk
-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34423089.html
Sent from the Everything List mailing list archive at Nabble.com.


Re: Simple proof that our intelligence transcends that of computers

2012-09-12 Thread benjayk


Bruno Marchal wrote:
 
 
 On 11 Sep 2012, at 12:39, benjayk wrote:
 

 Our discussion is going nowhere. You don't see my points and assume  
 I want to
 attack you (and thus are defensive and not open to my criticism),  
 and I am
 obviously frustrated by that, which is not conducive to a good  
 discussion.

 We are not operating on the same level. You argue using rational,  
 precise
 arguments, while I am precisely showing how these don't settle or even
 address the issue.
 Like with Gödel, sure we can embed all the meta in arithmetic, but  
 then we
 still need a super-meta (etc...).
 
 I don't think so. We need the understanding of elementary arithmetic,  
 no need of meta for that.
 You might confuse the simple truth 1+1=2, and the complex truth  
 Paul understood that 1+1=2. Those are very different, but with comp,  
 both can be explained *entirely* in arithmetic. You have the right to  
 be astonished, as this is not obvious at all, and rather counter- 
 intuitive.
 
 There is no proof that can change this,
 and thus it is pointless to study proofs regarding this issue (as  
 they just
 introduce new metas because their proof is not written in arithmetic).
 
 But they are. I think sincerely that you miss Gödel's proof. There  
 will be opportunity I say more on this, here, or on the FOAR list. It  
 is hard to sum up on few lines. May just buy the book by Davis (now  
 print by Dover) The undecidable, it contains all original papers by  
 Gödel, Post, Turing, Church, Kleene, and Rosser.
 
Sorry, but this shows that you miss my point. It is not about some subtle
aspect of Gödel's proof, but about the main idea. And I think I understand
the main idea quite well.

If Gödel's proof were written purely in arithmetic, then it could not be
unambiguous, and thus not really a proof. The embedding is not unique, and
thus by looking at the arithmetic alone you can't have an unambiguous proof.
Some embeddings that could be represented by these number relations could
prove utter nonsense. For example, if you interpret 166568 to mean "!=" or
"^6" instead of "=", the whole proof is nonsense.

Thus Gödel's proof necessarily needs a meta-level, or alternatively a
level-transcendent intelligence (I forgot that in my prior post), to be true,
because only then can we fix the meaning of the Gödel numbers.
You can, of course, *believe* that the numbers really exist beyond their
axioms and possess this transcendent intelligence, so that they somehow
magically know what they are really representing. But this is just a
belief, and you can't show that this is true, nor take it for granted that
others share this assumption.

I don't see how any explanation of Gödel could even address the problem. It
seems to be very fundamental to the idea of the proof itself, not the proof
as such. Maybe you can explain how to solve it?

But please don't say that we can embed the process of assigning Gödel
numbers in arithmetic itself. This would need another non-unique embedding
of syntax, hence leading to the same problem (just worse).

For more detail and further points about Gödel you may take a look at this
website: http://jamesrmeyer.com/godel_flaw.html

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34423214.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-09-12 Thread benjayk


Platonist Guitar Cowboy wrote:
 
 On Wed, Sep 12, 2012 at 2:05 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Bruno Marchal wrote:
 
 
  On 11 Sep 2012, at 12:39, benjayk wrote:
 
 
  Our discussion is going nowhere. You don't see my points and assume
  I want to
  attack you (and thus are defensive and not open to my criticism),
  and I am
  obviously frustrated by that, which is not conducive to a good
  discussion.
 
  We are not operating on the same level. You argue using rational,
  precise
  arguments, while I am precisely showing how these don't settle or even
  address the issue.
  Like with Gödel, sure we can embed all the meta in arithmetic, but
  then we
  still need a super-meta (etc...).
 
  I don't think so. We need the understanding of elementary arithmetic,
  no need of meta for that.
  You might confuse the simple truth 1+1=2, and the complex truth
  Paul understood that 1+1=2. Those are very different, but with comp,
  both can be explained *entirely* in arithmetic. You have the right to
  be astonished, as this is not obvious at all, and rather counter-
  intuitive.
 
  There is no proof that can change this,
  and thus it is pointless to study proofs regarding this issue (as
  they just
  introduce new metas because their proof is not written in arithmetic).
 
  But they are. I think sincerely that you miss Gödel's proof. There
  will be opportunity I say more on this, here, or on the FOAR list. It
  is hard to sum up on few lines. May just buy the book by Davis (now
  print by Dover) The undecidable, it contains all original papers by
  Gödel, Post, Turing, Church, Kleene, and Rosser.
 
 Sorry, but this shows that you miss my point. It is not about some subtle
 aspect of Gödel's proof, but about the main idea. And I think I
 understand
 the main idea quite well.

 If Gödel's proof were written purely in arithmetic, then it could not be
 unambiguous, and thus not really a proof. The embedding is not unique, and
 thus by looking at the arithmetic alone you can't have an unambiguous
 proof.
 Some embeddings that could be represented by these number relations could
 prove utter nonsense. For example, if you interpret 166568 to mean "!="
 or "^6" instead of "=", the whole proof is nonsense.

 Thus Gödel's proof necessarily needs a meta-level, or alternatively a
 level-transcendent intelligence (I forgot that in my prior post) to be
 true,
 because only then can we fix the meaning of the Gödel numbers.
 You can, of course, *believe* that the numbers really exist beyond their
 axioms and possess this transcendent intelligence, so that they somehow
 magically know what they are really representing. But this is just a
 belief, and you can't show that this is true, nor take it for granted
 that
 others share this assumption.

 
 
 Problem of pinning down real representation in itself aside: can humans
 prove to an impartial observer that they magically know what they are
 really representing, or that they really understand?
 
 How would we prove this? Why should I take for granted that humans do this,
 other than legitimacy through naturalized social norms, which really don't
 have that great a track record?
 
Can we even literally prove anything apart from axiomatic systems at all? I
don't think so. What would we base the claim that something really is a
proof on?
The notion of proving seems to be a quite narrow and restricted one to me.

Apart from that, it seems human understanding is just delusion in many
cases, and the rest is very limited at best. Especially when we think we
really understand fundamental issues we are the most deluded.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34425351.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Why the Church-Turing thesis?

2012-09-11 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2012/9/10 benjayk benjamin.jaku...@googlemail.com
 


   No program can determine its hardware.  This is a consequence of the
   Church
   Turing thesis.  The particular machine at the lowest level has no
  bearing
   (from the program's perspective).
  If that is true, we can show that CT must be false, because we *can*
  define
  a meta-program that has access to (part of) its own hardware (which
  still
  is intuitively computable - we can even implement it on a computer).
 

 It's false, the program *can't* know that the hardware it has access to
 is
 the *real* hardware and not a simulated hardware. The program has only
 access to hardware through IO, and it can't tell (as never ever) from
 that
 interface if what's outside is the *real* outside or simulated outside.
 \quote
 Yes that is true. If anything it is true because the hardware is not even
 clearly determined at the base level (quantum uncertainty).
 I should have expressed myself more accurately and written  hardware 
 or
 relative 'hardware'. We can define a (meta-)programs that have access
 to
 their hardware in the sense of knowing what they are running on
 relative
 to some notion of hardware. They cannot be emulated using universal
 turing
 machines
 
 
 Then it's not a program if it can't run on a universal turing machine.
 
The funny thing is, it *can* run on a universal turing machine. Just that it
may lose relative correctness if we do that. We can still use a turing
machine to run it and interpret what the result means.

So for all intents and purposes it is quite like a program. Maybe not a
program as such, OK, but it certainly can be used precisely in a
step-by-step manner, and I think this is what the CT thesis means by
"algorithmically computable".
Maybe not, but in this case CT is just a statement about specific forms of
algorithms.
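As a concrete reading of "step-by-step manner", here is a minimal Turing-machine stepper (a sketch; the three-state example machine below is invented for illustration and simply writes three 1s and halts):

```python
# A minimal Turing-machine stepper, to make "step-by-step execution"
# concrete. The example machine is an invented illustration.

def run_tm(rules, tape, state="q0", head=0, max_steps=1000):
    """rules: (state, symbol) -> (write, move, next_state); move is -1 or +1."""
    cells = dict(enumerate(tape))          # sparse tape, blank symbol is 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, 0))]
        cells[head] = write                # one instruction per step
        head += move
    return [cells[i] for i in sorted(cells)]

# Example machine: write three 1s moving right, then halt.
rules = {
    ("q0", 0): (1, +1, "q1"),
    ("q1", 0): (1, +1, "q2"),
    ("q2", 0): (1, +1, "halt"),
}
assert run_tm(rules, [0, 0, 0]) == [1, 1, 1]
```

Anything expressible as such a rule table is uncontroversially a program; the dispute above is whether "access to one's own level" can be folded into this picture.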

-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34417440.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-09-11 Thread benjayk

Our discussion is going nowhere. You don't see my points and assume I want to
attack you (and thus are defensive and not open to my criticism), and I am
obviously frustrated by that, which is not conducive to a good discussion.

We are not operating on the same level. You argue using rational, precise
arguments, while I am precisely showing how these don't settle or even
address the issue.
Like with Gödel, sure we can embed all the meta in arithmetic, but then we
still need a super-meta (etc...). There is no proof that can change this,
and thus it is pointless to study proofs regarding this issue (as they just
introduce new metas because their proof is not written in arithmetic).

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34417635.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Why the Church-Turing thesis?

2012-09-11 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2012/9/11 benjayk benjamin.jaku...@googlemail.com
 


 Quentin Anciaux-2 wrote:
 
  2012/9/10 benjayk benjamin.jaku...@googlemail.com
 
 
 
No program can determine its hardware.  This is a consequence of
 the
Church
Turing thesis.  The particular machine at the lowest level has no
   bearing
(from the program's perspective).
   If that is true, we can show that CT must be false, because we *can*
   define
   a meta-program that has access to (part of) its own hardware
 (which
   still
   is intuitively computable - we can even implement it on a computer).
  
 
  It's false, the program *can't* know that the hardware it has access
 to
  is
  the *real* hardware and not a simulated hardware. The program has only
  access to hardware through IO, and it can't tell (as never ever) from
  that
  interface if what's outside is the *real* outside or simulated
 outside.
  \quote
  Yes that is true. If anything it is true because the hardware is not
 even
  clearly determined at the base level (quantum uncertainty).
  I should have expressed myself more accurately and written 
 hardware
 
  or
  relative 'hardware'. We can define a (meta-)programs that have
 access
  to
  their hardware in the sense of knowing what they are running on
  relative
  to some notion of hardware. They cannot be emulated using universal
  turing
  machines
 
 
  Then it's not a program if it can't run on a universal turing machine.
 
 The funny thing is, it *can* run on a universal turing machine. Just that
 it
 may lose relative correctness if we do that.
 
 
 Then you must be wrong... I don't understand your point. If it's a program
 it has access to the outside through IO, hence it is impossible for a
 program to differentiate real outside from simulated outside... It's a
 simple fact, so either you're wrong or what you're describing is not a
 program, not an algorithm and not a computation.
OK, it depends on what you mean by "program". If you presume that a program
can't access its hardware, then what I am describing is indeed not a
program.

But most definitions don't preclude that. Carrying out instructions
precisely and step-by-step can be done with or without access to your
hardware.

Anyway, meta-programs can be instantiated using a real computer (a program
can, in principle, know and utilize part of a more basic computational layer
if programmed correctly), so we at least know that real computers are beyond
turing machines.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34417676.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Why the Church-Turing thesis?

2012-09-11 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2012/9/11 benjayk benjamin.jaku...@googlemail.com
 


 Quentin Anciaux-2 wrote:
 
  2012/9/11 benjayk benjamin.jaku...@googlemail.com
 
 
 
  Quentin Anciaux-2 wrote:
  
   2012/9/10 benjayk benjamin.jaku...@googlemail.com
  
  
  
 No program can determine its hardware.  This is a consequence
 of
  the
 Church
 Turing thesis.  The particular machine at the lowest level has
 no
bearing
 (from the program's perspective).
If that is true, we can show that CT must be false, because we
 *can*
define
a meta-program that has access to (part of) its own hardware
  (which
still
is intuitively computable - we can even implement it on a
 computer).
   
  
   It's false, the program *can't* know that the hardware it has
 access
  to
   is
   the *real* hardware and not a simulated hardware. The program has
 only
   access to hardware through IO, and it can't tell (as never ever)
 from
   that
   interface if what's outside is the *real* outside or simulated
  outside.
   \quote
   Yes that is true. If anything it is true because the hardware is
 not
  even
   clearly determined at the base level (quantum uncertainty).
   I should have expressed myself more accurately and written 
  hardware
  
   or
   relative 'hardware'. We can define a (meta-)programs that have
  access
   to
   their hardware in the sense of knowing what they are running on
   relative
   to some notion of hardware. They cannot be emulated using
 universal
   turing
   machines
  
  
   Then it's not a program if it can't run on a universal turing
 machine.
  
  The funny thing is, it *can* run on a universal turing machine. Just
 that
  it
  may lose relative correctness if we do that.
 
 
  Then you must be wrong... I don't understand your point. If it's a
 program
  it has access to the outside through IO, hence it is impossible for a
  program to differentiate real outside from simulated outside... It's
 a
  simple fact, so either you're wrong or what you're describing is not a
  program, not an algorithm and not a computation.
 OK, it depends on what you mean by program. If you presume that a
 program
 can't access its hardware,
 
 
 I *do not presume it*... it's a *fact*.
 
 
Well, I presented a model of a program that can do that (on some level, not
on the level of physical hardware), and that is a program in the most
fundamental way (doing step-by-step execution of instructions).
All you need is a program hierarchy where some programs have access to
programs that are below them in the hierarchy (which are the "hardware",
though not the *hardware*).

So apparently it is not so much a fact about programs in a common-sense way,
but about a narrow conception of what programs can be.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34417762.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-10 Thread benjayk


Bruno Marchal wrote:
 
 
 On 08 Sep 2012, at 16:08, benjayk wrote:
 


 Bruno Marchal wrote:


 On 07 Sep 2012, at 14:22, benjayk wrote:



 Bruno Marchal wrote:


 On 06 Sep 2012, at 13:31, benjayk wrote:

 Quantum effects beyond individual brains (suggested by psi)  
 can't be
 computed as well: No matter what I compute in my brain, this  
 doesn't
 entangle it with other brains since computation is classical.

 The UD emulates all quantum computer, as they do not violate Church
 Thesis.
 I am not talking about quantum computers, which are not entangled
 with their
 surroundings.
 I am talking about systems that are entangled to other systems.

 This is just lowering the comp level of substitution. It does not
 change the reasoning, thanks to the use of the notion of generalized
 brain.
 It does, because you can't simulate indefinite entanglement. No  
 matter how
 many entangled systems you simulate, you are always missing the  
 entanglement
 of this combined system to another (which may be as crucial as the  
 system
 itself, because it may lead to a very different unfoldment of events).
 
 To use this argument, you need to postulate that the physical universe  
 exists and is describe by a quantum garden of Eden, that is a infinite  
 quantum pattern, and that *you* are that pattern.
 In that case, you are just working in a different theory than the comp  
 theory, and are out of the scope of my expertize. But then develop  
 your theory.
Nope. I am not saying that is the case (though I do believe that such
entanglement exists), I am just saying that COMP does not exclude that
possibility. Whether or not some digital substitution exists, what is
required to correctly implement it (which also is part of yourself) may
itself not be emulable in the sense that your reasoning requires.
I remind you, COMP does not say we are digital; it says that a correctly
implemented digital substitution may substitute for my current brain/body. It
does not say that this can't require some non-digital component (you are
still getting an artificial brain/body).

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34413398.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-09-10 Thread benjayk


Bruno Marchal wrote:
 
 
 On 08 Sep 2012, at 15:47, benjayk wrote:
 


 Bruno Marchal wrote:

 even though the paper actually
 doesn't even begin to adress the question.

 Which question? The paper mainly just formulate a question, shows how
 comp makes it possible to translate the question in math, and show
 that the general shape of the possible solution is more close to  
 Plato
 than to Aristotle.
 The problem is that the paper is taking the most fundamental issue for
 granted,
 
 Absolutely not. I am open that UDA could lead to a refutation of comp,  
 either purely logical, or by the possible testing it implies.
 My opinion on the truth or falsity of comp is private, and to be  
 honest, varying.
 
 You want me to be more than what I am. A logician. Not a philosopher.  
 It is simply not my job.
OK, but if you are solely a logician, you should concern yourself with
logical proofs. You don't even define the assumption of your paper in a
(theoretically speaking) logical way and your proof contains many
philosophical reasonings.
Especially step 8, which is critical in your reasoning. It uses Occam's
razor (which is philosophical, and not necessarily valid in any mathematical
or logical context), you use appeals to absurdity (with regard to arbitrary
inner experience being associated with null physical activity), and you use
additional philosophical assumptions (you assume materialist mechanism
cannot mean that physical computations are not *exactly* like abstract
digital computations, just enough to make a practically digital substitution
possible),...

So take my criticism to mean that your proof is simply not what you present
it as: somehow beyond philosophy (which is always on some shaky ground).
This is what I perceive as slightly dishonest, because it allows you to
retreat from the actual point by demanding to be given a precise refutation
or a specific error (as required in logic or math). But your paper is
philosophical, and here this logic does not apply.

If you'd admit that, I'd be perfectly happy with your paper. It does show
something, just not rigorously, not necessarily, and not for everyone
(some may rightfully disagree with your reasoning for philosophical
reasons which can't be proven or precisely stated).

If someone believes that physics behaves perfectly like abstract
computations would, and if he doesn't want to invoke some very mysterious
form of matter (which does not rely on how it behaves, nor on how it
feels or is perceived to be) to sidestep the problem, then yes, your paper
may indeed show that this does not make much sense.
Unfortunately most materialists do actually believe (perhaps unconsciously)
in some very mysterious and strange (and IMO meaningless) kind of matter, so
they won't be convinced by your paper.


Bruno Marchal wrote:
 
 
 (kinda digital, digital at some level are not enough for a strict
 reasoning).

 You also say that a 1p view can be recovered by incompleteness, but  
 actually
 you always present *abstractions* of points of view, not the point  
 of view.
 
 What could that mean? How could any theory present a point of view?
 I think you are confusing level. You could as well mock the quantum  
 analysis of the hydrogen atom as ridiculous because the theory cannot  
 react with real oxygen.
That's the point. A theory cannot conceivably present an actual point of
view. But then your theory just derives something which you call a point of
view, which in fact may have little to do with the actual point of
view.
QM does not claim to show how a hydrogen atom leads to a real reaction of
oxygen, it just describes it.
To make it coherent, you would have to weaken your statement to we can
derive some description of points of view, or we can show how some
description of points of view emerge from arithmetics, which I will happily
agree with. However, this would destroy your main point that arithmetics and
its point of view is enough as the ontology / epistemology (we need the
*actual* point of view).


Bruno Marchal wrote:
 


 Bruno Marchal wrote:

 How am I supposed to argue with
 that?

 There is no point of studying Gödel if we have a false assumption
 about what
 the proof even is about. It is never, at no point, about numbers as
 axiomatic systems. It is just about what we can express with them  
 on a
 meta-level.

 On the contrary. The whole Gödel's thing relies on the fact that the
 meta-level can be embedded at the level.
 Feferman fundamental papers extending Gödel is arithmetization of
 metamathematics. It is the main point: the meta can be done at the
 lower level. Machines can refer to themselves in the 3p way, and by
 using the Theatetus' definition we get a notion of 1p which provides
 some light on the 1//3 issue.
 But Gödel does not show this. The meta-level can only be embedded at  
 that
 level on the *meta-level*.
 
 This is just false.
Sorry, I meant on *a* meta-level (not the meta-level that can be embedded,
obviously).

Re: Why the Church-Turing thesis?

2012-09-10 Thread benjayk


  No program can determine its hardware.  This is a consequence of the
  Church
  Turing thesis.  The particular machine at the lowest level has no
 bearing
  (from the program's perspective).
 If that is true, we can show that CT must be false, because we *can*
 define
 a meta-program that has access to (part of) its own hardware (which
 still
 is intuitively computable - we can even implement it on a computer).


 It's false, the program *can't* know that the hardware it has access to is
 the *real* hardware and not simulated hardware. The program has only
 access to hardware through IO, and it can't ever tell from that
 interface if what's outside is the *real* outside or a simulated outside.
Yes, that is true. If anything it is true because the hardware is not even
clearly determined at the base level (quantum uncertainty).
I should have expressed myself more accurately and written "hardware" or
relative 'hardware'. We can define (meta-)programs that have access to
their hardware in the sense of knowing what they are running on relative
to some notion of hardware. They cannot be emulated using universal Turing
machines (in general - in specific instances, where the hardware is fixed on
the right level, they might be). They can be simulated, though, but in this
case the simulation may be incorrect in the given context and we have to put
it into the right context to see what it is actually emulating (not the
meta-program itself, just its behaviour relative to some other context). 
 
We can also define an infinite hierarchy of meta-meta-...-programs (n
metas) to show that there is no universal notion of computation at all.
There is always a notion of computation that is more powerful than the
current one, because it can reflect more deeply upon its own hardware.

See my post concerning meta-programs for further details.
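The indistinguishability point above can be sketched in a few lines of Python. This is my own hypothetical illustration (the names `HardwareInterface`, `SimulatedHardware`, `probe` are not from the discussion): a program's entire access to its hardware goes through an interface, so an interface that answers consistently like the real one cannot be told apart from it.

```python
# Hypothetical sketch: a program reaches its "hardware" only through an
# I/O interface, so a consistent simulation is indistinguishable.

class HardwareInterface:
    """Whatever the program can ask of its 'hardware'."""
    def cpu_name(self) -> str:
        return "real-cpu"

class SimulatedHardware(HardwareInterface):
    """A simulation that answers exactly like the real interface."""
    def cpu_name(self) -> str:
        return "real-cpu"  # answers consistently; the program cannot detect the difference

def probe(hw: HardwareInterface) -> str:
    # The program's entire access to hardware goes through `hw`.
    return hw.cpu_name()

# Both probes return the same answer, so no test the program itself can run
# distinguishes real from simulated hardware.
assert probe(HardwareInterface()) == probe(SimulatedHardware())
```

Any distinguishing test would have to go outside the interface, which is exactly what the program, by construction, cannot do.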
-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34413719.html



Re: Simple proof that our intelligence transcends that of computers

2012-09-08 Thread benjayk


Bruno Marchal wrote:
 
 even though the paper actually
 doesn't even begin to address the question.
 
 Which question? The paper mainly just formulate a question, shows how  
 comp makes it possible to translate the question in math, and show  
 that the general shape of the possible solution is more close to Plato  
 than to Aristotle.
The problem is that the paper is taking the most fundamental issue for
granted, and it does not actually show anything if the main assumption is
not true and at the end presents a conclusion that is mainly just what is
being taken for granted (we are abstractly digital, and computations can
lead to a 1p point of view).

You say assuming COMP, but COMP is either impossible with respect to its
own conclusion (truly, purely digital substitutions are not possible due to
matter being non-digital), or it is too vague to allow for any conclusion
(kinda digital, digital at some level are not enough for a strict
reasoning).

You also say that a 1p view can be recovered by incompleteness, but actually
you always present *abstractions* of points of view, not the point of view.


Bruno Marchal wrote:
 
 How am I supposed to argue with
 that?

 There is no point of studying Gödel if we have a false assumption  
 about what
 the proof even is about. It is never, at no point, about numbers as
 axiomatic systems. It is just about what we can express with them on a
 meta-level.
 
 On the contrary. The whole Gödel's thing relies on the fact that the  
 meta-level can be embedded at the level.
 Feferman fundamental papers extending Gödel is arithmetization of  
 metamathematics. It is the main point: the meta can be done at the  
 lower level. Machines can refer to themselves in the 3p way, and by  
 using the Theatetus' definition we get a notion of 1p which provides  
 some light on the 1//3 issue.
But Gödel does not show this. The meta-level can only be embedded at that
level on the *meta-level*. Apart from this level, we can't even formulate
representation or embedding (using the axioms of N - except on another
meta-level).

You act like Gödel eliminates the meta-level, but he does not do this and
indeed the notion of doing that doesn't make sense (because otherwise the
whole reasoning ceases to make sense).


Bruno Marchal wrote:
 
 You just use fancy words to obfuscate that.
 It is like saying "study the bible for scientific education" (you just
 don't understand how it addresses scientific questions yet).
 
 No reason to be angry. It is the second time you make an ad hominem  
 remark. I try to ignore that.
I am not angry, just a little frustrated that you don't see how you ignore
the main issue (both in our discussions and in your paper), while acting like
you are only showing rational consequences of some belief.

I have said nothing about you, actually you seem to be a genuine, open and
nice person to me. I am just being honest about what you appear to be doing
in your paper and on this list. It is probably not even intentional at all.
So, sorry if I offended you, but I'd rather be frank than argue with your
points, which don't even address the issue (which is what I perceive as
obfuscation).


Bruno Marchal wrote:
 
  I work in a theory and I do my best to  
 help making things clear. You don't like comp, but the liking or not  
 is another topic.
Well, I am not saying you're being *intentionally* misleading or avoiding, but
it certainly appears to me that you are avoiding the issue - perhaps because
you just don't see it.
You are defending your reasoning, while always avoiding the main point that
your reasoning either depends on unstated assumptions (we are already
digital, or only the digital part of a substitution can matter), or relies on
a vague (practically digital substitution) or contradictory (purely digital
substitution, which is not possible, because purely digital is nonsense with
regard to matter) premise.
The same goes for the derivation of points of view. You just derive
abstractions, while not addressing that abstractions of points of view don't
necessarily have anything to do with an actual point of view (thus confusing
your reader, who thinks that you actually showed a relation between
*actual* points of view and arithmetics).

It doesn't matter whether I like COMP or not. I don't find it a very
fruitful assumption, but that's not the issue.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34406752.html



Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-08 Thread benjayk


Bruno Marchal wrote:
 
 
 On 07 Sep 2012, at 14:22, benjayk wrote:
 


 Bruno Marchal wrote:


 On 06 Sep 2012, at 13:31, benjayk wrote:

 Quantum effects beyond individual brains (suggested by psi) can't be
 computed as well: No matter what I compute in my brain, this doesn't
 entangle it with other brains since computation is classical.

 The UD emulates all quantum computer, as they do not violate Church
 Thesis.
 I am not talking about quantum computers, which are not entangled  
 with their
 surroundings.
 I am talking about systems that are entangled to other systems.
 
 This is just lowering the comp level of substitution. It does not  
 change the reasoning, thanks to the use of the notion of generalized  
 brain.
It does, because you can't simulate indefinite entanglement. No matter how
many entangled systems you simulate, you are always missing the entanglement
of this combined system with another (which may be as crucial as the system
itself, because it may lead to a very different unfolding of events).
A practically digital substitution (which is assumed in COMP) could be
entangled with its surroundings, which may be very different than the
entanglement of a brain (or a generalized brain) with its surroundings. The
substitution may not only fail because the person itself is not preserved,
but also because the world was not preserved (the person would certainly
complain to the doctor if the world suddenly is substantially different - if
there is still a doctor left, that is).
And if you say that we can simulate this entanglement as well, the
entanglement of this system with outside systems may again render the
emulation incorrect from a broader view (etc...). At every
step the emulation may actually become more false, because more of the
multiverse/universe is changed.

We can argue that all these things may not be relevant (though I think they
are), but in any case it makes the reasoning shaky.


Bruno Marchal wrote:
 

 No matter how good your simulation is, it is never going to change its
 surroundings without using I/O.
 
 QM does not allows this, unless you bring by the collapse of the wave.
Clearly QM does allow that a measurement on one object changes another
object (we can argue about the word change, because the effect is
non-causal). This is even experimentally verified.
MW doesn't change this; it is the same with regard to correlations between
classically non-interacting objects.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34406812.html



Re: Why the Church-Turing thesis?

2012-09-08 Thread benjayk
 level?

 
 By lowest level I mean the raw hardware.  At the lowest level your
 computer's memory can only represent 2 states, often labeled '1' and '0'.
 But at the higher levels built upon this, you can have programs with much
 larger symbol sets.
 
 Maybe this is the source of our confusion and disagreement?
Yes, it seems like it. You say that the higher levels are contained in the
lower level, while I argue that they are clearly not, though they may be
relative to a representational meta-level (but only because we use the low
levels in the right way - which is a big feat in itself).


Jason Resch-2 wrote:
 
  The computer (any computer) can do the interpretation for us.  You can
  enter a description of the machine at one point in time, and the state
 of
  the machine at another time, and ask the computer is this the state the
  machine will be in N steps from now.  Where 0 is no and 1 is yes, or A
 is
  no and B is yes, or X is no and Y is yes.  Whatever symbols it might
 use,
  any computer can be setup to answer questions about any other machine
 in
  this way.
 The computer will just output zeroes and ones, and the screen will
 convert
 this into pixels. Without your interpretation the pixels (and thus the
 answers) are meaningless.

 
 When things make a difference, they aren't meaningless.  The register
 containing a value representing a plane's altitude isn't meaningless to
 the
 autopilot program, nor to those on board.
Right, but it is meaningless on the level we are speaking about. If you use
a Turing machine to emulate another, more complex one, then its output is
meaningless until you interpret it the right way.


Jason Resch-2 wrote:
 
 If you don't know how to encode and decode the symbols (ie interpret them
 on
 a higher level than the level of the interpretation the machine is doing)
 the interpretation is useless.

 
 Useless to the one who failed to interpret them, but perhaps not
 generally.  If you were dropped off in a foreign land, your speech would
 be
 meaningless to others who heard you, but not to you, or others  who know
 how to interpret it.
Right. I am not objecting to this. But this is precisely why we can't ignore
the higher levels as being less important (or even irrelevant) than the low
level language / computation.
Unless we postulate some independent higher level, the lower levels don't
make sense in a high level context (like emulation only makes sense to some
observer that knows of the existence of different machines).


Jason Resch-2 wrote:
 
 
 We always have to have some information beforehand (though it may be
 implicit, without being communicated first). Otherwise every signal is
 useless because it could mean everything and nothing.


 How do infants learn language if they start with none?
Because they still have something, even though it is not a language in our
sense.
Of course we can get from no information to some information in some
relative realm.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34406957.html



Re: Why the Church-Turing thesis?

2012-09-08 Thread benjayk

As far as I see, we mostly agree on content. 

I just can't make sense of reducing computation to emulability.
For me the intuitive sense of computation is much richer than this.

But still, as I think about it, we can also create a model of computation
(in the sense of being intuitively computational and being implementable on
a computer) where there are computations that can't be emulated by a universal
Turing machine, using level-breaking languages (which explicitly refer to
what is being computed on the base level). I'll write another post on this.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34406986.html



A non turing-emulable meta-program

2012-09-08 Thread benjayk

OK, I found an example that quite clearly contradicts the CT thesis, unless we
considerably weaken it (to something weaker than emulability).

The concept is rather simple. We introduce a meta-program that can, in
addition to computing what a normal program does, reflect upon the
states of the program that is doing the normal computation.
For example, say we have a universal Turing machine that computes something
using the states 0 and 1. We can write a meta-program that does the
computation that the universal Turing machine is doing, but also checks
whether the states A or B have been used during the computation, and if
they have been used, it produces an error message. Of course, if we run the
program, it will not produce an error message.

Now suppose another universal Turing machine tries to emulate that
system, but uses the states A and B. If it emulates the system, it
will either produce an error message (which does not replicate the function
of the original program) or it will emulate the program incorrectly, by
acting as if the states used to do the computation are 0 and 1 (which
they aren't, so the emulation is incorrect).

Don't be confused, the meta-program is reflecting on which program is
actually doing the computation (which is well defined from its perspective),
not which is doing the computation in the emulation.

It can be argued that it is possible to emulate what the program *would* do
if another program was doing the computation. But the task is to emulate the
meta-program itself, not the meta-program in another context. So every
possible emulation we do using the UTM with states A and B is
counter-factual. It doesn't replicate the function of the meta-program, only
the function of the meta-program as it would act in another context.

Note that counterfactual emulation can still be used to make sense of the
meta-program on some level, but only by using the counterfactual emulation
and mentally putting it in the right context.

We can use the emulation on the wrong level B (using machine B) to get a
result that would be correct if the computation was implemented in another
manner (on level A / using machine A). If we want to have the correct
emulation on that level B, we just need to create another emulation C that
is wrong on its level, but is correct on level B, etc...

So to actually emulate the meta-program using UTMs we need to create an
unbounded number of counterfactual emulations and interpret them correctly (to
understand in which way and at which point the emulation is correct and in
which way it is not).
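The construction above can be sketched concretely. This is a minimal toy version of my own (the function name and the "flip the tape" computation are hypothetical choices, not part of the argument): the meta-program does a normal computation but also inspects which state symbols its implementation uses.

```python
# Toy sketch of the meta-program: a normal computation (flipping a tape of
# two-valued cells) plus reflection on which state symbols are in use.

def run_with_reflection(tape, states):
    """Flip every cell of `tape` between states[0] and states[1], but
    produce an error if the implementation's states include 'A' or 'B'."""
    if 'A' in states or 'B' in states:
        raise RuntimeError("forbidden states in use")
    flip = {states[0]: states[1], states[1]: states[0]}
    return [flip[c] for c in tape]

# The original machine, using states 0 and 1: no error.
print(run_with_reflection([0, 1, 1], [0, 1]))  # [1, 0, 0]

# A would-be emulation that really runs on states A and B faces the dilemma
# from the post: either it reports its states honestly (error message, so it
# does not replicate the original's behaviour) ...
try:
    run_with_reflection(['A', 'B', 'B'], ['A', 'B'])
except RuntimeError as e:
    print(e)  # forbidden states in use
# ... or it pretends its states are 0 and 1, misreporting what it is
# actually running on.
```

Whether this really escapes the CT thesis is of course the point under dispute in the thread: the standard reply is that the reflection is itself just more computation over an encoded self-description.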

benjayk
-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34407926.html



Re: Why the Church-Turing thesis?

2012-09-07 Thread benjayk


Bruno Marchal wrote:
 
 
 On 28 Aug 2012, at 21:57, benjayk wrote:
 

 It seems that the Church-Turing thesis, that states that an  
 universal turing
 machine can compute everything that is intuitively computable, has  
 near
 universal acceptance among computer scientists.
 
 Yes indeed. I think there are two strong arguments for this.
 
 The empirical one: all attempts to define the set of computable  
 functions have led to the same class of functions, and this despite  
 the quite independent path leading to the definitions (from Church  
 lambda terms, Post production systems, von Neumann machine, billiard  
 ball, combinators, cellular automata ... up to modular functor,  
 quantum topologies, quantum computers, etc.).
 
OK, now I understand it better. Apparently if we express a computation in
terms of a computable function we can always arrive at the same computable
function using a different computation on an arbitrary universal Turing
machine. That seems right to me.
 
But in this case I don't get why it is often claimed that the CT thesis claims
that all computations can be done by a universal Turing machine, not merely
that they lead to the same class of computable functions (if converted
appropriately).
The latter is a far weaker statement, since computable functions abstract
from many relevant things about the machine.
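The distinction drawn here, between sameness of computable function and sameness of computation, can be illustrated with a small example of my own (the function names are hypothetical): two processes that differ in their steps, yet realize the same function.

```python
# Two different computations realizing one computable function.

def add_iterative(a: int, b: int) -> int:
    # repeated successor: a step-by-step process quite unlike the one below
    for _ in range(b):
        a += 1
    return a

def add_builtin(a: int, b: int) -> int:
    # a single machine-level addition
    return a + b

# Extensionally equal (same computable function) ...
assert all(add_iterative(a, b) == add_builtin(a, b)
           for a in range(20) for b in range(20))
# ... yet as processes they differ in step count, intermediate states, etc.
# These are exactly the machine-dependent aspects that the notion of
# "computable function" abstracts away.
```

The equivalence results behind the CT thesis are stated at the level of the function computed, not at the level of the process computing it.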

And even this weaker statement doesn't seem true with regard to more
powerful models like super-recursive functions, as computable functions just
give finite results, while super-recursive machines can give
infinite/unlimited results.

 

Bruno Marchal wrote:
 
 The conceptual one: the class of computable functions is closed for  
 the most transcendental operation in math: diagonalization. This is  
 not the case for the notions of definability, provability,  
 cardinality, etc.
I don't really know what this means. Do you mean that there are just
countably many computations? If yes, what does this have to do with whether
all universal Turing machines are equivalent?



Bruno Marchal wrote:
 

 I really wonder why this is so, given that there are simple cases  
 where we
 can compute something that an abitrary turing machine can not  
 compute using
 a notion of computation that is not extraordinary at all (and quite  
 relevant
 in reality).
 For example, given you have a universal turing machine A that uses the
 alphabet {1,0} and a universal turing machine B that uses the alphabet
 {-1,0,1}.
 Now it is quite clear that the machine A cannot directly answer any
 questions that relates to -1. For example it cannot directly compute
 -1*-1=1. Machine A can only be used to use an encoded input value and
 encoded description of machine B, and give an output that is correct  
 given
 the right decoding scheme.
 But for me this already makes clear that machine A is less  
 computationally
 powerful than machine B.
 
 Church thesis concerns only the class of computable functions.
Hm, maybe the Wikipedia article is a bad one, since it mentioned computable
functions just as a means of explaining it, not as part of its definition.


Bruno Marchal wrote:
 
  The  alphabet used by the Turing machine, having 1, 2, or enumerable  
 alphabet does not change the class. If you dovetail on the works of 1  
 letter Turing machine, you will unavoidably emulate all Turing  
 machines on all finite and enumerable letters alphabets. This can be  
 proved. Nor does the number of tapes, and/or  parallelism change that  
 class.
 Of course, some machine can be very inefficient, but this, by  
 definition, does not concern Church thesis.
Even so, the CT thesis makes a claim about the equivalence of machines, not
merely their mutual emulability.
Why are two machines that can be used to emulate each other regarded as
equivalent?
In my view, there is a big difference between computing the same thing and
being able to emulate each other. Most importantly, emulation only makes
sense relative to another machine that is being emulated, and a correct
interpretation.
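The dependence of emulation on a fixed interpretation can be made concrete. Below is my own toy illustration (the encoding table and operation are hypothetical): a machine over the alphabet {-1, 0, 1} emulated on one that only has {0, 1}, via a fixed 2-bit encoding.

```python
# A {-1, 0, 1}-alphabet operation emulated on a {0, 1}-alphabet machine.
# The emulation is only "the same computation" relative to ENCODE/DECODE.

ENCODE = {-1: (1, 1), 0: (0, 0), 1: (0, 1)}
DECODE = {v: k for k, v in ENCODE.items()}

def negate_ternary(x: int) -> int:
    # machine B's operation: multiplication by -1
    return -x

def negate_on_binary(bits: tuple) -> tuple:
    # machine A's version: it only ever sees pairs of bits; this lookup
    # table says nothing about "negation" unless one knows the encoding
    table = {(1, 1): (0, 1), (0, 0): (0, 0), (0, 1): (1, 1)}
    return table[bits]

for x in (-1, 0, 1):
    assert DECODE[negate_on_binary(ENCODE[x])] == negate_ternary(x)
# Drop the ENCODE/DECODE convention and the step (1, 1) -> (0, 1) says
# nothing about -1 * -1 = 1 at all.
```

This is the sense in which the bit-level machine computes negation only relative to a decoding scheme supplied from outside.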

benjayk

-- 
View this message in context: 
http://old.nabble.com/Why-the-Church-Turing-thesis--tp34348236p34401986.html



Re: Why the Church-Turing thesis?

2012-09-07 Thread benjayk


Jason Resch-2 wrote:
 
 On Thu, Sep 6, 2012 at 12:47 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
  On Tue, Aug 28, 2012 at 2:57 PM, benjayk
  benjamin.jaku...@googlemail.comwrote:
 
 
  It seems that the Church-Turing thesis, that states that an universal
  turing
  machine can compute everything that is intuitively computable, has
 near
  universal acceptance among computer scientists.
 
  I really wonder why this is so, given that there are simple cases
 where
  we
  can compute something that an abitrary turing machine can not compute
  using
  a notion of computation that is not extraordinary at all (and quite
  relevant
  in reality).
  For example, given you have a universal turing machine A that uses the
  alphabet {1,0} and a universal turing machine B that uses the alphabet
  {-1,0,1}.
  Now it is quite clear that the machine A cannot directly answer any
  questions that relates to -1.

 
 I see this at all being the case at all.  What is the symbol for -1
 supposed to look like?  Do you agree that a turing machine that used A, B,
 and C as symbols could work the same as one that used -1, 0, and 1?
Well, the symbol for -1 could be -1?
To answer your latter question: no, not necessarily. I don't take the
symbols to be mere symbols, but to contain meaning (which they do), and
so it matters what symbols the machine uses, because that changes the meaning
of its computation. Often the meaning of the symbols also constrains
the possible relations (for example -1 * -1 normally needs to be 1, while A
* A could be A, B or C).

The CT thesis wants to abstract from things like meaning, but I don't really
see the great value in acting like this is necessarily the correct theoretical
way of thinking about computations. It is only valuable as one possible,
very strongly abstracted, limited and representational model of computation
with respect to emulability.


Jason Resch-2 wrote:
 
 Everything is a representation, but what is important is that the Turing
 machine preserves the relationships.  E.g., if ABBBABAA is greater than
 AAABBAAB then 01110100 is greater than 00011001, and all the other
 properties can hold, irrespective of what symbols are used.
The problem is that relationships don't make sense apart from symbols. We
can theoretically express the natural numbers using an infinite number of
unique symbols for both numbers and operations (like A or B or C or X for
10, ´ or ? or [ or ° for +), but in this case it won't be clear that we are
expressing natural numbers at all (without a lengthy explanation of what the
symbols mean).
Or if we are using binary numbers to express the natural numbers, it will
also not be very clear that we mean numbers, because often binary
expressions mean something else entirely. If we then add 1 to this number,
it will not be clear whether we actually added one or just flipped a
bit.

I admit that for numbers this is not so relevant because number relations
can be quite clearly expressed using numerous symbols (they have very few
and simple relations), but it is much more relevant for more complex
relations.
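The "added one or just flipped a bit" ambiguity is easy to exhibit. The following is my own small illustration (the helper `as_signed8` is a hypothetical name): one and the same bit-level change, three incompatible readings.

```python
# The same bit-pattern change read under three different interpretations.

before, after = 0b11111110, 0b11111111

# Reading 1: unsigned integers - we added one.
assert after == before + 1

# Reading 2: a set of 8 boolean flags - we merely switched flag 0 on.
assert after == before | 0b00000001

# Reading 3: 8-bit two's complement - we went from -2 to -1.
def as_signed8(n: int) -> int:
    return n - 256 if n >= 128 else n

assert (as_signed8(before), as_signed8(after)) == (-2, -1)

# The bit pattern alone does not settle which of these "happened";
# that is fixed only by the interpretation imposed on the symbols.
```

Nothing in the bits themselves selects one reading over the others, which is the point being made about symbols and meaning.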


Jason Resch-2 wrote:
 
 For example it cannot directly compute
  -1*-1=1. Machine A can only be used to use an encoded input value and
  encoded description of machine B, and give an output that is correct
  given
  the right decoding scheme.
 
 
  1's or 0's, X's or O's, what the symbols are don't have any bearing on
  what
  they can compute.
 
 That's just an assertion of the belief I am trying to question here.
 In reality, it *does* matter which symbols/things we use to compute. A
 computer that only uses one symbol (for example a computer that adds
 using
 marbles) would be pretty useless.
 It does matter in many different ways: Speed of computations, effciency
 of
 computation, amount of memory, efficiency of memory, ease of programming,
 size of programs, ease of interpreting the result, amount of layers of
 programming to interpret the result and to program efficiently, ease of
 introspecting into the state of a computer...

 
 Practically they might matter but not theoretically.
In the right theoretical model, it does matter. I am precisely doubting the
value of adhering to our simplistic theoretical model of computation as the
essence of what computation means.


Jason Resch-2 wrote:
 

 Why would we abstract from all that and then reduce computation to our
 one
 very abstract and imcomplete model of computation?
 If we do this we could as well abstract from the process of computation
 and
 say every string can be used to emulate any machine, because if you know
 what program it expresses, you know what it would compute (if correctly
 interpreted). There's no fundamental difference. Strings need to be
 interpreted to make sense as a program, and a turing machine without
 negative numbers needs to be interpreted to make sense as a program
 computing the result of an equation using negative numbers.

 
 I agree

Re: Simple proof that our intelligence transcends that of computers

2012-09-06 Thread benjayk
 with
 regards to
 that part of us, unless we consider that level not-me (and this
 doesn't
 make any sense to me).


 Indeed we are not our material body. We are the result of the  
 activity
 of the program supported by that body. That's comp.

 I don't have a clue why you believe this is senseless or  
 inconsistent.
 For one thing, with COMP we postulate that we can substitute a brain  
 with a
 digital emulation (yes doctor),
 
 At some level.
 
 
 
 yet the brain
 
 The material brain.
 
 
 and every possible
 substitution can't be purely digital according to your reasoning  
 (since
 matter is not digital).
 
 Change of matter is not important if it preserves the right  
 functionality at some level.
How does that relate to the issue? We have no way of making statements about
the computational functionality of matter (and thus the right level) if
matter is non-digital.
It is ill-defined.

You even say yourself that the correct substitution level is unknowable. But
not only that, it can't exist, because the notion of digital substitution is
meaningless in a non-digital universe.
Sure we can have *relatively* digital substitutions (like a physical
computer). But you can't derive anything from that, because your reasoning
assumes that the substitution is digital (in a very strict sense of allowing
precise copying etc...).


Bruno Marchal wrote:
 

 Of course we could engage in stretching the meaning of words and  
 argue that
 COMP says functionally correct substitution, meaning that it also  
 has to
 be correctly materially implementened. But in this case we can't  
 derive
 anything from this, because a correct implementation may actually  
 require
 a biological brain or even something more.
 
 The consequences will go through as long as a level of substitution  
 exist.
But there can't be one, unless your assumption is taken as a vague statement,
meaning a kind-of-digital substitution.
In this case the brain substitution might not be digital at all, except in a
very weak sense by using anything that's - practically speaking - digital
(we can already do that), so your reasoning doesn't work.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34396949.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-06 Thread benjayk


Bruno Marchal wrote:
 
 
 On 04 Sep 2012, at 21:47, benjayk wrote:
 


 Bruno Marchal wrote:

 Yes, we simulated some systems, but they couldn't perform the
 same function.

 A pump does the function of a heart.
 No. A pump just pumps blood. The heart also performs endocrine  
 functions, it
 can react dynamically to the brain, it can grow, it can heal, it can  
 become
 infected, etc...
 
 That is correct but not relevant. People do survive with a pump in
 place of the heart, though of course not perfectly, and they have some
 problems because of it. This is due to the fact that the substitution level is
 crude, for technical reasons. That will be the case with artificial
 brains or parts of brains for a very long time, but it is not relevant
 to the issue, which assumes only truth in principle.
In any case, an artificial heart is not digital, and the substituted brain
can also not be digital (according to your reasoning), which contradicts the
assumption that there can be a digital substitution.



Bruno Marchal wrote:
 


 Bruno Marchal wrote:


 And then another, much bigger step is required in order to say
 *everything*/everyone/every part can be emulated.

 Indeed. Comp makes this impossible, as the environment is the result
 of a competition between infinities of universal machines in
 arithmetic.
 See my other post to you sent  yesterday.
 Yes, OK, I understand that.
 But this also means that COMP relies on the assumption that whatever  
 is not
 emulable about our brains (or whatever else) does not matter at all  
 to what
 we (locally) are, only what is emulable matters. I find this  
 assumption
 completely unwarranted and I have yet to see evidence for it or a  
 reasoning
 behind it.
 
 It is a theory. The evidence for it is that, except for matter itself,  
 non computability has not been observed in nature.
But nature is made of lots of matter, so how can you simply dismiss that as
not relevant?



Bruno Marchal wrote:
 
  It is also hard to make sense of Darwinian evolution in a non-computable
 framework, which also
 makes it hard to understand the redundant nature of the brain, and
 the fact that we are stable under brain perturbations.
I don't see at all why this would be the case. Stability and redundancy may
exist beyond computations as well. Why not?


Bruno Marchal wrote:
 
 If you invoke something as elusive as a non computable effect in the  
 brain (beyond the 1p itself which is not computable for any machine  
 from her point of view), you have to give us an evidence that such  
 thing exists. Is it in the neocortex, in the limbic system, in the  
 cerebral stem, in the right brain?
Again, everywhere. The very fact that the brain is made of neurons is not
computable, because computation does not take structure into account (it
doesn't differentiate between different instantiations). And for all we
know, the structure of the brain *does* matter. It is heavily used in all
attempts to explain its functioning.
Quantum effects beyond individual brains (suggested by psi) cannot be
computed either: no matter what I compute in my brain, this doesn't
entangle it with other brains, since computation is classical.

A computational description of the brain is just a relative, approximate
description, nothing more. It doesn't actually reflect what the brain is or
what it does.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34397010.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Why the Church-Turing thesis?

2012-09-06 Thread benjayk


Jason Resch-2 wrote:
 
 On Tue, Aug 28, 2012 at 2:57 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 

 It seems that the Church-Turing thesis, which states that a universal
 Turing
 machine can compute everything that is intuitively computable, has near
 universal acceptance among computer scientists.

 I really wonder why this is so, given that there are simple cases where
 we
 can compute something that an arbitrary Turing machine cannot compute
 using
 a notion of computation that is not extraordinary at all (and quite
 relevant
 in reality).
 For example, given you have a universal turing machine A that uses the
 alphabet {1,0} and a universal turing machine B that uses the alphabet
 {-1,0,1}.
 Now it is quite clear that the machine A cannot directly answer any
 questions that relates to -1. For example it cannot directly compute
 -1*-1=1. Machine A can only be used to use an encoded input value and
 encoded description of machine B, and give an output that is correct
 given
 the right decoding scheme.

 
 1's or 0's, X's or O's, what the symbols are don't have any bearing on
 what
 they can compute.
 
That's just an assertion of the belief I am trying to question here.
In reality, it *does* matter which symbols/things we use to compute. A
computer that only uses one symbol (for example a computer that adds using
marbles) would be pretty useless.
It does matter in many different ways: speed of computation, efficiency of
computation, amount of memory, efficiency of memory, ease of programming,
size of programs, ease of interpreting the result, number of layers of
programming needed to interpret the result and to program efficiently, ease of
introspecting into the state of a computer...
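To make one item on this list concrete, here is a small sketch of my own (not from the thread; plain Python, with invented encoding functions): a number n occupies n cells of tape in a one-symbol (unary, "marble") representation, but only about log2(n) cells in binary, so the choice of alphabet directly drives memory cost.

```python
# Toy illustration: the cost of a representation depends on the alphabet.
# A marble-style unary encoding needs n symbols for the number n;
# positional binary needs only about log2(n) symbols.

def unary(n: int) -> str:
    """Encode n as n repeated marks (one-symbol alphabet)."""
    return "|" * n

def binary(n: int) -> str:
    """Encode n in positional binary (two-symbol alphabet)."""
    return bin(n)[2:]

n = 1_000_000
print(len(unary(n)))   # 1000000 tape cells
print(len(binary(n)))  # 20 tape cells
```

The two encodings name the same number, but every operation on the unary tape pays an exponentially larger price, which is the practical sense in which symbols matter.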

Why would we abstract from all that and then reduce computation to our one
very abstract and incomplete model of computation?
If we do this we could as well abstract from the process of computation and
say every string can be used to emulate any machine, because if you know
what program it expresses, you know what it would compute (if correctly
interpreted). There's no fundamental difference. Strings need to be
interpreted to make sense as a program, and a turing machine without
negative numbers needs to be interpreted to make sense as a program
computing the result of an equation using negative numbers.


Jason Resch-2 wrote:
 
 Consider: No physical computer today uses 1's or 0's, they use voltages,
 collections of more or fewer electrons.
OK, but in this case abstraction makes sense for computer scientists, because
programmers don't have access to that level. You are right, though, that a chip
engineer shouldn't abstract from that level if he actually wants to build a
computer.


Jason Resch-2 wrote:
 
 This doesn't mean that our computers can only directly compute what
 electrons do.
In fact they do much more.  Electrons express strictly more than just 0 and
1. So it's not a good analogy, because 0 and 1 express *less* than 0, 1 and
-1.


Jason Resch-2 wrote:
 
 But for me this already makes clear that machine A is less computationally
 powerful than machine B. Its input and output when emulating B do only
 make
 sense with respect to what the machine B does if we already know what
 machine B does, and if it is known how we chose to reflect this in the
 input
 of machine A (and the interpretation of its output). Otherwise we have no
 way of even saying whether it emulates something, or whether it is just
 doing a particular computation on the alphabet {1,0}.

 
 These are rather convincing:
 http://en.wikipedia.org/wiki/Video_game_console_emulator
 
 There is software that emulates the unique architectures of an Atari,
 Nintendo, Supernintendo, PlayStation, etc. systems.  These emulators can
 also run on any computer, whether its Intel X86, x86_64, PowerPC, etc. 
 You
 will have a convincing experience of playing an old Atari game like space
 invaders, even though the original creators of that program never intended
 it to run on a computer architecture that wouldn't be invented for another
 30 years, and the original programmers didn't have to be called in to
 re-write their program to do so.
Yes, I use them as well. They are indeed convincing. But this doesn't really
relate to the question very much.
First, our modern computers are pretty much strictly more computationally
powerful in every practical and theoretical way. It would be more of an
argument if you could simulate Windows on a Nintendo (but you can't). I am
not saying that a Turing machine using 0, 1 and -1 can't emulate a machine
using only 0 and 1.
Secondly, even these emulations are only correct as far as our playing
experience goes (well, at least if you are not nostalgic about hardware).
The actual process going on in the computer is very different, and thus it
makes sense to say that it computes something else. Its computations merely
have similar results in terms of experience, but they need vastly more
resources and compute something more (all

Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-04 Thread benjayk


John Clark-12 wrote:
 
 On Mon, Sep 3, 2012 at 9:11 AM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 
 Showing scientifically that nature is infinite isn't really possible.

 
 Maybe not. In Turing's proof he assumed that machines could not operate
 with infinite numbers, so if there is a theory of everything (and there
 might not be) and if you know it and if you can use nothing but that to
 show independently of Turing that no machine can solve the Halting Problem
 then that would prove that irrational numbers with a infinite number of
 digits play no part in the operation of the universe; on the other hand if
 this new physical theory shows you how to make such a machine then we'd
 know that nature understands and uses infinity. I admit that I used the
 word  if  a lot in all that.
 
Even the usual computer can use infinite numbers, like omega. Really going
from 1 to omega is no more special or difficult than going from 1 to 2. We
just don't do it that often because it (apparently) isn't of much use.
Transfinite numbers mostly don't express much more than finite numbers, or
at least we haven't really found the use for them.

Irrational numbers don't really have digits. We just approximately display
them using digits. Computers can also reason with irrational numbers (for
example computer algebra systems can find irrational solutions of equations
and express them precisely using terms like sqrt(n) ).
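A minimal sketch of what such exact, digit-free handling of an irrational number can look like (my own stdlib-only illustration, far simpler than a real computer algebra system; the class name is invented): numbers of the form a + b*sqrt(2) with rational a, b can be multiplied exactly, so (sqrt 2)^2 comes out as exactly 2, with no approximation anywhere.

```python
# Exact arithmetic in Q(sqrt(2)): represent a + b*sqrt(2) symbolically,
# with a and b as exact rationals -- no decimal digits ever involved.
from fractions import Fraction

class QSqrt2:
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

root2 = QSqrt2(0, 1)                 # the exact sqrt(2)
print(root2 * root2 == QSqrt2(2))    # True: squaring gives exactly 2
```

The design choice mirrors what computer algebra systems do at scale: the irrational is kept as a symbol with known algebraic rules, rather than as a stream of digits.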

With regards to nature, it seems that in some ways it does use irrational
numbers. Look at the Earth and tell me that it has nothing to do with pi. It
is true though that it doesn't use precise irrational numbers, but there
doesn't seem to exist anything totally precise in nature at all - precision
is just an abstraction.

So according to your standard, clearly nature is infinite, because we can
calculate using transfinite numbers.
But of course this is a quite absurd conclusion, mainly because what we
really mean by infinite has nothing to do with mathematically describable
infinities like big ordinal or cardinal numbers. With regards to our
intuitive notion of infiniteness, these are pretty finite, just like all
other numbers.
What we usually mean by infinite is more something like (absolutely)
boundless or incompletable or inexhaustible or unbound or absolute.
All of these have little to do with what we can measure or describe, and thus it
falls outside the realm of science or math. We can only observe that we
can't find a boundary to space, or an end of time, or an end to math, but it
is hard to say how this could be made precise or how to falsify it (I'd say
it is impossible).

My take on it is simply that the infinite is too absolute to be scrutinized.
You can't falsify something which can't be conceived to be otherwise. It's
literally impossible to imagine something like an absolute boundary
(absolute finiteness). It is a nonsense concept. Nature simply is inherently
infinite and the finite is simply an expression of the infinite, and is
itself also the infinite (like the number 1 also has infinity in it
1=1*1*1*1*1*1*1* ).

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34388985.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-04 Thread benjayk


Bruno Marchal wrote:
 
 Yes, we simulated some systems, but they couldn't perform the
 same function.
 
 A pump does the function of a heart.
No. A pump just pumps blood. The heart also performs endocrine functions, it
can react dynamically to the brain, it can grow, it can heal, it can become
infected, etc...


Bruno Marchal wrote:
 

 And then another, much bigger step is required in order to say
 *everything*/everyone/every part can be emulated.
 
 Indeed. Comp makes this impossible, as the environment is the result  
 of a competition between infinities of universal machines in arithmetic.
 See my other post to you sent  yesterday.
Yes, OK, I understand that.
But this also means that COMP relies on the assumption that whatever is not
emulable about our brains (or whatever else) does not matter at all to what
we (locally) are, only what is emulable matters. I find this assumption
completely unwarranted and I have yet to see evidence for it or a reasoning
behind it.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34389041.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-09-04 Thread benjayk
 to suggest it. That's the
contradiction.


Bruno Marchal wrote:
 
 If there was something outside the universe
 to interpret the simulation, then this would be the level on which  
 we can't
 be substituted (and if this would be substituted, then the level  
 used to
 interpret this substitution couldn't be substituted, etc).
 In any case, there is always a non-computational level, at which no  
 digital
 substitution is possible - and we would be wrong to say YES with  
 regards to
 that part of us, unless we consider that level not-me (and this  
 doesn't
 make any sense to me).
 
 
 Indeed we are not our material body. We are the result of the activity  
 of the program supported by that body. That's comp.
 
 I don't have a clue why you believe this is senseless or inconsistent.
For one thing, with COMP we postulate that we can substitute a brain with a
digital emulation (yes doctor), yet the brain and every possible
substitution can't be purely digital according to your reasoning (since
matter is not digital).
So if we do a substitution, it could only be a semi-digital or a non-digital
substitution, but then your whole reasoning falls apart (the steps assume
you are solely digital).

COMP is simply contradictory, unless we take for granted your result (we are
already only arithmetical, and so no substitution does really take place -
yes doctor is just a metaphor for I am digital), but then it is
tautological and your reasoning is merely an explanation of what it means if
we are digital.

Of course we could engage in stretching the meaning of words and argue that
COMP says functionally correct substitution, meaning that it also has to
be correctly materially implemented. But in this case we can't derive
anything from this, because a correct implementation may actually require
a biological brain or even something more.

Actually I don't think you have any problems to understand that on an
intellectual level. More probably you just don't want to lose your proof,
because it seems to be very important to you (you have defended it in thousands of
posts). But honestly, this is just ego and has nothing to do with a genuine
search for truth.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34389259.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-03 Thread benjayk


Bruno Marchal wrote:
 

 If you disagree, please tell me why.
 
 I don't disagree. I just point on the fact that you don't give any  
 justification of your belief. If you are correct, there must be  
 something in cells and brains that is not Turing emulable, and this is  
 speculative, as nobody has found anything not Turing emulable in nature.
 

You say this often, Bruno, yet I have never seen an emulation of any living
system that functions the same as the original.

The default position is that it is not emulable. We have no a priori reason
to assume we can substitute one thing with another thing of an entirely
different class. We have no more reason to assume that we can substitute a
brain with an emulation of a brain than we have that we can substitute a
building with a drawing of a building - even if it is so accurate that the
illusion of it being a building is perfect at first glance. You still can't
live in a drawing.

Showing scientifically that nature is infinite isn't really possible.
Measurements just can't yield infinity.
It is like the natural numbers. You can't see that there are infinitely many
of them by using examples. You just have to realize it is inherent to
natural numbers that there's always another one (e.g. the successor).
In the same way, nature can only be seen to be infinite by realizing it is
an inherent property of it. There simply is no such thing as complete
finiteness. No thing in nature has any absolute boundary separating it from
space, and there is no end to space - the notion of an end of space itself
seems to be empty.
We approach the limits of science here, as we leave the realm of the
quantifiable and objectifiable, so frankly your statement just seems like
scientism to me.
From a mystical perspective (which can provide a useful foundation for
science), it can be quite self-evident that everything that exists is
infinite (even the finite is just a form of the infinite).

A more practical question would be: how, and in which form, does infinity
express itself in nature? Of course this is an unlimited question, but I see
some aspects of nature that can't be framed in terms of something finite.
First, uncertainty / indeterminateness. It might be that nature is inherently
indeterminate (principles like Heisenberg's uncertainty relation suggest it
from a scientific perspective) and thus can't be captured by any particular
description. So it is not emulable, because emulability rests on the premise
that what is emulated can be precisely captured (otherwise we have no way of
telling the computer what to do).
Secondly, entanglement. If all of existence is entangled and it is infinite in
scope, then everything that exists has an aspect of infiniteness (because you
can't make sense of it apart from the rest of existence). Even tiny changes
in very small systems might be non-locally magnified to an arbitrary degree
in other things/realms. This means that entanglement can't be truly
simulated, because every simulation would be incomplete (because the state
of the system depends on infinitely many other things, which we can't ALL
simulate) and thus critically wrong at the right level.
It might be possible to simulate the behaviour of the system outwardly, but
this would be only superficial since the system would be (relatively) cut
off from the transcendental realm that connects it to the rest of existence.

For example, if someone's brain is substituted he may behave similarly to
the original (though I think this would be quite superficial), but he won't
be connected to the universal field of experiencing in the same way -
because at some level his emulation is only approximate, which may not matter
much on earth, but will matter in heaven or the beyond (which is what
counts, ultimately).

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34383078.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-09-03 Thread benjayk


Bruno Marchal wrote:
 
 
 On 25 Aug 2012, at 15:12, benjayk wrote:
 


 Bruno Marchal wrote:


 On 24 Aug 2012, at 12:04, benjayk wrote:

 But this avoides my point that we can't imagine that levels, context
 and
 ambiguity don't exist, and this is why computational emulation does
 not mean
 that the emulation can substitute the original.

 But here you do a confusion level as I think Jason tries pointing on.

 A similar one to the one made by Searle in the Chinese Room.

 As emulator (computing machine) Robinson Arithmetic can simulate
 exactly Peano Arithmetic, even as a prover. So for example Robinson
 arithmetic can prove that Peano arithmetic proves the consistency of
 Robinson Arithmetic.
 But you cannot conclude from that that Robinson Arithmetic can prove
 its own consistency. That would contradict Gödel II. When PA uses the
 induction axiom, RA might just say huh, and apply it for the sake  
 of
 the emulation without any inner conviction.
 I agree, so I don't see how I confused the levels. It seems to me
 you have
 just stated that Robinson Arithmetic indeed cannot substitute Peano Arithmetic,
 because
 RA's emulation of PA only makes sense with respect to PA (in cases
 where PA
 does a proof that RA can't do).
 
 Right. It makes only first person sense to PA. But then RA has  
 succeeded in making PA alive, and PA could a posteriori realize that  
 the RA level was enough.
Sorry, but it can't. It can't even abstract itself out to see that the RA
level would be enough.
I see you doing this all the time; you take some low level that can be made
sense of by something transcendent of it and then claim that the low level
is enough.

This is precisely the claim that I don't understand at all. You say that we
only need natural numbers and + and *, and that the rest emerges from that
as the 1-p viewpoint of the numbers. Unfortunately the 1-p viewpoint itself
can't be found in the numbers; it can only be found in what transcends the
numbers, or in what the numbers really are / refer to (which is also completely
beyond our conception of numbers).
That's the problem with Gödel as well. His unprovable statement about
numbers is really a meta-statement about what numbers express that doesn't
even make sense if we only consider the definition of numbers. He really
just shows that we can reason about numbers and with numbers in ways that
can't be captured by numbers (but in this case what we do with them has
little to do with the numbers themselves).
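The claim that the meaning lives in the coding convention rather than in the number can be illustrated with a toy Gödel-style numbering (my own sketch, far simpler than Gödel's actual prime-power scheme; both symbol tables are invented for the example):

```python
# One and the same number decodes to an arithmetic statement under one
# symbol table and to nonsense under another -- the meaning lives in
# the (arbitrary) coding convention, not in the number itself.
TABLE_A = {"0": 1, "S": 2, "=": 3, "+": 4}
TABLE_B = {"0": 1, "S": 2, "!": 3, "^": 4}  # same codes, other symbols

def encode(formula: str, table: dict) -> int:
    return int("".join(str(table[s]) for s in formula))

def decode(number: int, table: dict) -> str:
    inverse = {code: symbol for symbol, code in table.items()}
    return "".join(inverse[int(d)] for d in str(number))

n = encode("S0+S0=SS0", TABLE_A)    # "1+1=2" in successor notation
print(n)                            # 214213221
print(decode(n, TABLE_A))           # 'S0+S0=SS0' -- a true statement
print(decode(n, TABLE_B))           # 'S0^S0!SS0' -- gibberish
```

This is only the uncontroversial half of the story: the number is inert until a decoding is fixed. Whether fixing that decoding once and for all inside arithmetic itself settles the dispute is exactly what the thread is arguing about.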

I agree that computations reflect many things about us (infinitely many
things, even), but we still transcend them infinitely. Strangely, you agree
with this for the 1-p viewpoint. But given that's what you *actually* live, I don't
see how it makes sense to then assert that there is a meaningful 3-p point
of view where this isn't true. That point of view is really just an
abstraction occurring in the 1-p point of view.

Bruno Marchal wrote:
 
 Like I converse with Einstein's brain's book (à la Hofstadter), just
 by manipulating the page of the book. I don't become Einstein through  
 my making of that process, but I can have a genuine conversation with  
 Einstein through it. He will know that he has survived, or that he  
 survives through that process.
On some level, I agree. But not far from the level that he survives in his
quotes and writings.


Bruno Marchal wrote:
 
 That is, it *needs* PA to make sense, and so
 we can't ultimately substitute one with the other (just in some  
 relative
 way, if we are using the result in the right way).
 
 Yes, because that would be like substituting one person for another,
 on the pretext that they both play the same role. But comp substitutes the
 lower-level process, not the high-level one, which can indeed be quite
 different.
Which assumes that the world is divided in low-level processes and
high-level processes.


Bruno Marchal wrote:
 
 It is like the word apple cannot really substitute a picture of an  
 apple
 in general (still less an actual apple), even though in many context  
 we can
 indeed use the word apple instead of using a picture of an apple  
 because
 we don't want to by shown how it looks, but just know that we talk  
 about
 apples - but we still need an actual apple or at least a picture to  
 make
 sense of it.
 
 Here you make an invalid jump, I think. If I play chess on a computer,  
 and make a backup of it, and then continue on a totally different  
 computer, you can see that I will be able to continue the same game  
 with the same chess program, despite the computer is totally  
 different. I have just to re-implement it correctly. Same with comp.  
 Once we bet on the correct level, functionalism applies to that level  
 and below, but not above (unless of course if I am willing to have  
 some change in my consciousness, like amnesia, etc.).
 
Your chess example only works because chess is already played on a computer.
Yes, you can often substitute one computer for another (though even this
often comes with problems

Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-09-03 Thread benjayk


Bruno Marchal wrote:
 
 
 On 03 Sep 2012, at 15:11, benjayk wrote:
 


 Bruno Marchal wrote:


 If you disagree, please tell me why.

 I don't disagree. I just point on the fact that you don't give any
 justification of your belief. If you are correct, there must be
 something in cells and brains that is not Turing emulable, and this  
 is
 speculative, as nobody has found anything not Turing emulable in  
 nature.


 You say this often, Bruno, yet I have never seen an emulation of any  
 living
 system that functions the same as the original.
 
 This is not a valid argument. I have never seen a man walking on Mars,  
 but this does not make it impossible.
No, but we have no big gaps of belief to bridge if we consider a man walking
on Mars. It's not much different from the moon.
Yet emulating a natural system is something in which we haven't even remotely
succeeded. Yes, we have simulated some systems, but they couldn't perform the
same function.
We also substituted some parts with non-living matter, but not with a mere
computer.

And then another, much bigger step is required in order to say
*everything*/everyone/every part can be emulated. It is like saying that we
can walk on all things, because we can walk on the moon. We most certainly
can't walk on the sun, though.


Bruno Marchal wrote:
 
 With comp we cannot emulate a rock, so we certainly can't emulate a
 living creature, as it is made of the apparent matter, which needs  
 the complete UD*.
 
 But with comp all universal machine can emulate any universal machine,  
 so if I am a program, at some levcel of description, the activity of  
 that program, responsible for my consciousness here and now, can be  
 emulated exactly.
But why would you be a program? Why would you be more finite than a rock? I
can't follow your logic behind this.
Yes, assuming COMP your reasoning makes some sense, but then we are
confronted with the absurd situation of our local me's being computational,
yet everything we can actually observe being non-computational.



Bruno Marchal wrote:
 

 The default position is that it is not emulable.
 
 On the contrary. Having no evidence that there is something non Turing  
 emulable playing a role in the working mind,
We do have evidence. We can't even make sense of the notion of emulating
what is inherently indeterminate (like all matter, and so the brain as
well). How do you emulate something which has no determinate state with machines
using (practically) determinate states?
We can emulate quantum computers, but they still work based on
definite/discrete states (though these allow for superpositions, the
superpositions are collapsed at the end of the computation).

Even according to COMP, it seems that matter is non-emulable. That this
doesn't play a role in the working of the brain is just an assumption (I
hope we agree there is a deep relation between local mind and brain). When
we actually look into the brain, we find nothing telling us that whatever
non-emulable processes are going on don't matter.


Bruno Marchal wrote:
 
  beyond its material constitution which by comp is only Turing recoverable
 in the limit  
 (and thus non emulable)
But that is the point. Why would its material constitution not matter? For
all we know it matters very much, as the behaviour of the matter in the
brain (and outside of it) determines its function.


Bruno Marchal wrote:
 
  to bet that we are not machine is like  
 speculating on something quite bizarre, just to segregationate  
 negatively a class of entities.
I don't know what you are arguing against. I have never negatively
segregated any entity. It is just that computers can't do everything
humans can, just as adults can't do everything children can (and vice versa),
plants can't do everything animals can (and vice versa), and life can't do
what lifeless matter does (and vice versa).
I have never postulated some moral hierarchy in there (though computers
don't seem to mind always doing what they are told to do, which we might
consider slavery, but that is just human bias).

Also, I don't speculate on us not being machines. We have no a priori reason
to assume we are machines in the first place, any more than we have a reason
to assume we are plants.


Bruno Marchal wrote:
 
 This is almost akin to saying that the Indians have no souls, as if  
 they would, they would know about Jesus, or to say that the Darwinian  
 theory is rather weak, as it fails to explain how God made the world  
 in six day.
I am not saying computers have no souls. Indeed, computers are just as much
awareness as everything else. There is ONLY soul. So I am not excluding or
segregating anyone or anything.
Computers are just intelligent in a different kind of way, just as Indians
are different from Germans in some ways (though obviously computers are far
more different from us).


Bruno Marchal wrote:
 
 We have no a priori reason
 to assume we can substitute one thing with another thing of an  
 entirely
 different class

Re: Hating the rich

2012-09-03 Thread benjayk

I couldn't agree more, Stephen. Great post.

The most common forms of left and right really are different forms of the
same phenomenon. Statism, authority (whether of the state or of God or of
science or of the market), thinking in terms of enemies and supporters. The
difference is merely in relatively superficial political or religious
issues. Oppress the rich or oppress the poor? Believe in God or in the Great
Law of the universe? Believe in free markets or in a social
state? Believe in forcing people to be social or in forcing people
to adhere to societal norms?

(Obviously there are also people who consider themselves left or right to
whom some or all of this does not apply. I am just referring to the
majority.)

benjayk
-- 
View this message in context: 
http://old.nabble.com/Hating-the-rich-tp34372531p34384484.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Simple proof that our intelligence transcends that of computers

2012-08-25 Thread benjayk


Bruno Marchal wrote:
 
 
 On 24 Aug 2012, at 12:04, benjayk wrote:
 
 But this avoides my point that we can't imagine that levels, context  
 and
 ambiguity don't exist, and this is why computational emulation does  
 not mean
 that the emulation can substitute the original.
 
 But here you do a confusion level as I think Jason tries pointing on.
 
 A similar one to the one made by Searle in the Chinese Room.
 
 As emulator (computing machine) Robinson Arithmetic can simulate  
 exactly Peano Arithmetic, even as a prover. So for example Robinson  
 arithmetic can prove that Peano arithmetic proves the consistency of  
 Robinson Arithmetic.
 But you cannot conclude from that that Robinson Arithmetic can prove  
 its own consistency. That would contradict Gödel II. When PA uses the  
 induction axiom, RA might just say huh, and apply it for the sake of  
 the emulation without any inner conviction.
I agree, so I don't see how I confused the levels. It seems to me you have
just stated that Robinson Arithmetic indeed cannot substitute Peano
Arithmetic, because RA's emulation of PA makes sense only with respect to PA
(in cases where PA does a proof that RA can't do). That is, it *needs* PA to
make sense, and so we can't ultimately substitute one for the other (only in
some relative way, if we are using the result in the right way).
It is like the word apple: it cannot really substitute for a picture of an
apple in general (still less an actual apple), even though in many contexts
we can indeed use the word apple instead of a picture of an apple, because
we don't want to be shown how it looks but just to know that we are talking
about apples - yet we still need an actual apple, or at least a picture, to
make sense of the word.
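For reference, Bruno's RA/PA example can be written compactly in standard proof-theoretic notation (my gloss, not from the post): RA can verify, via the provability predicate, that PA proves RA's consistency, while Gödel's second incompleteness theorem forbids RA from proving that consistency itself.

```latex
\mathrm{RA} \;\vdash\; \mathrm{Prov}_{\mathrm{PA}}\!\big(\ulcorner \mathrm{Con}(\mathrm{RA}) \urcorner\big)
\qquad\text{but}\qquad
\mathrm{RA} \;\nvdash\; \mathrm{Con}(\mathrm{RA}).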


Bruno Marchal wrote:
 
 With Church thesis computing is an absolute notion, and all universal  
 machine computes the same functions, and can compute them in the same  
 manner as all other machines so that the notion of emulation (of  
 processes) is also absolute.
OK, but the Church-Turing thesis is not proven, and I don't necessarily
consider it true.
I don't consider it false either; I believe it is just a question of what
level we think about computation on.

Also, computation is absolute only relative to other computations, not with
respect to other levels, and not even with respect to the instantiation of
computations through other computations. Here the instantiation and
description of the computation matter - I+II=III and 1+2=3
describe the same computation, yet they are different for practical purposes
(because of a different instantiation), and they are not even the same
computation if we take a sufficiently detailed computation to describe what
is actually going on (so that the computations take instantiation into
account in their emulation).
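A small sketch of that point (illustrative only; the helper functions are my own, hypothetical names): the same abstract addition, instantiated in Roman and in decimal notation, requires two quite different concrete symbol manipulations.

```python
def add_arabic(a: str, b: str) -> str:
    # Decimal notation: the "work" is positional digit arithmetic.
    return str(int(a) + int(b))

ROMAN = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    out = []
    for value, glyph in ROMAN:
        while n >= value:
            out.append(glyph)
            n -= value
    return "".join(out)

def add_roman(a: str, b: str) -> str:
    # Roman notation: a different concrete procedure for the "same" sum.
    values = {"I": 1, "IV": 4, "V": 5, "IX": 9, "X": 10}
    def parse(s):
        total, i = 0, 0
        while i < len(s):
            if s[i:i + 2] in values:
                total += values[s[i:i + 2]]
                i += 2
            else:
                total += values[s[i]]
                i += 1
        return total
    return to_roman(parse(a) + parse(b))

# The abstract result is the same; the instantiated computations differ.
add_arabic("1", "2")   # decimal instantiation
add_roman("I", "II")   # Roman instantiation
```

Both calls compute 1 + 2, but the symbol manipulation actually carried out is different in each case, which is all the point needs.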


Bruno Marchal wrote:
 
 It is not a big deal, it just means that my ability to emulate Einstein  
 (cf Hofstadter) does not make me into Einstein. It only makes me able  
 to converse with Einstein.
Apart from the question of whether brains can be emulated at all (due to
possible entanglement with their own emulation - I think I will write a post
about this later), that is still not necessarily the case.
It is only the case if you know how to make sense of the emulation, and I
don't see that we can assume this takes less than being Einstein.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34347848.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-25 Thread benjayk


Stathis Papaioannou-2 wrote:
 
 On Fri, Aug 24, 2012 at 11:36 PM, benjayk
 benjamin.jaku...@googlemail.com wrote:
 
 The evidence that the universe follows fixed laws is all of science.
 
 That is plainly wrong. It is like saying what humans do is determined
 through a (quite accurate) description of what humans do.

 It is a confusion of levels. The universe can't follow laws, because laws
 are just descriptions of what the universe does.
 
 That the universe follows laws means that the universe shows certain
 patterns of behaviour that, fortuitously, clever humans have been able
 to observe and codify.
 
OK, so it is a metaphor, since the laws themselves are just what we codified
about the behaviour of the universe (so the universe can't follow the laws,
because the laws follow the universe).


Stathis Papaioannou-2 wrote:
 
 You said you see no evidence that the universe follows
 laws but the evidence is, as stated, all of science.
Science just requires that the universe's behaviour is *approximated* by
laws.


Stathis Papaioannou-2 wrote:
 
 Science does show us that many aspects of the universe can be accurately
 described through laws. But this is not very surprising since the laws and
 the language they evolved out of emerge from the order of the universe
 and
 so they will reflect it.

 Also, our laws are known to not be accurate (they simply break down at
 some
 points), so necessarily the universe does not behave as our laws suggest
 it
 does. And we have no reason to assume it behaves as any other law suggest
 it
 does. Why would we believe it, other than taking it as a dogma?
 
 The laws are constantly being revised, which is what science is about.
 If there were no laws there would be no point to science.
Right, but this doesn't mean that the laws have to be accurate or even can
be accurate. They just need to be accurate enough to be useful to us.
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34347886.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-25 Thread benjayk
 ask the question, because the program would have to
be more general.

 
Jason Resch-2 wrote:
 
  You're crossing contexts and levels.  Certainly, a heart inside a
 computer
  simulation of some reality isn't going to do you any good if you exist
 on
  a
  different level, in a different reality.
 So you are actually agreeing with me? - Since this is exactly the point I
 am
 trying to make.
 Digital models exist on a different level than what they represent, and
 it
 doesn't matter how good/accurate they are because that doesn't bridge the
 gap between model and reality.

 
 But what level something is implemented in does not restrict the
 intelligence of a process.
This may be our main disagreement.
It boils down to the question of whether we assume intelligence = (Turing)
computation.
We could embrace this definition, but I would rather not, since it doesn't
fit my own conception of intelligence (which also encompasses
instantiation and interpretation).

But for the sake of discussion I can embrace this definition and in this
case I agree with you. Then we might say that computers can become more
intelligent than humans (and maybe already are), because they manifest
computations more efficiently than humans.

Jason Resch-2 wrote:
 
 Jason Resch-2 wrote:
 
  And this seems to be empirically true because there is pretty much no
  other
  way to explain psi.
 
 
  What do you mean by psi?
 Telepathy, for example.


 Are you aware of any conclusive studies of psi?
That depends on what you interpret as conclusive. For hard-headed
skeptics no study will count as conclusive.

There are plenty of studies that show results that are *far* beyond chance,
though.
Also the so called anecdotal evidence is extremely strong.


Jason Resch-2 wrote:
 

 Jason Resch-2 wrote:
 
 
 
  Jason Resch-2 wrote:
  
   I am not saying that nature is infinite in the way we picture it.
 It
  may
   not
   fit into these categories at all.
  
   Quantum mechanics includes true subjective randomness already, so
 by
  your
   own standards nothing that physically exists can be emulated.
  
  
   The UD also contains subjective randomness, which is at the heart of
   Bruno's argument.
  No, it doesn't even contain a subject.
 
  Bruno assumes COMP, which I don't buy at all.
 
 
  Okay.  What is your theory of mind?
 I don't have any. Mind cannot be captured or even described at the
 fundamental level at all.

 
 That doesn't seem like a very useful theory.  Does this theory tell
 you whether or not you should take an artificial brain if it was the only
 way to save your life?
Of course it is not a useful theory, since it is not a theory in the first
place.
To answer your question: No. There is no theoretical way of deciding that.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34348098.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-24 Thread benjayk


Jason Resch-2 wrote:
 
 On Thu, Aug 23, 2012 at 11:11 AM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
   So what is your definition of computer, and what is your
   evidence/reasoning
   that you yourself are not contained in that definition?
  
   There is no perfect definition of computer. I take computer to mean
   the
   usual physical computer,
  
   Why not use the notion of a Turing universal machine, which has a
   rather well defined and widely understood definition?
  Because it is an abstract model, not an actual computer.
 
 
  It doesn't have to be abstract.  It could be any physical machine that
 has
  the property of being Turing universal.  It could be your cell phone,
 for
  example.
 
 OK, then no computers exist, because no computer can actually emulate all
 programs that run on a universal Turing machine, due to lack of memory.

 
 If you believe the Mandelbrot set, or the infinite digits of Pi exist,
 then
 so to do Turing machines with inexhaustible memory.
They exist as useful abstractions, but not as physical objects (which is
what we practically deal with when we talk about computers).


Jason Resch-2 wrote:
 

 But let's say we mean except for memory and unlimited accuracy.
 This would mean that we are computers, but not that we are ONLY
 computers.


 Is this like saying our brains are atoms, but we are more than atoms?  I
 can agree with that, our minds transcend the simple description of
 interacting particles.
 
 But if atoms can serve as a platform for minds and consciousness, is there
 a reason that computers cannot?
 
Not absolutely. Indeed, I believe mind is all there is, so necessarily
computers are an aspect of mind and are even conscious in a sense already.


Jason Resch-2 wrote:
 
 Short of adopting some kind of dualism (such as
 http://en.wikipedia.org/wiki/Biological_naturalism , or the idea that God
 has to put a soul into a computer to make it alive/conscious), I don't see
 how atoms can serve as this platform but computers could not, since
 computers seem capable of emulating everything atoms do.
OK. We have a problem of level here. On some level, computers can emulate
everything atoms can do computationally, I'll admit that.  But that's simply
the wrong level, since it is not about what something can do in the sense of
transforming input/output.
It is about what something IS (or is like).

A boulder that falls on your foot may not be computationally more powerful
than a computer, but it can do something important that a computer running a
simulation of a boulder dropping on your foot can't - to make your foot
hurt.
Even if you assume we could use a boulder in a simulation, with ourselves
plugged into the simulation, to create pain (I agree), it still doesn't do
the same thing, namely create the pain when dropping on your physical foot.
See, the accuracy of the simulation does not help in bridging the levels.


Jason Resch-2 wrote:
 

 Jason Resch-2 wrote:
 
  Jason Resch-2 wrote:
  
   since this is all that is required for my argument.
  
   I (if I take myself to be human) can't be contained in that
 definition
   because a human is not a computer according to the everyday
   definition.
  
   A human may be something a computer can perfectly emulate, therefore
 a
   human could exist with the definition of a computer.  Computers are
   very powerful and flexible in what they can do.
  That is an assumption that I don't buy into at all.
 
 
  Have you ever done any computer programming?  If you have, you might
  realize that the possibilities for programs goes beyond your
 imagination.
 Yes, I studied computer science for one semester, so I have programmed a
 fair amount.
 Again, you are misinterpreting me. Of course programs go beyond our
 imagination. Can you imagine the Mandelbrot set without computing it on
 a
 computer? It is very hard.
 I never said that they can't.

 I just said that they lack some capability that we have. For example they
 can't fundamentally decide which programs to use and which not and which
 axioms to use (they can do this relatively, though). There is no
 computational way of determining that.

 
 There are experimental ways, which is how we determined which axioms to
 use.
Nope, since for the computer no experimental way exists if we haven't
determined a program first.


Jason Resch-2 wrote:
 

 For example how can you computationally determine whether to use the
 axiom
 true=not(false) or use the axiom true=not(true)?

 
 Some of them are more useful, or lead to theories of a richer complexity.
Yes, but how do you determine that with a computer?
If you program it to embrace bad axioms that lead to bad theories without
much use, it will still carry out your instructions. So the computer
by itself will not notice whether it does something useful (except if you
programmed it to, in which case you get the same problem with the creation
of the program).


Jason Resch-2 wrote:
 
  If the computer

Re: Simple proof that our intelligence transcends that of computers

2012-08-24 Thread benjayk


Jason Resch-2 wrote:
 
 On Thu, Aug 23, 2012 at 1:18 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
  Taking the universal dovetailer, it could really mean everything (or
  nothing), just like the sentence You can interpret whatever you want
  into
  this sentence... or like the stuff that monkeys type on typewriters.
 
 
  A sentence (any string of information) can be interpreted in any
 possible
  way, but a computation defines/creates its own meaning.  If you see a
  particular step in an algorithm adds two numbers, it can pretty clearly
 be
  interpreted as addition, for example.
 A computation can't define its own meaning, since it only manipulates
 symbols (that is the definition of a computer),
 
 
 I think it is a rather poor definition of a computer.  Some have tried to
 define the entire field of mathematics as nothing more than a game of
 symbol manipulation (see
 http://en.wikipedia.org/wiki/Formalism_(mathematics) ).  But if
 mathematics
 can be viewed as nothing but symbol manipulation, and everything can be
 described in terms of mathematics, then what is not symbol manipulation?
 
That which it is describing. Very simple. :)



Jason Resch-2 wrote:
 
 and symbols need a meaning
 outside of them to make sense.

 
 The meaning of a symbol derives from the context of the machine which
 processes it.
I agree. The context in which the machine operates matters. Yet our
definitions of computer don't include an external context.


Jason Resch-2 wrote:
 

 Jason Resch-2 wrote:
 
 
  Jason Resch-2 wrote:
  
The UD contains an entity who believes it writes a single program.
  No! The UD doesn't contain entities at all. It is just a computation.
 You
  can only interpret entities into it.
 
 
  Why do I have to?  As Bruno often asks, does anyone have to watch your
  brain through an MRI and interpret what it is doing for you to be
  conscious?
 Because there ARE no entities in the UD per its definition. It only
 contains
 symbols that are manipulated in a particular way.
 
 
 You forgot the processes, which are interpreting those symbols.
No, that's simply not how we defined the UD. The UD is defined by
manipulation of symbols, not interpretation of symbols (how could we even
formalize that?).


Jason Resch-2 wrote:
 
 The definitions of the UD
 or a universal turing machine or of computers in general don't contain a
 reference to entities.


 The definition of this universe doesn't contain a reference to human
 beings
 either.
Right, that's why you can't claim that all universes contain human beings.


Jason Resch-2 wrote:
 
 So you can only add that to its working in your own imagination.


 I think I would still be able to experience meaning even if no one was
 looking at me.
Yes, because you are what is looking - there is no one looking at you in the
first place, because anyone looking occurs in you.


Jason Resch-2 wrote:
 
 Jason Resch-2 wrote:
 
 
  Jason Resch-2 wrote:
  
The UD itself
   isn't intelligent, but it contains intelligences.
  I am not even saying that the UD isn't intelligent. I am just saying
 that
  humans are intelligent in a way that the UD is not (and actually the
  opposite is true as well).
 
 
  Okay, could you clarify in what ways we are more intelligent?
 
  For example, could you show a problem that can a human solve that a
  computer with unlimited memory and time could not?
 Say you have a universal turing machine with the alphabet {0, 1}
 The problem is: Change one of the symbols of this turing machine to 2.

 
 Your example is defining a problem to not be solvable by a specific
 entity,
 not turing machines in general. 
But the claim of computer scientists is that all Turing machines are
interchangeable, because they can emulate each other perfectly. Clearly
that's not true, because perfect computational emulation doesn't help to
solve the problem in question - and that is precisely my point!
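A sketch of the level point (my illustration, not from the thread): a {0,1} machine can handle a three-symbol tape only relative to an encoding fixed at the meta-level; the symbol 2 itself never appears on its tape.

```python
# A {0,1} machine can *represent* a 3-symbol tape only via an encoding
# chosen from outside; the symbol "2" never appears on its own tape.
ENCODE = {"0": "00", "1": "01", "2": "10"}  # convention fixed at the meta-level
DECODE = {v: k for k, v in ENCODE.items()}

def encode_tape(tape: str) -> str:
    # Map each 3-alphabet symbol to a 2-bit code for the binary machine.
    return "".join(ENCODE[s] for s in tape)

def decode_tape(bits: str) -> str:
    # Recover the 3-alphabet tape -- but only relative to the DECODE table.
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

binary_tape = encode_tape("0212")  # what the {0,1} machine actually holds
original = decode_tape(binary_tape)
```

The binary machine only ever manipulates strings like the one in `binary_tape`; that those bits *mean* a tape containing a 2 exists solely in the DECODE table, outside the machine.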



Jason Resch-2 wrote:
 
 Given that it is a universal turing machine, it is supposed to be able to
 solve that problem. Yet because it doesn't have access to the right
 level,
 it cannot do it.
 
 It is an example of direct self-manipulation, which turing machines are
 not
 capable of (with regards to their alphabet in this case).

 
 Neither can humans change fundamental properties of our physical
 incarnation.  You can't decide to turn one of your neurons into a magnetic
 monopole, for instance, but this is not the kind of problem I was
 referring
 to.
I don't claim that humans are all-powerful. I am just saying that they can
do things computers can't.


Jason Resch-2 wrote:
 
 To avoid issues of level confusion, it is better to think of problems with
 informational solutions, since information can readily cross levels.  That
 is, some question is asked and some answer is provided.  Can you think of
 any question that is only solvable by human brains, but not solvable by
 computers?
OK, if you want to ignore levels, context

Re: Simple proof that our intelligence transcends that of computers

2012-08-24 Thread benjayk


Stathis Papaioannou-2 wrote:
 
 On Thu, Aug 23, 2012 at 3:59 AM, benjayk
 benjamin.jaku...@googlemail.com wrote:
 
 I am not sure that this is true. First, no one yet showed that nature can
 be
 described through a set of fixed laws. Judging from our experience, it
 seems
 all laws are necessarily incomplete.
 It is just dogma of some materialists that the universe precisely follows
 laws. I don't see why that would be the case at all and I see no evidence
 for it either.
 
 The evidence that the universe follows fixed laws is all of science.
That is plainly wrong. It is like saying what humans do is determined
through a (quite accurate) description of what humans do.

It is a confusion of levels. The universe can't follow laws, because laws
are just descriptions of what the universe does.

Science does show us that many aspects of the universe can be accurately
described through laws. But this is not very surprising, since the laws (and
the language they evolved out of) emerge from the order of the universe, and
so they will reflect it.

Also, our laws are known not to be accurate (they simply break down at some
points), so necessarily the universe does not behave as our laws suggest it
does. And we have no reason to assume it behaves as any other law suggests it
does. Why would we believe it, other than taking it as a dogma?


Stathis Papaioannou-2 wrote:
 
 Secondly, even the laws we have now don't really describe that the atoms
 in
 our brain are rigidly controlled. Rather, quantum mechanical laws just
 give
 us a probability distribution, they don't tell us what actually will
 happen.
 In this sense current physics has already taken the step beyond precise
 laws.
 Some scientists say that the probability distribution is an actual
 precise,
 deterministic entity, but really this is just pure speculation and we
 have
 no evidence for that.
 
 Probabilities in quantum mechanics can be calculated with great
 precision. For example, radioactive decay is a truly random process,
 but we can calculate to an arbitrary level of certainty how much of an
 isotope will decay. In fact, it is much easier to calculate this than
 to make predictions about deterministic but chaotic phenomena such as
 the weather.
 
Sure, but that is not an argument against my point. Precise probabilities
are just a way of making the imprecise (relatively) precise. They still do
not allow us to make precise predictions - they say nothing about what will
happen, just about what could happen.
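Stathis's decay example can be sketched like this (my illustration; carbon-14's half-life is used for concreteness): the statistical law yields a precise expected fraction, while the individual outcomes remain a matter of sampling.

```python
import math
import random

HALF_LIFE = 5730.0  # years; carbon-14, for illustration

def surviving_fraction(t: float) -> float:
    # The *statistical* law: precise expected fraction remaining after t years.
    return 0.5 ** (t / HALF_LIFE)

def simulate(n_atoms: int, t: float, seed: int = 1) -> int:
    # Individual outcomes stay random: each atom survives with that probability.
    rng = random.Random(seed)
    p = surviving_fraction(t)
    return sum(rng.random() < p for _ in range(n_atoms))

expected = surviving_fraction(5730.0)   # exactly 0.5, to arbitrary precision
remaining = simulate(100000, 5730.0)    # close to 50000, but not exactly
```

The *fraction* is predicted precisely; *which* atoms decay, and the exact count, is not, which is the distinction between what will happen and what could happen.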

Also, statistical laws do not tell us anything about the correlation between
(apparently) separate things, so they inherently leave out some
information that could very well be there (and most likely is there, if we
look at the data).
They only describe probabilities of separate events, not correlations between
the outcomes of separate events.

Say you have 1000 six-sided dice that behave statistically totally
random when analyzed separately.

Nevertheless they could be strongly correlated, and this correlation would be
very hard to find and describe using scientific methods - we wouldn't notice
it at all if we just observed the dice separately, or just a few dice (as we
would usually do using scientific methods).

Or you have two 1000-sided dice that behave statistically totally
random when analyzed separately, but whenever one shows 1, the other ALWAYS
shows 1 as well. Using 1000 tries you will most likely notice nothing at all,
and with many more tries you will still probably notice nothing, because
there will most likely be other instances as well where the two numbers are
the same. So it would be very difficult to detect the correlation, even
though it is quite important (given that you could accurately predict what
the other 1000-sided die shows in 1/1000 of the cases).
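Such a hidden correlation is easy to set up and, as described, invisible to per-die statistics. A simulation sketch of my own, matching the 1000-sided example:

```python
import random

rng = random.Random(42)
SIDES = 1000

def roll_pair():
    a = rng.randint(1, SIDES)
    # Hidden rule from the example: whenever the first die shows 1, so does
    # the second; otherwise the second is an independent roll over the
    # remaining faces. Each die alone is still uniform: P(b = 1) = 1/1000
    # (only via a = 1), and P(b = k) = (999/1000) * (1/999) = 1/1000 otherwise.
    b = 1 if a == 1 else rng.randint(2, SIDES)
    return a, b

pairs = [roll_pair() for _ in range(100000)]
ones_a = sum(a == 1 for a, _ in pairs)
both_one = sum(a == 1 and b == 1 for a, b in pairs)
# both_one equals ones_a: the "1"s coincide every single time, yet any
# per-die frequency analysis sees two perfectly fair dice.
```

The marginals pass any fairness test; the correlation lives entirely in the joint outcomes, which is exactly where separate-event statistics never look.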

And even worse: if you have 10 dice that *together* show no correlation at
all (which we found out using many, many tries), this doesn't mean that the
combined result of the 10 dice is not correlated with another set of 10
dice. To put it another way: even if you showed that a given set of
macroscopic objects is not correlated internally, they still may not behave
randomly at all on a bigger level, because they are correlated with another
set of objects!

Most scientists seem to completely disregard this, as they think there could
be no correlation between separate macro objects because they decohere too
quickly. But this assumes that our laws are correct when it comes to
describing decoherence, and it also assumes that decoherence means that there
is NO correlation anymore (as opposed to no definite/precise correlation).

And we have very solid data that there is large-scale correlation (psi -
like telepathy and extremely unusual coincidences - or photosynthesis).
Also, there is no reason to a priori assume that there could not be
correlation between distant events (unless you have a dogmatically classical
worldview) - correlation which would be inherently hard to measure.

Using two assumptions we can

Re: Simple proof that our intelligence transcends that of computers

2012-08-23 Thread benjayk


Jason Resch-2 wrote:
 
 
 
 On Aug 22, 2012, at 1:57 PM, benjayk benjamin.jaku...@googlemail.com  
 wrote:
 


 Jason Resch-2 wrote:

 On Wed, Aug 22, 2012 at 1:07 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:



 Jason Resch-2 wrote:

 On Wed, Aug 22, 2012 at 10:48 AM, benjayk
 benjamin.jaku...@googlemail.comwrote:



 Bruno Marchal wrote:


 Imagine a computer without an output. Now, if we look at what  
 the
 computer
 is doing, we can not infer what it is actually doing in terms of
 high-level
 activity, because this is just defined at the output/input. For
 example, no
 video exists in the computer - the data of the video could be  
 other
 data as
 well. We would indeed just find computation.
 At the level of the chip, notions like definition, proving,
 inductive
 interference don't exist. And if we believe the church-turing
 thesis, they
 can't exist in any computation (since all are equivalent to a
 computation of
 a turing computer, which doesn't have those notions), they  
 would be
 merely
 labels that we use in our programming language.

 All computers are equivalent with respect to computability. This
 does
 not entail that all computers are equivalent to respect of
 provability. Indeed the PA machines proves much more than the RA
 machines. The ZF machine proves much more than the PA machines.  
 But
 they do prove in the operational meaning of the term. They  
 actually
 give proof of statements. Like you can say that a computer can  
 play
 chess.
 Computability is closed for the diagonal procedure, but not
 provability, game, definability, etc.

 OK, this makes sense.

 In any case, the problem still exists, though it may not be  
 enough to
 say
 that the answer to the statement is not computable. The original  
 form
 still
 holds (saying solely using a computer).


 For to work, as Godel did, you need to perfectly define the  
 elements in
 the
 sentence using a formal language like mathematics.  English is too
 ambiguous.  If you try perfectly define what you mean by  
 computer, in a
 formal way, you may find that you have trouble coming up with a
 definition
 that includes computers, but does't also include human brains.


 No, this can't work, since the sentence is exactly supposed to  
 express
 something that cannot be precisely defined and show that it is
 intuitively
 true.

 Actually even the most precise definitions do exactly the same at  
 the
 root,
 since there is no such a thing as a fundamentally precise  
 definition. For
 example 0: You might say it is the smallest non-negative integer,  
 but
 this
 begs the question, since integer is meaningless without defining 0  
 first.
 So
 ultimately we just rely on our intuitive fuzzy understanding of 0 as
 nothing, and being one less than one of something (which again is an
 intuitive notion derived from our experience of objects).



 So what is your definition of computer, and what is your
 evidence/reasoning
 that you yourself are not contained in that definition?

 There is no perfect definition of computer. I take computer to mean  
 the
 usual physical computer,
 
 Why not use the notion of a Turing universal machine, which has a  
 rather well defined and widely understood definition?
Because it is an abstract model, not an actual computer. Taking a computer
to be a Turing machine would be like taking a human to be a picture or a
description of a human.
It is a major confusion of level, a confusion between description and
actuality.

Also, if we accept your definition, then a Turing machine can't do anything.
It is a concept. It doesn't actually compute anything any more than a
plan for how to build a car drives.
You can use the concept of a Turing machine to do actual computations based
on the concept, though, just as you can use a plan of how to build a car
to build a car and drive it.


Jason Resch-2 wrote:
 
 since this is all that is required for my argument.

 I (if I take myself to be human) can't be contained in that definition
 because a human is not a computer according to the everyday  
 definition.
 
 A human may be something a computer can perfectly emulate, therefore a  
 human could exist within the definition of a computer.  Computers are  
 very powerful and flexible in what they can do.
That is an assumption that I don't buy into at all.

Actually it can't be true, due to self-observation.
A human that observes its own brain observes something entirely different
from a digital brain observing itself (the former will see flesh and blood
while the latter will see computer chips and wires), so their behaviour will
diverge if they look at their own brains - that is, the digital brain can't
be an exact emulation, because emulation means behavioural equivalence.


Jason Resch-2 wrote:
 
 Short of injecting infinities, true randomness, or halting-type  
 problems, you won't find a process that a computer cannot emulate.
Really? How come we have never emulated anything which isn't already
digital?
What

Re: Simple proof that our intelligence transcends that of computers

2012-08-23 Thread benjayk


Jason Resch-2 wrote:
 
 On Wed, Aug 22, 2012 at 1:52 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
  On Wed, Aug 22, 2012 at 12:59 PM, benjayk
  benjamin.jaku...@googlemail.comwrote:
 
 
 
  Jason Resch-2 wrote:
  
   On Wed, Aug 22, 2012 at 11:49 AM, benjayk
   benjamin.jaku...@googlemail.comwrote:
  
  
  
   John Clark-12 wrote:
   
On Tue, Aug 21, 2012 at 5:33 PM, benjayk
benjamin.jaku...@googlemail.comwrote:
   
I have no difficulty asserting this statement as well. See:
   
   
Benjamin Jakubik cannot consistently assert this sentence is
  true.
   
   
   
Benjamin Jakubik cannot consistently assert the following
 sentence
   without
demonstrating that there is something he can't consistently
 assert
  but
   a
computer can:
   
'Benjamin Jakubik cannot consistently assert this sentence' is
  true.
   
If the sentence is true then Benjamin Jakubik cannot consistently
   assert
this sentence , if the sentence is false then Benjamin Jakubik is
asserting
something that is untrue. Either way Benjamin Jakubik cannot
 assert
  all
true statements without also asserting false contradictory ones.
  That
   is
   a
limitation that both you and me and any computer have.
   The problem is of a more practical/empirical nature. You are right
  that
   from
   a philosophical/analytical standpoint there isn't necessarily any
   difference.
  
   Let's reformulate the question to make it less theoretical and more
   empirical:
   'You won't be able to determine the truth of this statement by
   programming
   a
   computer'
  
   Just try and program a computer that is determining the answer to
 my
   problem
   in any way that relates to its actual content. It is not possible
  because
   the actual content is that whatever you program into the computer
  doesn't
   answer the question, yet when you cease doing it you can observe
 that
  you
   can't succeed and thus that the statement is true.
   It demonstrates to yourself that there are insights you can't get
 out
  of
   programming the computer the right way. To put it another way, it
  shows
   you
   that it is really just obvious that you are beyond the computer,
  because
   you
   are the one programming it.
  
   Computers do only what we instruct them to do (this is how we built
   them),
   if they are not malfunctioning. In this way, we are beyond them.
  
  
   I once played with an artificial life program.  The program
 consisted
  of
   little robots that sought food, and originally had randomly wired
  brains.
Using evolution to adapt the genes that defined the little robot's
   artificial neural network, these robots became better and better at
   gathering food.  But after running the evolution overnight I awoke
 to
  find
   them doing something quite surprising.  Something that neither I,
 nor
  the
   original programmer perhaps ever thought of.
  
   Was this computer only doing what we instructed it to do?  If so,
 why
   would
   I find one of the evolved behaviors so surprising?
  Of course, since this is what computers do. And it is surprising
 because
  we
  don't know what the results of carrying out the instructions we give
 it
  will
  be. I never stated that computers don't do surprising things. They just
  won't
  invent something that is not derived from the axioms/the code we give
  them.
 
 
 
  It is hard to find anything that is not derived from the code of the
  universal dovetailer.
 The universal dovetailer just goes through all computations in the sense
 of
 universal-turing-machine-equivalent-computation. As Bruno mentioned, that
 doesn't even exhaust what computers can do, since they can, for example,
 prove things (and some languages prove some things that other languages
 don't).

 
 It exhausts all the possibilities at the lowest level, which implies
 exhausting all the possibilities for higher levels.
 


Sorry, but that's nonsense. Look at the word break:
at the lowest level it is just one word, yet at the higher level there are
many possibilities for what it could mean.

Exactly the same applies to computations. For every computation there are
infinitely many possibilities for what it could mean (1+1=2 could mean that
you add two apples, or two oranges, or that you add the value of two
registers, or that you increase the value of a flag).
Many very long computations are *relatively* less ambiguous (relative to us),
but they are still ambiguous.

Taking the universal dovetailer, it could really mean everything (or
nothing), just like the sentence You can interpret whatever you want into
this sentence... or like the stuff that monkeys type on typewriters.
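The point can be made concrete with a few lines of Python (an illustrative sketch, not anything from the thread): one and the same bit pattern, produced by one and the same computation, reads as an integer, a float, or pixel data depending entirely on the decoding we bring to it.

```python
import struct

# One fixed bit pattern - the "computation" has already happened.
raw = b"\x00\x00\x80\x3f"

# Three incompatible high-level readings of the very same bytes:
as_int = int.from_bytes(raw, "little")   # a 32-bit integer
as_float = struct.unpack("<f", raw)[0]   # an IEEE-754 float
as_pixels = list(raw)                    # four grayscale pixel values

print(as_int)     # 1065353216
print(as_float)   # 1.0
print(as_pixels)  # [0, 0, 128, 63]
```

Nothing in the bytes prefers one reading over another; the interpretation lives in the decoder, which is the point being argued above.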


Jason Resch-2 wrote:
 
 For example: if you exhausted every possible configuration of atoms, you
 would also exhaust every possible chemical, every possible life form, and
 every possible human.
Only because there is no absolute separation between levels in actual
physical

Re: Simple proof that our intelligence transcends that of computers

2012-08-23 Thread benjayk
 at all.


Jason Resch-2 wrote:
 

 Jason Resch-2 wrote:
 
  Do you believe humans are hyper computers?  If not, then we are just
  special cases of computers.  The particular case can be defined by a
  program, which may be executed on any Turing machine.
 Nope. We are not computers and also not hyper-computers.


 That is a bit like saying we are not X, but we are also not (not X).
Right, reality is not based on binary logic (even though it seems to play an
important role).


Jason Resch-2 wrote:
 Hyper computers are these imagined things that can do everything normal
 computers
 cannot.  So together, there is nothing the two could not be capable of.
  What is this magic that makes a human brain more capable than any
 machine?
  Do you not believe the human brain is fundamentally mechanical?
Nope. I think we will soon realize this, as we undoubtedly see that the brain
is entangled with the rest of the universe. The presence of psi is already
evidence for that.
The notion of entanglement doesn't make sense for machines, since they can
only process information/symbols, but entanglement is not informational.
Also, machines necessarily work in steps (that's how we built them), yet
entanglement is instantaneous. If you have two machines, then they both have
to do a step to know the state of the other one.

And indeed entanglement is somewhat magical, but nevertheless we know it
exists.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34340179.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Simple proof that our intelligence transcends that of computers

2012-08-23 Thread benjayk
 intelligent?
 
 For example, could you show a problem that can a human solve that a
 computer with unlimited memory and time could not?
Say you have a universal Turing machine with the alphabet {0, 1}.
The problem is: Change one of the symbols of this Turing machine to 2.

Given that it is a universal Turing machine, it is supposed to be able to
solve that problem. Yet because it doesn't have access to the right level,
it cannot do it.
It is an example of direct self-manipulation, which Turing machines are not
capable of (with regard to their alphabet in this case).
You could of course create a model of that Turing machine within that Turing
machine and change the alphabet in the model, but since this was not the
problem in question, this is not the right solution.

Or pose the problem manipulate the code of yourself if you are a program;
solve 1+1 if you are human (computer and human meaning what the average
human considers computer and human) to a program written in a Turing
universal programming language without the ability of self-modification. The
best it could do is manipulate a model of its own code (but this wasn't the
problem).
Yet we can simply solve the problem by answering 1+1=2 (since we are human
and not computers in the opinion of the majority).

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34340683.html



Re: Simple proof that our intelligence transcends that of computers

2012-08-23 Thread benjayk

Sorry, I am not going to answer to your whole post, because frankly the
points you make are not very interesting to me.


John Clark-12 wrote:
 
 On Wed, Aug 22, 2012 at 12:49 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 
 
  'You won't be able to determine the truth of this statement by
 programming a computer'

 
 If true then you won't be able to determine the truth of this statement
 PERIOD.
OK, take the sentence:

'Not all sentences have unambiguous truth values - by the way, you won't be
able to determine that this sentence doesn't have an unambiguous truth value
by using a computer'

The same paradox applies, but the statement is clearly practically true
because it has no unambiguous answer.



John Clark-12 wrote:
 
 To put it another way, it shows you that it is really just obvious that
 you are beyond the computer, because you
 are the one programming it.

 
 But it's only a matter of time before computers start programing you
 because computers get twice as smart every 18 months and people do not.
So transistor count and smartness are the same? So if I have 10^100
transistors that compute while(true), then you have something that is
unimaginably much smarter than a human?



John Clark-12 wrote:
 
 Computers do only what we instruct them to do (this is how we built them)
 
 
 That is certainly not true, if it were there would be no point in
 instructing computers about anything.
The definition of a computer is that it precisely carries out the
instructions it is given.


John Clark-12 wrote:
 
  Tell me this, if you instructed a
 computer to find the first even integer greater than 4 that is not the sum
 of two primes greater than 2 and then stop what will the computer do? It
 would take you less than 5 minutes to write such a program so tell me,
 will
 it ever stop?
I don't know. This doesn't relate to whether it carries out the instructions
it is given at all.
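For what it's worth, the five-minute program John Clark describes is easy to sketch (hypothetical Python, written here purely for illustration): it halts only if a counterexample to Goldbach's conjecture exists, so nobody knows whether it ever stops; the `limit` parameter is an addition of mine so the sketch can terminate.

```python
def is_prime(n):
    """Trial-division primality test; fine for a sketch."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    """True if even n is the sum of two primes greater than 2."""
    return any(is_prime(p) and is_prime(n - p) for p in range(3, n // 2 + 1))

def first_counterexample(limit=None):
    """Search for the first even integer > 4 that is not the sum of two
    primes > 2. Loops forever if Goldbach's conjecture is true; pass a
    limit to make the search give up."""
    n = 6
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n
        n += 2
    return None

print(first_counterexample(limit=10000))  # None - no counterexample found
```

Whether `first_counterexample()` with no limit ever returns is exactly the open question; writing the program is trivial, knowing if it halts is not.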

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34340705.html



Re: NewsFlash: Monadic weather today will be cloudy with a chance of thunderstorms

2012-08-22 Thread benjayk


Roger Clough wrote:
 
 Hi benjayk 
 
 In monadic theory, since space does not exist, monads are by definition
 nonlocal, thus all minds in a sense are one
 and can commune with one another as well as with God (the mind behind the
 supreme monad). 
 
 The clarity of intercommunication will of course depend, of course, on the
 sensitivity of the monads, their intelligence,
 and how near (resonant) their partners are, as well as other factors
 such as whether or not its
 a clear monadic weather day.
 
 
I agree. We even have empirical evidence for that with telepathy (and other
psi phenomena).
It seems computers will have a hard time doing any of it, since we
specifically built them to only do what we ordered them to do.
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34333545.html



Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk


Bruno Marchal wrote:
 
 
 On 22 Aug 2012, at 00:26, benjayk wrote:
 


 meekerdb wrote:

 On 8/21/2012 2:52 PM, benjayk wrote:

 meekerdb wrote:
 On 8/21/2012 2:24 PM, benjayk wrote:
 meekerdb wrote:
 This sentence cannot be confirmed to be true by a human being.

 The Computer

 He might be right in saying that (See my response to Saibal).
 But it can't confirm it as well (how could it, since we as  
 humans can't
 confirm it and what he knows about us derives from what we  
 program into
 it?). So still, it is less capable than a human.
 I know it by simple logic - in which I have observed humans to be
 relatively slow and
 error prone.


 regards, The Computer

  Well, that is you imagining to be a computer. But program an actual
 computer that concludes this without it being hard-coded into it.  
 All it
 could do is repeat the opinion you feed it, or disagree with you,
 depending
 on how you program it.

 There is nothing computational that suggests that the statement is  
 true or
 false. Or if you believe it is, please attempt to show how.

 In fact there is a better formulation of the problem: 'The truth- 
 value of
 this statement is not computable.'.
 It is true, but this can't be computed, so obviously no computer can
 reach
 this conclusion without it being fed to it via input (which is  
 something
 external to the computer). Yet we can see that it is true.

 Not really.  You're equivocating on computable as what can be  
 computed
 and what a
 computer does.  You're supposing that a computer cannot have the
 reflexive inference
 capability to see that the statement is true.
 No, I don't suppose that it does. It results from the fact that we  
 get a
 contradiction if the computer could see that the statement is true  
 (since it
 had to compute it, which is all it can do).
 
 A computer can do much more than computing. It can do proving,  
 defining, inductive inference (guessing), and many other things. You  
 might say that all this is, at some lower level, still computation,  

Sorry, but the opposite is the case. To say that computers do proving,
defining, guessing is a confusion of level, since these are interpretations
of computations, or are represented using computations, not the computations
themselves. If we encode a proof using numbers, then this is not the proof
itself, but its representation in numbers. Just as Gödel's proof is not
Gödel's proof just because I say it represents Gödel's proof.
Or just as the word computers doesn't compute anything.
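The claim that the proof is in the decoding, not in the number, can be illustrated with a toy Gödel-style numbering (hypothetical Python; the symbol tables are arbitrary, which is exactly the point): the same number decodes to an equation under one coding and to different symbols under another.

```python
# Two arbitrary codings of symbols as digits; the number itself
# carries no preference between them.
CODE_A = {"=": "1", "+": "2", "s": "3", "0": "4"}
CODE_B = {"!": "1", "^": "2", "s": "3", "0": "4"}

def encode(expr, code):
    """Map each symbol of expr to its digit under the given code."""
    return int("".join(code[ch] for ch in expr))

def decode(number, code):
    """Invert the code: map digits back to symbols."""
    inverse = {v: k for k, v in code.items()}
    return "".join(inverse[d] for d in str(number))

n = encode("s0+s0=ss0", CODE_A)   # "1+1=2" in successor notation
print(n)                  # 342341334
print(decode(n, CODE_A))  # s0+s0=ss0
print(decode(n, CODE_B))  # s0^s0!ss0 - same number, different reading
```

The number 342341334 proves nothing by itself; only the pairing with a fixed decoding makes it a representation of anything.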

Imagine a computer without an output. Now, if we look at what the computer
is doing, we can not infer what it is actually doing in terms of high-level
activity, because this is just defined at the output/input. For example, no
video exists in the computer - the data of the video could be other data as
well. We would indeed just find computation.
At the level of the chip, notions like definition, proving, inductive
inference don't exist. And if we believe the Church-Turing thesis, they
can't exist in any computation (since all are equivalent to a computation of
a Turing computer, which doesn't have those notions); they would be merely
labels that we use in our programming language.

That is the reason I don't buy Turing's thesis: it intends to
reduce all computation to a Turing machine just because it can be
represented using computation. But ultimately a simple machine can't compute
the same as a complex one, because we need a next layer to interpret the
simple computations as complex ones (which is possible). That is, assembler
isn't as powerful as C++, because we need additional layers to retrieve the
same information from the output of the assembler.

You are right that we can confuse the levels in some way, basically because
there is no way to actually completely separate them. But in this case we
can also confuse all symbols and definitions, in effect deconstructing
language. So as long as we rely on precise, non-poetic language, it is wise
to separate levels.



Bruno Marchal wrote:
 
 but then this can be said for us too, and that would be a confusion of  
 level.
Only if we assume we are computational. I don't.



Bruno Marchal wrote:
 
  The fact that a computer is universal for computation does not  
 imply logically that a computer can do only computations. You could  
 say that a brain can only do electrical spiking, or that molecules can  
 only do electron sharing.
You have a point here. Physical computers must do more than computation,
because they must convert abstract information into physical signals (which
don't exist at the level of computation).
But if we really are talking about the abstract aspect of computers, I think
my point is still valid. It can only do computations, because all we defined
it as is in terms of computation.

benjayk

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34333663.html

Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk



meekerdb wrote:
 
 On 8/21/2012 3:26 PM, benjayk wrote:

 meekerdb wrote:
 On 8/21/2012 2:52 PM, benjayk wrote:
 meekerdb wrote:
 On 8/21/2012 2:24 PM, benjayk wrote:
 meekerdb wrote:
 This sentence cannot be confirmed to be true by a human being.

 The Computer

 He might be right in saying that (See my response to Saibal).
 But it can't confirm it as well (how could it, since we as humans
 can't
 confirm it and what he knows about us derives from what we program
 into
 it?). So still, it is less capable than a human.
 I know it by simple logic - in which I have observed humans to be
 relatively slow and
 error prone.


 regards, The Computer

Well, that is you imagining to be a computer. But program an actual
 computer that concludes this without it being hard-coded into it. All
 it
 could do is repeat the opinion you feed it, or disagree with you,
 depending
 on how you program it.

 There is nothing computational that suggests that the statement is true
 or
 false. Or if you believe it is, please attempt to show how.

 In fact there is a better formulation of the problem: 'The truth-value
 of
 this statement is not computable.'.
 It is true, but this can't be computed, so obviously no computer can
 reach
 this conclusion without it being fed to it via input (which is
 something
 external to the computer). Yet we can see that it is true.
 Not really.  You're equivocating on computable as what can be
 computed
 and what a
 computer does.  You're supposing that a computer cannot have the
 reflexive inference
 capability to see that the statement is true.
 No, I don't suppose that it does. It results from the fact that we get a
 contradiction if the computer could see that the statement is true (since
 it
 had to compute it, which is all it can do).


 meekerdb wrote:
   Yet you're also supposing that when we
 see it is true that that is not a computation.
 No. It can't be a computation, since if it were a computation we couldn't
 conclude it is true (as this would be a contradiction, as I showed
 above).
 
 You avoid the contradiction by saying, What *I'm* doing is not
 computation. which you 
 can say because you don't know what you're doing - you're just seeing
 it's true.  If you 
 knew what you were doing you would know you were computing too and you'd
 be in the same 
 contradiction that you suppose the computer is in because computing is
 all it can do.  
 You're implicitly *assuming* you can do something that is not computing to
 avoid the 
 contradiction and thereby prove you can do something beyond computing -
 see the circularity
 
Not really. The fact that I can see it's true proves that I can't be only
doing computation, because by only doing computation (and only allowing
binary logic as the answer) we could never arrive at the fact that the
sentence is true.

A computer would derive that it is false, and thus it is true, and thus it is
false... Or it would derive that it is true and thus that its answer must
be wrong (because its own way of arriving there contradicts the sentence),
so it must be false after all, etc. But it would never arrive at the fact
that the statement is clearly true.
Yet I see that it is clearly true, since a computer could never unambiguously
see it's true (as the last paragraph shows).

We could only hardcode the statement into the computer, but then it just
states it and doesn't confirm it by itself.

You could say that I am beyond the level of the computer and thus can see
something about the computer that the computer can't.

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34333746.html



Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk


Bruno Marchal wrote:
 

 Imagine a computer without an output. Now, if we look at what the  
 computer
 is doing, we can not infer what it is actually doing in terms of  
 high-level
 activity, because this is just defined at the output/input. For  
 example, no
 video exists in the computer - the data of the video could be other  
 data as
 well. We would indeed just find computation.
 At the level of the chip, notions like definition, proving, inductive
 interference don't exist. And if we believe the church-turing  
 thesis, they
 can't exist in any computation (since all are equivalent to a  
 computation of
 a turing computer, which doesn't have those notions), they would be  
 merely
 labels that we use in our programming language.
 
 All computers are equivalent with respect to computability. This does  
 not entail that all computers are equivalent with respect to  
 provability. Indeed the PA machines proves much more than the RA  
 machines. The ZF machine proves much more than the PA machines. But  
 they do prove in the operational meaning of the term. They actually  
 give proof of statements. Like you can say that a computer can play  
 chess.
 Computability is closed for the diagonal procedure, but not  
 provability, game, definability, etc.
 
OK, this makes sense.

In any case, the problem still exists, though it may not be enough to say
that the answer to the statement is not computable. The original form still
holds (the one saying solely using a computer).

Of course one can object to this, too, since it is not possible to solely
use a computer. We always use our brains to interpret the results the
computer gives us.

But its still practically true.
Just do the experiment and try to solve the question by programming a
computer. You will not be able to make sense of the question. As soon as you
cease to try to achieve a solution using the computer you will suddenly
realize the answer is YES since you didn't achieve a solution using the
computer (and this is what the sentence says).

The only way to avoid the problem is to hardcode the fact 'This statement
can't be confirmed to be true by utilizing a computer'=true into the
computer and claim that this a confirmation. But it seems that this is not
what we really mean by confirming, since we could program 'This statement
can't be confirmed to be true by utilizing a computer'=false into the
computer as well. It would just be a belief, not an actual confirmation.


Bruno Marchal wrote:
 
 just because it can be
 represented using computation. But ultimately a simple machine can't  
 compute
 the same as a complex one, because we need a next layer to interpret  
 the
 simple computations as complex ones (which is possible). That is,  
 assembler
 isn't as powerful as C++, because we need additional layers to  
 retrieve the
 same information from the output of the assembler.
 
 That depends how you implement C++. It is not relevant. We might  
 directly translate C++ in the physical layer, and emulate some  
 assembler in the C++.
 But assembler and C++ are computationally equivalent because their  
 programs exhaust the functions computable by a Turing universal machine.
I think this is just a matter of how we define computation. If computation
is defined as what an universal Turing machine does, of course nothing can
be more computationally powerful.

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34335113.html



Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk


Jason Resch-2 wrote:
 
 On Wed, Aug 22, 2012 at 11:49 AM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 John Clark-12 wrote:
 
  On Tue, Aug 21, 2012 at 5:33 PM, benjayk
  benjamin.jaku...@googlemail.comwrote:
 
  I have no difficulty asserting this statement as well. See:
 
 
  Benjamin Jakubik cannot consistently assert this sentence is true.
 
 
 
  Benjamin Jakubik cannot consistently assert the following sentence
 without
  demonstrating that there is something he can't consistently assert but
 a
  computer can:
 
  'Benjamin Jakubik cannot consistently assert this sentence' is true.
 
  If the sentence is true then Benjamin Jakubik cannot consistently
 assert
  this sentence , if the sentence is false then Benjamin Jakubik is
  asserting
  something that is untrue. Either way Benjamin Jakubik cannot assert all
  true statements without also asserting false contradictory ones. That
 is
 a
  limitation that both you and me and any computer have.
 The problem is of a more practical/empirical nature. You are right that
 from
 a philosophical/analytical standpoint there isn't necessarily any
 difference.

 Let's reformulate the question to make it less theoretical and more
 empirical:
 'You won't be able to determine the truth of this statement by
 programming
 a
 computer'

 Just try and program a computer that is determining the answer to my
 problem
 in any way that relates to its actual content. It is not possible because
 the actual content is that whatever you program into the computer doesn't
 answer the question, yet when you cease doing it you can observe that you
 can't succeed and thus that the statement is true.
 It demonstrates to yourself that there are insights you can't get out of
 programming the computer the right way. To put it another way, it shows
 you
 that it is really just obvious that you are beyond the computer, because
 you
 are the one programming it.

 Computers do only what we instruct them to do (this is how we built
 them),
 if they are not malfunctioning. In this way, we are beyond them.

 
 I once played with an artificial life program.  The program consisted of
 little robots that sought food, and originally had randomly wired brains.
  Using evolution to adapt the genes that defined the little robot's
 artificial neural network, these robots became better and better at
 gathering food.  But after running the evolution overnight I awoke to find
 them doing something quite surprising.  Something that neither I, nor the
 original programmer perhaps ever thought of.
 
 Was this computer only doing what we instructed it to do?  If so, why
 would
 I find one of the evolved behaviors so surprising?
Of course, since this is what computers do. And it is surprising because we
don't know what the results of carrying out the instructions we give it will
be. I never stated that computers don't do surprising things. They just won't
invent something that is not derived from the axioms/the code we give them.



Jason Resch-2 wrote:
 

 You might say we only do what we were instructed to do by the laws of
 nature, but this would be merely a metaphor, not an actual fact (the laws
 of
 nature are just our approach of describing the world, not something that
 is
 somehow actually programming us).

 
 That we cannot use our brains to violate physical laws (the true laws, not
 our models or approximations of them) is more than a metaphor.
 
 Regardless of whether or not we are programmed, the atoms in our brain are
 as rigididly controlled as the logic gates of any computer.  The point is
 that physical laws, or logical laws serve only as the most primitive of
 building blocks on which greater complexity may be built.  I think it is
 an
 error to say that because inviolable laws sit at the base of computation
 that we are inherently more capable, because given everything we know, we
 seem to be in the same boat.
I am not sure that this is true. First, no one has yet shown that nature can
be described through a set of fixed laws. Judging from our experience, it
seems all laws are necessarily incomplete.
It is just a dogma of some materialists that the universe precisely follows
laws. I don't see why that would be the case at all, and I see no evidence
for it either.

Secondly, even the laws we have now don't really describe that the atoms in
our brain are rigidly controlled. Rather, quantum mechanical laws just give
us a probability distribution, they don't tell us what actually will happen.
In this sense current physics has already taken the step beyond precise
laws.
Some scientists say that the probability distribution is an actual precise,
deterministic entity, but really this is just pure speculation and we have
no evidence for that.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34335756.html
Sent from the Everything List mailing list archive at Nabble.com.


Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk


Jason Resch-2 wrote:
 
 On Wed, Aug 22, 2012 at 10:48 AM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Bruno Marchal wrote:
 
 
  Imagine a computer without an output. Now, if we look at what the
  computer
  is doing, we can not infer what it is actually doing in terms of
  high-level
  activity, because this is just defined at the output/input. For
  example, no
  video exists in the computer - the data of the video could be other
  data as
  well. We would indeed just find computation.
  At the level of the chip, notions like definition, proving, inductive
  interference don't exist. And if we believe the church-turing
  thesis, they
  can't exist in any computation (since all are equivalent to a
  computation of
  a turing computer, which doesn't have those notions), they would be
  merely
  labels that we use in our programming language.
 
  All computers are equivalent with respect to computability. This does
  not entail that all computers are equivalent to respect of
  provability. Indeed the PA machines proves much more than the RA
  machines. The ZF machine proves much more than the PA machines. But
  they do prove in the operational meaning of the term. They actually
  give proof of statements. Like you can say that a computer can play
  chess.
  Computability is closed for the diagonal procedure, but not
  provability, game, definability, etc.
 
 OK, this makes sense.

 In any case, the problem still exists, though it may not be enough to say
 that the answer to the statement is not computable. The original form
 still
 holds (saying solely using a computer).


 For to work, as Godel did, you need to perfectly define the elements in
 the
 sentence using a formal language like mathematics.  English is too
 ambiguous.  If you try perfectly define what you mean by computer, in a
 formal way, you may find that you have trouble coming up with a definition
 that includes computers, but does't also include human brains.
 
 
No, this can't work, since the sentence is exactly supposed to express
something that cannot be precisely defined and show that it is intuitively
true.

Actually even the most precise definitions do exactly the same at the root,
since there is no such thing as a fundamentally precise definition. Take 0,
for example: you might say it is the smallest non-negative integer, but this
begs the question, since integer is meaningless without defining 0 first. So
ultimately we just rely on our intuitive, fuzzy understanding of 0 as
nothing, and as being one less than one of something (which again is an
intuitive notion derived from our experience of objects).
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34335798.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk


Jason Resch-2 wrote:
 
 On Wed, Aug 22, 2012 at 12:59 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
  On Wed, Aug 22, 2012 at 11:49 AM, benjayk
  benjamin.jaku...@googlemail.comwrote:
 
 
 
  John Clark-12 wrote:
  
   On Tue, Aug 21, 2012 at 5:33 PM, benjayk
   benjamin.jaku...@googlemail.comwrote:
  
   I have no difficulty asserting this statement as well. See:
  
  
   Benjamin Jakubik cannot consistently assert this sentence is
 true.
  
  
  
   Benjamin Jakubik cannot consistently assert the following sentence
  without
   demonstrating that there is something he can't consistently assert
 but
  a
   computer can:
  
   'Benjamin Jakubik cannot consistently assert this sentence' is
 true.
  
   If the sentence is true then Benjamin Jakubik cannot consistently
  assert
   this sentence , if the sentence is false then Benjamin Jakubik is
   asserting
   something that is untrue. Either way Benjamin Jakubik cannot assert
 all
   true statements without also asserting false contradictory ones.
 That
  is
  a
   limitation that both you and me and any computer have.
  The problem is of a more practical/empirical nature. You are right
 that
  from
  a philosophical/analytical standpoint there isn't necessarily any
  difference.
 
  Let's reformulate the question to make it less theoretical and more
  empirical:
  'You won't be able to determine the truth of this statement by
  programming
  a
  computer'
 
  Just try and program a computer that is determining the answer to my
  problem
  in any way that relates to its actual content. It is not possible
 because
  the actual content is that whatever you program into the computer
 doesn't
  answer the question, yet when you cease doing it you can observe that
 you
  can't succeed and thus that the statement is true.
  It demonstrates to yourself that there are insights you can't get out
 of
  programming the computer the right way. To put it another way, it
 shows
  you
  that it is really just obvious that you are beyond the computer,
 because
  you
  are the one programming it.
 
  Computers do only what we instruct them to do (this is how we built
  them),
  if they are not malfunctioning. In this way, we are beyond them.
 
 
  I once played with an artificial life program.  The program consisted
 of
  little robots that sought food, and originally had randomly wired
 brains.
   Using evolution to adapt the genes that defined the little robot's
  artificial neural network, these robots became better and better at
  gathering food.  But after running the evolution overnight I awoke to
 find
  them doing something quite surprising.  Something that neither I, nor
 the
  original programmer perhaps ever thought of.
 
  Was this computer only doing what we instructed it to do?  If so, why
  would
  I find one of the evolved behaviors so surprising?
 Of course, since this is what computers do. And it is suprising because
 we
 don't know what the results of carrying out the instructions we give it
 will
 be. I never stated that computers don't do suprising things. They just
 won't
 invent something that is not derived from the axioms/the code we give
 them.



 It is hard to find anything that is not derived from the code of the
 universal dovetailer.
The universal dovetailer just goes through all computations in the sense of
universal-Turing-machine-equivalent computation. As Bruno mentioned, that
doesn't even exhaust what computers can do, since they can, for example,
prove things (and some languages prove some things that other languages
don't).

Also, the universal dovetailer can't select a computation. So if I write a
program that computes something specific, I do something that the UD doesn't
do.
It is similar to claiming that it is hard to find a text that is not derived
from monkeys bashing on typewriters, just because they will produce every
possible output some day.

Intelligence is not simply blindly going through every possibility but also
encompasses organizing them meaningfully and selecting specific ones and
producing them in a certain order and producing them within a certain time
limit.
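Dovetailing itself is a concrete, simple technique, and the "no selection" point can be seen in a sketch (hypothetical; Python generators stand in for programs, and the 20-step cutoff exists only so the demo terminates):

```python
from itertools import count

def dovetail(programs):
    """Interleave the execution of many (possibly non-halting) programs:
    run program 0 one step; then programs 0,1 one step each; then 0,1,2;
    and so on. Every step of every program is eventually reached, but the
    dovetailer never singles any one computation out.
    """
    started = []   # generators already running
    steps = []     # recorded (program_index, yielded_value) pairs
    for n in count():
        if n < len(programs):
            started.append(programs[n]())  # admit the next program
        for i, gen in enumerate(started):
            steps.append((i, next(gen)))   # one step of each running program
        if len(steps) >= 20:               # cut the demo off
            return steps

# Stand-ins for 'all programs': infinite counters with different strides.
def make_counter(stride):
    def program():
        v = 0
        while True:
            yield v
            v += stride
    return program

trace = dovetail([make_counter(s) for s in (1, 2, 3)])
print(trace[:6])  # -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 2), (2, 0)]
```

The trace shows the interleaving order, which is exactly what selecting, ordering, or prioritizing a specific computation would have to add on top.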


Jason Resch-2 wrote:
 
 Jason Resch-2 wrote:
 
 
  You might say we only do what we were instructed to do by the laws of
  nature, but this would be merely a metaphor, not an actual fact (the
 laws
  of
  nature are just our approach of describing the world, not something
 that
  is
  somehow actually programming us).
 
 
  That we cannot use our brains to violate physical laws (the true laws,
 not
  our models or approximations of them) is more than a metaphor.
 
  Regardless of whether or not we are programmed, the atoms in our brain
 are
  as rigididly controlled as the logic gates of any computer.  The point
 is
  that physical laws, or logical laws serve only as the most primitive of
  building blocks on which greater complexity may be built.  I think it
 is
  an
  error to say that because inviolable laws sit

Re: Simple proof that our intelligence transcends that of computers

2012-08-22 Thread benjayk


Jason Resch-2 wrote:
 
 On Wed, Aug 22, 2012 at 1:07 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
  On Wed, Aug 22, 2012 at 10:48 AM, benjayk
  benjamin.jaku...@googlemail.comwrote:
 
 
 
  Bruno Marchal wrote:
  
  
   Imagine a computer without an output. Now, if we look at what the
   computer
   is doing, we can not infer what it is actually doing in terms of
   high-level
   activity, because this is just defined at the output/input. For
   example, no
   video exists in the computer - the data of the video could be other
   data as
   well. We would indeed just find computation.
   At the level of the chip, notions like definition, proving,
 inductive
   interference don't exist. And if we believe the church-turing
   thesis, they
   can't exist in any computation (since all are equivalent to a
   computation of
   a turing computer, which doesn't have those notions), they would be
   merely
   labels that we use in our programming language.
  
   All computers are equivalent with respect to computability. This
 does
   not entail that all computers are equivalent to respect of
   provability. Indeed the PA machines proves much more than the RA
   machines. The ZF machine proves much more than the PA machines. But
   they do prove in the operational meaning of the term. They actually
   give proof of statements. Like you can say that a computer can play
   chess.
   Computability is closed for the diagonal procedure, but not
   provability, game, definability, etc.
  
  OK, this makes sense.
 
  In any case, the problem still exists, though it may not be enough to
 say
  that the answer to the statement is not computable. The original form
  still
  holds (saying solely using a computer).
 
 
  For to work, as Godel did, you need to perfectly define the elements in
  the
  sentence using a formal language like mathematics.  English is too
  ambiguous.  If you try perfectly define what you mean by computer, in a
  formal way, you may find that you have trouble coming up with a
 definition
  that includes computers, but does't also include human brains.
 
 
 No, this can't work, since the sentence is exactly supposed to express
 something that cannot be precisely defined and show that it is
 intuitively
 true.

 Actually even the most precise definitions do exactly the same at the
 root,
 since there is no such a thing as a fundamentally precise definition. For
 example 0: You might say it is the smallest non-negative integer, but
 this
 begs the question, since integer is meaningless without defining 0 first.
 So
 ultimately we just rely on our intuitive fuzzy understanding of 0 as
 nothing, and being one less then one of something (which again is an
 intuitive notion derived from our experience of objects).


 
 So what is your definition of computer, and what is your
 evidence/reasoning
 that you yourself are not contained in that definition?
 
There is no perfect definition of computer. I take computer to mean the
usual physical computer, since this is all that is required for my argument.

I (if I take myself to be human) can't be contained in that definition,
because a human is not a computer according to the everyday definition.
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34336029.html
Sent from the Everything List mailing list archive at Nabble.com.




Simple proof that our intelligence transcends that of computers

2012-08-21 Thread benjayk

In this post I present an example of a problem that we can (quite easily)
solve, yet a computer can't, even in principle, thus showing that our
intelligence transcends that of a computer. It doesn't necessarily show that
human intelligence transcends computer intelligence, since the human may have
received the answer from something beyond itself (even though I am quite
confident human intelligence does transcend computer intelligence).

It is, in some sense, a variant of the Gödel sentence, yet it more directly
relates to computers, thus avoiding the ambiguities in interpreting the
relevance of Gödel to computer intelligence.

Is the following statement true?
'This statement can't be confirmed to be true solely by utilizing a
computer'
Imagine a computer trying to solve this problem:
If it says yes, it leads to a contradiction, since a computer has been
trying to confirm it, so its answer is wrong.
If it says no, that is, it claims that it CAN be confirmed by a computer,
again leading to a contradiction.

But from this we can derive that a computer cannot correctly answer the
statement, and so cannot solve the problem in question! So the solution to
the problem is YES, yet no computer can really confirm the truth of the
sentence.

Nevertheless it can utter it. A computer can say The following statement is
true: 'This statement can't be confirmed to be true by utilizing a
computer', but when it does this doesn't help to answer the question
whether it is correct about that, since we could just as well program it to
say the opposite.
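The case analysis above can be written out as a toy consistency check (a hypothetical sketch; the function name and the two-valued model are mine, not part of the post's argument):

```python
# Toy model of the case analysis for
# S = "This statement can't be confirmed to be true solely by utilizing
#      a computer."
# A program's fixed verdict about S is itself a computer confirmation
# or denial of S.

def is_consistent(verdict: bool) -> bool:
    """Check whether a computer's fixed verdict about S can be correct."""
    if verdict:
        # The computer confirms S -- but then a computer *has* confirmed S,
        # so S's claim fails and the confirming verdict is wrong.
        s_is_true = False
    else:
        # The computer claims S is computer-confirmable; the case above
        # shows no confirming verdict can be correct, so S holds after
        # all and the denial is wrong.
        s_is_true = True
    return verdict == s_is_true

assert not is_consistent(True)
assert not is_consistent(False)
print("no fixed computer verdict on S is consistent")
```

Of course, running this check is itself a computation; the sketch only restates the two branches of the argument, it does not settle whether a human evaluator escapes the same bind.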

So, yes, our intelligence (whatever we truly are) definitely transcends the
intelligence of a computer and the quest for strong AI or even superhuman AI
seems futile based on that.

This also has relevance for AI development, especially yet-to-come more
powerful AI. We should hardcode the fact Some things cannot be understood
using computers into the computer, so it reminds us of its own limits. This
will help us to use it correctly and not get lost in an illusion of
all-knowing, all-powerful computers (which to an extent is already happening,
as you can see by looking at concepts like the singularity).
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34330236.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-21 Thread benjayk


meekerdb wrote:
 
 This sentence cannot be confirmed to be true by a human being.
 
 The Computer
 

He might be right in saying that (see my response to Saibal).
But it can't confirm it either (how could it, since we as humans can't
confirm it, and what it knows about us derives from what we program into
it?). So still, it is less capable than a human.
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34331679.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-21 Thread benjayk


Stephen P. King wrote:
 
 Dear Benjayk,
 
  Isn't this a form of the same argument that Penrose made?
 
I guess so, yet it seems more specific. At least it was more obvious to me
than the usual arguments against AI. I haven't really read anything by
Penrose, except maybe some excerpts, though.
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34331719.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-21 Thread benjayk


meekerdb wrote:
 
 On 8/21/2012 2:24 PM, benjayk wrote:

 meekerdb wrote:
 This sentence cannot be confirmed to be true by a human being.

 The Computer

 He might be right in saying that (See my response to Saibal).
 But it can't confirm it as well (how could it, since we as humans can't
 confirm it and what he knows about us derives from what we program into
 it?). So still, it is less capable than a human.
 
 I know it by simple logic - in which I have observed humans to be
 relatively slow and 
 error prone.
 
 
 regards, The Computer
 
Well, that is you imagining yourself to be a computer. But program an actual
computer that concludes this without it being hard-coded into it. All it
could do is repeat the opinion you feed it, or disagree with you, depending
on how you program it.

There is nothing computational that suggests that the statement is true or
false. Or if you believe there is, please attempt to show how.

In fact, there is a better formulation of the problem: 'The truth-value of
this statement is not computable.'
It is true, but this can't be computed, so obviously no computer can reach
this conclusion without it being fed to it via input (which is something
external to the computer). Yet we can see that it is true.
-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34331797.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Simple proof that our intelligence transcends that of computers

2012-08-21 Thread benjayk


meekerdb wrote:
 
 On 8/21/2012 2:52 PM, benjayk wrote:

 meekerdb wrote:
 On 8/21/2012 2:24 PM, benjayk wrote:
 meekerdb wrote:
 This sentence cannot be confirmed to be true by a human being.

 The Computer

 He might be right in saying that (See my response to Saibal).
 But it can't confirm it as well (how could it, since we as humans can't
 confirm it and what he knows about us derives from what we program into
 it?). So still, it is less capable than a human.
 I know it by simple logic - in which I have observed humans to be
 relatively slow and
 error prone.


 regards, The Computer

   Well, that is you imagining to be a computer. But program an actual
 computer that concludes this without it being hard-coded into it. All it
 could do is repeat the opinion you feed it, or disagree with you,
 depending
 on how you program it.

 There is nothing computational that suggest that the statement is true or
 false. Or if it you believe it is, please attempt to show how.

 In fact there is a better formulation of the problem: 'The truth-value of
 this statement is not computable.'.
 It is true, but this can't be computed, so obviously no computer can
 reach
 this conclusion without it being fed to it via input (which is something
 external to the computer). Yet we can see that it is true.
 
 Not really.  You're equivocating on computable as what can be computed
 and what a 
 computer does.  You're supposing that a computer cannot have the
 reflexive inference 
 capability to see that the statement is true. 
No, I don't suppose that. It results from the fact that we get a
contradiction if the computer could see that the statement is true (since it
would have had to compute it, which is all it can do).


meekerdb wrote:
 
  Yet you're also supposing that when we 
 see it is true that that is not a computation.
No. It can't be a computation, since if it were a computation we couldn't
conclude it is true (as this would be a contradiction, as I showed above).
Unless you reject binary logic, but I am sure the problem also arises in
other logics. I might try this later.


meekerdb wrote:
 
   As Bruno would say, you are just 
 rejecting COMP and supposing - not demonstrating - that humans can do
 hypercomputation.
I didn't say hypercomputation. Just something beyond computation.

-- 
View this message in context: 
http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34331938.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-12-09 Thread benjayk

Sorry, I am done with this discussion, I am just tired of it.

I actually agree your argument is useful for refuting materialism, but I
still don't think your conclusion follows from just COMP, since you didn't
eliminate COMP + non-platonic-immaterialism.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32945129.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-12-08 Thread benjayk


Bruno Marchal wrote:
 
 
 I can relate with many things you say.
 Indeed I can argue that the universal (Löbian) machine already relate  
 on this, too.
 
 But science get rid only on subjective judgement in publication  
 (ideally), making them universally communicable.
 
 But considering the subjective influence themselves, science prohibit  
 them only by bad habits, ignorance, since about theology has been  
 abandoned to or stolen by the politics (523 after C.). it is just a  
 form of (sad) prohibition. It is above all unscientific.
But that's necessary, in some way. If we try to make the very subjective
communicable, we run into the problem of making the uncommunicable
communicable. Either science fails there, or it isn't very good science
(reproducible and clearly presented) anymore. If we start to include
subjective influence, suddenly our research won't be very reproducible and
can't be very clearly presented in an objective way, which are standards for
good science.
I don't think that the scientific community excluded subjective influence
purely because of dogma, but because it is so hard to research that it is
virtually impossible to obtain good results, and so it is quite justifiable
to exclude such research from science (as a first approximation of what
constitutes valid science). It is at most fringe science, like parapsychology.
I think the mistake of many scientist is to act like fringe science (or not
quite science anymore) is not also a valid tool for gaining insight, just
like mysticism. That's just dogma, scientism.

You are right that we can publish subjective things without subjective
judgement, but that's not science as commonly understood, since science
requires much more than that (well-designed experiments, reproducibility,
etc.); it is just a part of science.

In a way, fringe science and non-superstitious mysticism are the continuation
of science; they continue its tradition of skepticism and open-mindedness,
but transcend scientific limitations. It is just a more difficult realm, in
the sense that we have to be more clear, honest, non-dogmatic, careful and
skeptical than in science to really gain useful insights.



Bruno Marchal wrote:
 
 And here, according to the machine's comp theory (AUDA) you might be  
 rather true, but cross what can be communicated without making some  
 non provable assumption clear. Or you should add something like I  
 hope that 
I have no clear assumption, and what I say are just thoughts; I am not
saying they are the truth. I think they are very interesting and possibly
useful thoughts, though.
I am not even hoping that; it is just what I think, and it happens to
include hopeful thoughts - but it is not rooted in hope. I am just not a
person rooted in hope (quite the opposite, actually; I tend to be afraid and
depressed).
I don't really feel like what I say is what would come out of what one could
hope. It is much more promising than anything one could hope for (like
heaven), and is so big that it naturally comes to us to find it very
frightening.

You are right that unfortunately, in our times, it seems better to make clear
at the start that you are not dogmatic about what you say, since it is so
common to assume that you think what you write is true. I often don't do
that because I don't even believe in what I say myself. I really can't find
any thought that I don't doubt almost immediately.

Ultimately every thought and every theory and every assumption is worth
doubting; we just have to learn not to be dependent on our beliefs to
really do that. I don't even think a belief can be true; it can be useful,
that's all, and beliefs that you hold very firmly tend to be of little use.
I treat all these ideas of the conscious singularity as ideas, not as dearly
held beliefs. If it happens it is going to be infinitely unbelievable
anyway.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32934264.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-12-08 Thread benjayk


Bruno Marchal wrote:
 
 
 On 07 Dec 2011, at 18:41, benjayk wrote:
 


 Bruno Marchal wrote:


 On 05 Dec 2011, at 19:03, benjayk wrote:

 Bruno Marchal wrote:

 I am just not arguing at all for what
 your argument(s) seeks to refute.

 I know that. It might be your problem. You have independent  
 reason to
 *believe* in the conclusion of comp. You just seems uncomfortable
 that
 those conclusions can be extracted from comp. It looks like you  
 feel
 like this should force you to accept comp, but I have *never* say  
 so.
 The point is that I can conceive to say YES, at least in theory.
 I am not uncomfortable that those conclusions can be extracted from
 comp,
 they just can't. I pointed out your flaws in your argument over and
 over
 again, and you simply avoid them by stating some assumption that you
 don't
 make explicit in the reasoning (only the computational state can
 matter) and
 then saying it is equivalent to COMP.

 Where do I say that only the computational state can matter?
 Not in the assumption. Where existence of concrete material brain,  
 and
 skillful doctor, and some luck (for the level), etc. does matter, a
 priori.
 I might say something similar to what you say, but I say it only  
 after
 the step 7 and/or 8, which explains the reason why I are led to that
 idea.
 The step 7 and 8 do not really work for what I am saying.
 
 Explain this in detail. Please.
It just doesn't deal with non-platonic-immaterialism, that's all.


Bruno Marchal wrote:
 
 The only work for
 a certain kind of materialism, not for sufficiently magical  
 materialism or
 non-platonic-immaterialism.
 
 It can't work for everything which might make you doubt you will  
 survive a digital substitution qua computation, that is in virtue a  
 machine do the right corresponding computation.
But if your reasoning doesn't work for everything, then the conclusion
doesn't follow. I might doubt that I survive a substitution, but I don't
have to if I don't believe in what you refuted in your argument.
So your conclusion only follows if you believe that only the alternatives
you find relevant can be true. But it's quite unreasonable to assume that.


Bruno Marchal wrote:
 


 Bruno Marchal wrote:

 You didn't refute magical materialism, BTW. You 8 steps assumes
 nothing
 magical is going on, and the MGA argument just refutes physical
 supervenience (not physicality and consciousness are magically
 related).

 I was just saying that I refute comp + consistency of *some* magical
 materialism. I do not refute magical materialism per se, nor the comp
 + sufficiently magical materialism. This is obvious, and that is why
 after step 8 a computationalist can throw such extreme magic away  
 with
 Occam razor. Thermodynamics does not refute the idea that cars are
 pushed by invisible and discrete kangaroos. Artificial magic is
 rarely
 scientifically refutable, nor interesting.
 Maybe here is our most important disagreement. Occam is meant to  
 eliminate
 too complicated possibilities. It is of no use to conclude that nothing
 magical, or rather non-objectifiable, is going on.
 It is not at all artificial. A car pushed by invisible discrete  
 kangaroos
 is a quite complicated possibility, but that everything is driven by
 some
 mysterious non-objective force is a quite simple idea that has been  
 believed
 for many centuries, and also is our actual experience.
 
 I agree.
 This is not jeopardized at all with comp. On the contrary it is shown  
 that all universal machines can see something mysterious and they can  
 realize their respective limitations, and transcend them in variate  
 ways. Of course this is more AUDA than UDA. (Some amount of  
 theoretical computer science is needed, but I can explain or give  
 references).
So we agree. But then your conclusion doesn't follow, since you failed to
eliminate the mystery beyond computations. We are not only related to an
infinity of computations, we are related to an infinite mystery (which
*also* includes an infinity of computations).


Bruno Marchal wrote:
 
 Even your theory needs some fundamental mysterious thing (numbers or
 computations), so you can't just eliminate fundamentally mysterious  
 things
 at the end of your reasoning, otherwise you have to eliminate the  
 very basis
 of your theory.

 It seems you invoke some ad-hoc principle in the end to simply  
 eliminate all
 possibilities that you don't like.
 
 Proving eliminates possibilities by definition. In the frame of some
 assumption.
That's not the problem, you are just avoiding my point. The problem is that
your principle is totally ad hoc. Oh, that's not good, let's just eliminate
that.
As said, you let your favorite mystery survive and eliminate the one you
don't like. You keep the inherent primitive infinite mystery of numbers, but
deny the *inherent/primitive* infinite mystery of matter or the *inherent*
primitive infinite mystery of consciousness, even though you have no
justification for that. You can say you

Re: The consciousness singularity

2011-12-08 Thread benjayk


meekerdb wrote:
 
 On 12/7/2011 8:14 AM, benjayk wrote:
 Tegmark's argument shows only that the brain is essentially classical if
 we
 assume decoherence works the same in natural systems as in our
 artificial
 experiments.  But it seems natural systems have a better ability to
 remain
 coherent, when it would be impossible otherwise (see photosynthesis). So
 it
 seems we can't rely on Tegmark's assumption.
 
 Photosynthesis doesn't require much coherence.
http://www.physorg.com/news/2011-12-evidence-quantum-photosynthesis.html

And Wikipedia says "Studies in the last few years have demonstrated the
existence of functional quantum coherence in photosynthetic protein. [...]
These systems use times to decoherence that are within the timescales
calculated for brain protein."


meekerdb wrote:
 
   Even aside from Tegmark's analysis, it's 
 easy to see that brains should be mostly classical.  There would be great
 evolutionary 
 disadvantage to have a brain that was in a coherent superposition when it
 needed to inform 
 actions in a mostly classical world using a mostly classical body.
What if the classical world is just a simplified world, an epistemological
model that helps us survive well in the world of infinite quantum possibility
(which is extremely hard to survive in without it)? It may be that quantum
processes are of great importance everywhere in nature, and it is precisely
our consciousness's capability to make simple models that makes it appear
classical.
We have more and more evidence of that, as we discover quantum coherence in
plants and many phenomena that are virtually impossible to explain in terms
of classical physics (paranormal phenomena).

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32934592.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: The consciousness singularity

2011-12-08 Thread benjayk


Bruno Marchal wrote:
 
 
 On 07 Dec 2011, at 18:41, benjayk wrote:

 You smuggled in your own opinion through the backdoor (only my  
 favorite
 mystery is acceptable).
 
 This is only a negative ad hominem insult. Frankly I prefer your  
 enthusiast tone of your earlier posts.
 
I am not insulting you, I am just stating what you did. You invoke
Occam's razor, which actually has nothing to do with eliminating
complicated theories (since "it is just mysterious" is not complicated at
all), and is really your opinion of what alternatives are acceptable.
You eliminate the primary mystery of matter and/or consciousness, but
arbitrarily keep the mystery of computations.


Bruno Marchal wrote:
 
 Quentin and Brent(*), and myself, have patiently debunked your  
 refutation. You might just ask for explanation if you still miss the  
 point.
Sorry, you are patiently avoiding my point and claim to have debunked it.
That's a bit unfair.


Bruno Marchal wrote:
 With Occam, we can't eliminate the mystery.
 
 Occam eliminates only the ad hoc hypothesis used for making a theory  
 wrong. Occam eliminates the collapse of the wave packet,  for example,  
 because the collapse is made only to make QM false when applied to the  
 observers. (To avoid many realities).
 
 Likewise Occam eliminates primitive matter if the appearance of matter  
 can be (or has to be) explained in a conceptual simpler theory. And my  
 point is double:
 
 1) if we assume comp then it has to be the case that arithmetic (or  
 combinator, ...) is the simpler theory. (UDA)
 
 2) This can be verified (making comp testable) by deriving physics  
 from a translation of UDA in the language of a universal number.  
 (AUDA). Then you can compare that physics with the observation  
 inferred physics.
You miss the simplest possibility: that primitive matter/consciousness
doesn't work according to any theory, but according to some more fundamental,
non-theoretical principle.
You can't eliminate that, and your theory can't derive that principle,
either. And no, that is not unreasonable, since the very axioms of math
don't work according to any theory, either.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32934738.html



Re: The consciousness singularity

2011-12-07 Thread benjayk
 intelligence
(which lies in its simplicity). Consciousness does not belong to something
and has no location, so we can't find consciousness in particular matter.
What is the meaning of all of this? Self-meaning. Self-order. Ultimately
leading to ever increasing, boundless insight, creativity and happiness.
It's all we could ever wish for, and unimaginably much more. It provides an
infinite richness of unlimited beauty that is so marvelous that our wildest
imaginations of heaven (or the nerd equivalent, the technological
singularity) are not even a pale shadow of the truth and the real goodness
of our future.
What is the fate of the cosmos? Ever increasing self-order at ever greater
scales and at ever greater efficiency, ever increasing unity, connection,
diversity.
Why can paranormal events not be easily verified scientifically? They aren't
objective phenomena, the objective world is just a small aspect of
consciousness - if we try to objectify them (get rid of subjective
influences, as is done in science) they largely vanish.
If there is already infinite intelligence, why can't we really find it? It
is not to be found in the objective world, and so it is hard for us as
beings fixated on objects and external circumstance to get in touch with it.
It will come naturally to us as we get more in touch with the reality of us
being infinite consciousness.
Is there life after death? The life of consciousness is already eternal.
Individuals are only different expressions of consciousness, not separate
beings.
Why is there accelerating development? Because it lies in the nature of
self-organizing universal intelligence to self-organize to self-organize
better.
How can the human problems be solved? They needn't be, consciousness takes
care of itself, and as soon as we see that, the apparent problems become
irrelevant. It is not luck that we survived this far; consciousness
self-regulates to make sure important intelligent structures survive.
If subjectivity is primary, why can't we simply transcend all physical
limits and make ourselves happy? The universe doesn't care about individual
transcendence or happiness, it needs physical limits to help order itself in
a consistent (non-dreamy) way, until it learns to transcend the limits (which
requires, among other things, universal cooperative behaviour among humans).
What is our individual part in all of this? Naturally learn to recognize
that we as individuals are just a part of the whole that we really are, and
through this, learn to finally relax into our true infinite consciousness
and be really free. It isn't so important what we do; things go the way
they do anyway.

benjayk

-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32929793.html



Re: The consciousness singularity

2011-12-07 Thread benjayk


Bruno Marchal wrote:
 
 
 On 05 Dec 2011, at 19:03, benjayk wrote:

 Bruno Marchal wrote:

 I am just not arguing at all for what
 your argument(s) seeks to refute.

 I know that. It might be your problem. You have independent reason to
 *believe* in the conclusion of comp. You just seem uncomfortable
 that
 those conclusions can be extracted from comp. It looks like you feel
 like this should force you to accept comp, but I have *never* said so.
 The point is that I can conceive to say YES, at least in theory.
 I am not uncomfortable that those conclusions can be extracted from  
 comp,
 they just can't. I pointed out the flaws in your argument over and
 over
 again, and you simply avoid them by stating some assumption that you  
 don't
 make explicit in the reasoning (only the computational state can  
 matter) and
 then saying it is equivalent to COMP.
 
 Where do I say that only the computational state can matter?
 Not in the assumption. Whereas the existence of a concrete material brain, and
 a skillful doctor, and some luck (for the level), etc. do matter, a
 priori.
 I might say something similar to what you say, but I say it only after  
 the step 7 and/or 8, which explains the reason why I am led to that
 idea.
Steps 7 and 8 do not really work for what I am saying. They only work for
a certain kind of materialism, not for sufficiently magical materialism or
non-platonic-immaterialism.


Bruno Marchal wrote:
 
 You didn't refute magical materialism, BTW. Your 8 steps assume
 nothing
 magical is going on, and the MGA argument just refutes physical
 supervenience (not that physicality and consciousness are magically
 related).
 
 I was just saying that I refute comp + consistency of *some* magical  
 materialism. I do not refute magical materialism per se, nor the comp  
 + sufficiently magical materialism. This is obvious, and that is why  
 after step 8 a computationalist can throw such extreme magic away with  
 Occam razor. Thermodynamics does not refute the idea that cars are
 pushed by invisible and discrete kangaroos. Artificial magic is rarely
 scientifically refutable, nor interesting.
Maybe here is our most important disagreement. Occam is meant to eliminate
too complicated possibilities. It is of no use to conclude that nothing
magical, or rather non-objectifiable, is going on.
It is not at all artificial. A car pushed by invisible discrete kangaroos
is a quite complicated possibility, but that everything is driven by some
mysterious non-objective force is a quite simple idea that has been believed
for many centuries, and also is our actual experience. 
Even your theory needs some fundamental mysterious thing (numbers or
computations), so you can't just eliminate fundamentally mysterious things
at the end of your reasoning, otherwise you have to eliminate the very basis
of your theory.

It seems you invoke some ad-hoc principle in the end to simply eliminate all
possibilities that you don't like.
You smuggled in your own opinion through the backdoor (only my favorite
mystery is acceptable).

benjayk

-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32930129.html



Re: The consciousness singularity

2011-12-06 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/12/5 benjayk benjamin.jaku...@googlemail.com
 


 Bruno Marchal wrote:
 
 
  On 04 Dec 2011, at 16:39, benjayk wrote:
 
 
 
  Bruno Marchal wrote:
 
  The steps rely on the substitution being perfect, which they will
  never
  be.
 
  That would contradict the digital and correct level assumption.
 
  No. Correctly functioning means good enough to be working, not
  perfect.
 
  Once the level is chosen, it is perfect, by definition of digital.
  Either you miss something or you are playing with words.
 No, you miss something. You choose to define the words so that they fit
 your
  conclusion.
 Wikipedia says A digital system[1] is a data technology that uses
 discrete
 (discontinuous) values.. That does not mean that digital system has no
 other relevant parts that don't work with discrete values, and that may
 matter in the substitution.
 COMP does not say they can't matter.

 
 It does by definition.
 
Definition of what? Correct substitution level? It just says that there is a
working substitution level. It does not say it has to work perfectly, or
that only the right choice of the substitution level matters (indeed, it
obviously matters whether it is instantiated correctly).


Quentin Anciaux-2 wrote:
 
  The only thing that matters is digitalness... the
 fact that you run it on your pingpong ball computer doesn't matter.
 
It does matter. If you run computations on a pingpong-ball computer that
interacts with the environment, it will be useless (because the computations
are too slow to use the input and give useful output). And our brain/body
interacts with the environment by definition of what a brain/body is.
Or, if your computer runs the expected computations, but fails 99.999%
of the time, it is also of no use. 
Or if your computer runs the expected computations, but doesn't correctly
convert between analog and digital values: say, for example, you give it a
sound "Woooshhh..." that is represented as data XYZ and then transformed by
the computation C, which gives the digital output ABC, which is sent to your
screen - it will be useless.
We always need input/output, otherwise our brain can't interact with its
environment, making it useless.

COMP does not say only the digitalness matters. It says digital
substitution, but it does not say that only the digitalness of the
substitution matters. As said, digital means using discrete values, not
something where everything but its discrete values does not matter (whatever
that would even mean, since we can't even absolutely differentiate
between discrete values and their physical analog instantiation).
Also, we assume that the doctor correctly implements the computations, and in
that implementation it may matter whether his implementation takes care of the
non-computational aspect of the implementation.
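As an aside, the point that a discrete description abstracts away its physical instantiation can be made concrete with a minimal sketch (purely illustrative, not from the thread; the 1.5-volt threshold is an arbitrary assumption): many physically different analog values are read as the same discrete value, which is exactly the sense in which the digital level ignores its substrate.

```python
# Minimal sketch (illustrative only): many distinct "analog" voltages
# are read as the same discrete bit, so the digital description
# deliberately abstracts from how the value is physically instantiated.

def read_bit(voltage: float, threshold: float = 1.5) -> int:
    """Quantize an analog voltage into a discrete 0/1 value."""
    return 1 if voltage >= threshold else 0

# Physically different instantiations, identical digital value:
print(read_bit(2.7))  # 1
print(read_bit(3.1))  # 1
print(read_bit(0.4))  # 0
```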

If we take COMP to mean only the discrete values and their computations can
matter, then we already state the conclusion, since discrete values and their
computations are not physical, but abstract notions, so materialism (and
non-platonic-immaterialism) are excluded at the beginning.
But in this case the doctor can not possibly make a mistake (since the
physical instantiation can't matter, and so can't be wrong), but this means
that it doesn't matter at all what is being substituted and how.
That is a reductio ad absurdum of this interpretation of COMP, since it
obviously does matter whether we substitute our brain with a peanut or a
working device.

I don't get why it is not valid to show that the assumption is absurd in order
to refute the reasoning. You can't say "assuming [the latter form of] COMP" if
that assumption is absurd (well, you can, but then your reasoning is just as
absurd).


Quentin Anciaux-2 wrote:
 
 Bruno Marchal wrote:
 
  A digital
  computer is not defined to be always working, and a correct
  substitution is
  one where the computer works good enough, not perfectly.
 
  You miss the notion of level, and are splitting the hair, it seems to
  me.
 Am I splitting hairs if I am pointing out the most essential flaw in
 the
 argument?
 I don't miss the notion of level. Correct substitution level means
 working
 substitution level, nowhere does it say it works perfectly.
 
 If there is a substitution level, then it is perfect by definition of
 substitution level. If it is not perfect, either it is not the correct
 substitution level or there are none.
Nowhere in COMP is substitution level defined as a level that works
perfectly. It works well enough for us to subjectively stay the same person.

If you insist COMP means there is a perfect substitution level, we get the
same problem as above (perfect substitution is not possible physically -
just according to the COMP conclusion - so we can't substitute correctly,
or any arbitrary substitution has no effect, which is absurd), and even if a
perfect substitution level existed, it would have to be correctly
implemented, which may include a non-computational aspect.


Quentin

Re: The consciousness singularity

2011-12-06 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/12/6 benjayk benjamin.jaku...@googlemail.com
 


 Quentin Anciaux-2 wrote:
 
  2011/12/5 benjayk benjamin.jaku...@googlemail.com
 
 
 
  Bruno Marchal wrote:
  
  
   On 04 Dec 2011, at 16:39, benjayk wrote:
  
  
  
   Bruno Marchal wrote:
  
   The steps rely on the substitution being perfect, which they
 will
   never
   be.
  
   That would contradict the digital and correct level assumption.
  
   No. Correctly functioning means good enough to be working, not
   perfect.
  
   Once the level is chosen, it is perfect, by definition of digital.
   Either you miss something or you are playing with words.
  No, you miss something. You choose to define the words so that they
 fit
  your
   conclusion.
  Wikipedia says A digital system[1] is a data technology that uses
  discrete
  (discontinuous) values.. That does not mean that digital system has
 no
  other relevant parts that don't work with discrete values, and that
 may
  matter in the substitution.
  COMP does not say they can't matter.
 
 
  It does by definition.
 
 Definition of what? Correct substitution level?
 
 
 If you are turing emulable *then* there exists a *perfect* substitution
  level *or* the premise you are turing emulable is false.
There exists no premise "you are turing emulable".
COMP as defined by Bruno in his UDA says that we can be substituted by a
correct digital substitution (let's call that COMP1). That doesn't mean that
we have to be perfectly turing emulable. You can substitute a heart with an
artificial heart, but that doesn't mean the artificial heart works exactly
like the biological heart.
Like Bruno, you assume the conclusion in addition to COMP1.

If we assume at the start that we are in a turing emulable state (let's call
it COMP2), we don't have to derive that this means that we can't be material
(and thus the world we are in can't be fully material also), since a turing
emulable state is per definition a state of an abstract machine, not of a
physical system.

But then the reasoning is not deriving anything. At most, it explains the
hypothesis. I am not saying it is not good at this - Bruno's steps explain
well what it would mean if we are in an emulable state - but then Bruno's
argument is just not what Bruno claims it is (if we say yes to a digital
substitution, his conclusion follows).


Quentin Anciaux-2 wrote:
 
 , it will be useless (because the computations
 are too slow to use the input and give useful output). And the brain/body
 of
 us interacts with the environment per definition of what a brain/body is.
 Or, if your computer runs the expected computations, but fails 99.999%
 of the time, it is also of no use.
 Or if your computer runs the expected computations, but doesn't correctly
 transform analog and digital values. Say, for example you give it a sound
 Woooshhh... that is represented as data XYZ and then is transformed by
 the
 computation C which gives the digital output ABC, which is sent to your
 screen, it will be useless.
 We always need input/output, otherwise our brain can't interact with its
 environment, making it useless.

 COMP does not say only the digitalness matters.
 
 
 Yes it says... Computationalism is the theory that you can be
 run/simulated
 on a digital computer.
Even if it does (it is not exactly COMP as defined by Bruno, because it
doesn't state that we ourselves can be run on a computer, just that our body
can be substituted): A digital computer consists not only of the turing
emulable states it works with. It does way more than that, since it is a
physical object and has to have some parts that transform the states (which
work with analog means like voltage), and receive (analog) input and output.
And because of that, we can't assume that it only matters that the
computations are being done; it may matter how the computations are done
and how they are interfaced with the environment.
One could define computer more narrowly to exclude input and output, but in
this case a substitution is impossible, because without input and output a
brain or body can't work.
Only digital input and output doesn't work, because (even according to
Bruno's conclusion) the physical world is not purely digital, so a digital
input and output is of no use.
And if we even grant that the external world can mysteriously give the right
digital input and do something with the output, then we create an additional
mysterious non-computational force that matters to what happens (because it
determines whether the digital brain receives the right input and output).
But according to Bruno's conclusion this can't be, as we are supposedly *only*
related to computations.
One could argue that this outside could be infinite sheafs of computations,
but they don't give an output to the brain, so this doesn't seem to work,
either. The only way they could give an output is if they have something else
to determine what output to give, for example a distribution on the sheat

Re: The consciousness singularity

2011-12-05 Thread benjayk


Bruno Marchal wrote:
 
 
 On 04 Dec 2011, at 16:39, benjayk wrote:
 


 Bruno Marchal wrote:

 The steps rely on the substitution being perfect, which they will
 never
 be.

 That would contradict the digital and correct level assumption.

 No. Correctly functioning means good enough to be working, not  
 perfect.
 
 Once the level is chosen, it is perfect, by definition of digital.
 Either you miss something or you are playing with words.
No, you miss something. You choose to define the words so that they fit your
conclusion.
Wikipedia says "A digital system[1] is a data technology that uses discrete
(discontinuous) values." That does not mean that a digital system has no
other relevant parts that don't work with discrete values, and those may
matter in the substitution.
COMP does not say they can't matter.


Bruno Marchal wrote:
 
 Digital means based on discrete values, not only consisting of  
 discrete
 values (otherwise there could be no digital computers, since they  
 rely on
 non-discrete functioning of their parts).
 
 In which theory. The assumptions are neutral on physics. Here, you are  
 not, so i suspect you work in some non defined theory.
What? We have to rely on some basic agreement about what the words used in the
argument mean, and this happens to be the agreement we use in our language
(digital means based on discrete values). This has little to do with a
specific theory.
If we don't presuppose any physics (not even in a practical sense), we can't
substitute a physical object (our brain), since "physical object" is
undefined, so COMP is meaningless, and in this case this is not a question
of lack of faith in the possibility of a correct substitution.
So if you want to eliminate any practical notion of physics from the
argumentation, you invalidate the COMP assumption, because it would state a
totally undefined thing (substituting a physical object).


Bruno Marchal wrote:
 
 A digital
 computer is not defined to be always working, and a correct  
 substitution is
 one where the computer works good enough, not perfectly.
 
 You miss the notion of level, and are splitting the hair, it seems to  
 me.
Am I splitting hairs if I am pointing out the most essential flaw in the
argument?
I don't miss the notion of level. Correct substitution level means working
substitution level; nowhere does it say it works perfectly. Indeed it can't
work perfectly: as we all plainly observe in the physical world, no device
works perfectly.
You misrepresent the notion of level that is defined in the argument with
your imagination of what a level is supposed to be (the right level as the
perfect instantiation of the right turing emulable states).

It seems you just get defensive because you realize your argument doesn't
work. I see that it is important for you, but if you want to be honest, that
is no good reason to ignore criticism.


Bruno Marchal wrote:
 
 And if you do
 remain relatively invariant, it is only because you choose to define
 yourself in a way that you are still yourself after a certain change  
 in
 experience, but that is just a matter of opinion, and it means that  
 is just
 a matter of opinion whether you survive a substitution - but then we  
 can
 only conclude that we may survive no substitution (if we don't  
 believe YES
 doctor) or we survive every substitution (!) or something inbetween  
 - a
 pretty weak conclusion.
 
 You are playing with words. Sorry, but I get that feeling. Comp would
 have no sense if you were right here, and that contradicts other
 statements you made. You still are unclear whether you criticize comp or
 the validity of the reasoning. You seem a bit wanting to be negative.
I am just being honest. My criticism can be conceived of as a criticism of
comp or of your reasoning, because I argue that either comp is false or the
reasoning is.
So it might be that your reasoning cannot directly be shown false, if you
insist that COMP is meaningless.
You seem to do that above, as you want to eliminate all notions of
physicality, but then we can't substitute a physical brain anymore, so COMP
becomes meaningless.


Bruno Marchal wrote:
 

 Also: How does your reasoning show that we can't survive every  
 substitution?
 
 Nowhere the reasoning shows that. On the contrary, I have very often  
 presented the conclusion partially by saying: if you can survive (in  
 the usual clinical sense) with a concrete digital brain, then you will  
 survive no matter what.
OK. Then your argument refutes COMP. If I survive every substitution, there
can be no correct substitution level, and no non-arbitrary description of my
parts. All levels would be correct and all descriptions correct, but that is
not only absurd, it also makes it impossible to choose the correct one.
But if COMP is false, your conclusion does not follow, obviously.


Bruno Marchal wrote:
 
 but only *if done in the correct non-computational way*,
 
 And that would just contradict directly the comp *assumption*. You are  
 (again

Re: The consciousness singularity

2011-12-04 Thread benjayk
that adhere to a naive form of materialism have to be very dogmatic to keep
that belief for long, so they probably often will not be convinced by any
rational argument at all.
I mean even almost universally accepted modern physics is not compatible
with naive materialism (things are made of spatially defined and non-fuzzy
stuff, like bricks or something).

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32912437.html



Re: The consciousness singularity

2011-12-02 Thread benjayk


Bruno Marchal wrote:
 
 
 On 29 Nov 2011, at 18:44, benjayk wrote:
 


 Bruno Marchal wrote:

 I only say that I do not have a perspective of being a computer.

 If you can add and multiply, or if you can play the Conway game of
 life, then you can understand that you are at least a computer.
 So, then I am a computer or something more capable than a computer? I
 have no
 doubt that this is true.
 
 OK. And comp assumes that we are not more than a computer, concerning  
 our abilities to think, etc. This is what is captured in a quasi  
 operational way by the yes doctor thought experiment. Most people  
 understand that they can survive with an artificial heart, for  
 example, and with comp, the brain is not a privileged organ with  
 respect to such a possible substitution.
If YES doctor means we are just an immaterial abstract computer then there
is nothing to deduce (our experience already is only related to
computations, since we are defined by them).
But if YES doctor just means our bodies work *like* a computer (and thus the
substitution works, and we already know that this is the case to some
extent) then none of the steps works, because they assume we work exactly 100%
like an abstract computer. In actuality we can e.g. never be sure that
teleportation, duplication etc. work as intended, because actual computers
are not totally reliable, and are actually quantum objects, not purely
digital in an abstract sense (I argue in a more detailed way below).
In other words, you are assuming an abstraction of a computer in the
argument, which is already the conclusion.
The steps rely on the substitution being perfect, which they will never
be.

I'm probably making it too complicated, because I can't seem to point out the
simple fallacy. That's why I'm continuing to give examples of why either YES
doctor does not mean what you need it to mean (that we are exactly, and only,
and always an abstract digital computer) or why you can't assume that the
reasoning works.


Bruno Marchal wrote:
 


 Bruno Marchal wrote:

 When I look
 at myself, I see (in the center of my attention) a biological being,
 not a
 computer.

 Biological being are computers. If you feel to be more than a
 computer, then tell me what.
 Biological beings are not computers. Obviously a biological being it  
 is not
 a computer in the sense of physical computer.
 
 I don't understand this. A bacterium is a physical being (in the sense  
 that it has a physical body) and is a computer in the sense that its  
 genetic regulatory system can emulate a universal machine.
Usually computer means a programmable machine, not something that can emulate
a universal machine. It seems you are so hooked on the abstract perspective
of a computer scientist that you don't even see the possibility of the
distinction between an abstract computer and an actual computer.


Bruno Marchal wrote:
 
 It is also not an abstract
 digital computer (even according to COMP it isn't) since a  
 biological being
 is physical and spiritual (meaning related to subjective conscious
 experience beyond physicality and computability).
 
 But all universal machine have a link with something beyond  
 physicality and computability. Truth about computability is beyond the  
 computable. So your point is not valid.
Yes, but then the whole argument does not work, because it deals with
something (actual computers) that even according to your conclusion can't be
purely computational, so you can't assume they work as they should. COMP
just means we work enough like computers to make a substitution possible (we
say YES to a *functionally* correct substitution); it does not mean that
there is any substitution that works perfectly.


Bruno Marchal wrote:
 
 Neither
 can they be derived from it.
 
 Physicality can be derived. And has to be derived (by UDA). Both  
 quanta and qualia. Only the geography cannot be derived, but the  
 physical laws can. You might elaborate why you think they can't.
Frankly, I don't believe in absolute physical laws, so we can't derive them.
They are just locally valid approximate rules, like "all swans are white".


Bruno Marchal wrote:
 

 And no, there is no need for any evidence for some non-turing emulable
 infinity in the brain. We just need non-turing emulable finite stuff  
 in the
 brain, and that's already there.
 
 I thought you were immaterialist. What is that finite stuff which is  
 non Turing emulable?
Matter. It is a form of consciousness that is finite in terms of apparent
size and apparent information content but still not computable, because the
qualia of matter itself cannot be substituted.
I don't believe in primitive matter, but I believe in stuff as a sensation
of stuffiness.


Bruno Marchal wrote:
 
 I really try to understand. Sometimes it seems you argue against comp,  
 and sometimes it seems you argue against the proof that comp entails  
 the Platonist reversal (to be short).
Well, actually I am arguing against both, but relevant to your argument is
just

Re: The consciousness singularity

2011-12-01 Thread benjayk


John Mikes wrote:
 
 Don't let yourself drag into a narrower vision just to be able to agree,
 please. I say openly: I dunno (not Nobel-stuff I admit).
 
I agree wholeheartedly!
That's why I don't like the reasoning. It is very narrow, and pretends to be
a proof (or at least a valid reasoning) of something that can't be
concluded through reason. It is very immodest to just disregard all
criticism of the argument (and to defend that with "you don't know what
you're talking about"), and then claim to be modest by virtue of not taking
the assumptions for granted.
Taking the validity of the reasoning for granted is not much more modest than
taking assumptions for granted, since really the reasoning itself depends on
many unstated assumptions.
In this case, for example: that only materialism or computational
immaterialism can be true, that it is meaningful to say YES to something
that is subjectively not happening, etc.
I don't *know* the reasoning is false, but I can see plainly that it is not
quite as objectively valid as Bruno presents it.

Being able to say I DUNNO! is, in my opinion, one of the most important
steps in really being able to experience reality and ourselves in an
unbiased and clear manner.
As long as we cling to knowledge, we are looking at our ideas of reality and
ourselves, not at reality as it actually is.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32891833.html



Re: The consciousness singularity

2011-11-29 Thread benjayk
 the patient will notice he has
been substituted, that is, he didn't survive a substitution, but a change of
himself - if he survives).

I guess I will abandon the discussion if in your next post you also don't
bother to respond to anything essential I said. Apparently you are
dogmatically insisting that everyone who criticizes your argument doesn't
understand it and is wrong, and therefore you don't actually have to inspect
what they are saying. If this is the case, a discussion is quite futile. Up
to now I just had faith that you know better than that and will sooner
or later give an actual response, but now I am not so sure anymore.


Bruno Marchal wrote:
 
 Either way, our experience doesn't remain invariant, or we have no  
 way to
 state we are being substituted (making COMP meaningless).
 
 This point is not valid. We can say yes for a substitution in  
 advance. Then, in that case, just surviving a fatal brain illness will  
 make the difference.
But you just said that this can't happen, because he himself will
subjectively remain unchanged. His fatal brain illness will still be there,
because we have to include it in the substitution. Otherwise you are not
substituting, you are changing him. And in this case he will survive as
what he changed into (even if this is just a collection of misfiring
transistors). But then we obviously don't know whether he really survives in
any sense of the word, and if so, in what sense (since this depends on how
we changed him).


Bruno Marchal wrote:
 

 How is that not a reductio ad absurdum?
 The only situtation where COMP may be reasonable is if the  
 substitute is
 very similar in a way beyond computational similarity - which we can  
 already
 confirm due to digital implants working.
 
 The apparent success of digital implants confirms that we don't need  
 to go beyond computational similarity.
It doesn't, because the surrounding neurons may make additional connections
to interpret the computations that are happening. This just works as long as
the neurons can make enough new connections to fill the similarity gap.


Bruno Marchal wrote:
 
 This would make COMP work in a quite special case scenario, but  
 wrong in
 general.
 
 It is hard to follow you.
I am not saying anything very complicated. It is only hard to follow because
you are insisting on a theoretical situation which is nonsensical in
reality.
If you do insist that we say YES in the way you would like us to, we either
say YES to your conclusion, or we just say YES to something that doesn't
happen (which doesn't allow any conclusion to be drawn).

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32881450.html



Re: The consciousness singularity

2011-11-25 Thread benjayk


Jason Resch-2 wrote:
 
 On Thu, Nov 24, 2011 at 2:44 PM, benjayk
 benjamin.jaku...@googlemail.comwrote:
 


 Jason Resch-2 wrote:
 
  On Wed, Nov 23, 2011 at 1:17 PM, meekerdb meeke...@verizon.net wrote:
 
  On 11/23/2011 4:27 AM, Jason Resch wrote:
 
  The simulation argument:
 
  http://www.simulation-**argument.com/simulation.html
 http://www.simulation-argument.com/simulation.html
 
  If any civilization in this universe or others has reached the point
  where they choose to explore consciousness (rather than or in
 addition
  to
  exploring their environment) then there are super-intelligences which
  may
  chooses to see what it is like to be you, or any other human, or any
  other
  species.  After they generate this experience, they may integrate its
  memories into the larger super-mind, and therefore there are
  continuations
  where you become one with god.  Alternate post-singularity
  civilizations
  may maintain individuality, in which case, any one person choosing to
  experience another being's life will after experiencing that life
  awaken
  to find themselves in a type of heaven or nirvana offering unlimited
  freedom, from which they can come back to earth or other physical
 worlds
  as
  they choose (via simulation).
 
  Therefore, even for those that don't survive to see the human race
  become
  a trans-humanist, omega-point civilization, and for those that don't
  upload
  their brain, there remain paths to these other realities.   I think
 this
  can address the eternal aging implied by many-worlds: eventually, the
  probability that you survive by other means, e.g., waking up as a
 being
  in
  a post-singularity existence, exceeds the probability of continued
  survival
  through certain paths in the wave function.
 
  Jason
 
 
  Why stop there.  Carrying the argument to it's natural conclusion the
  above has already happened (infinitely many) times and we are now all
 in
  the simulation of the super-intelligent beings who long ago discovered
  that
  nirvana is too boring.
 
  Brent
 
 
 
  Brent,
 
  I agree.  About 10% of all humans who have ever lived are alive today.
   With a silicon-based brain, we could experience things about 1,000,000
  times the rate our biological brains do.  If the humans that uploaded
  themselves spend just 1 day (real time) experiencing other human lives
  that
  is equivalent to 40 human lifetimes worth of experience, and thus 80%
 of
  all human lives experienced would be simulated ones. (After that 1 day)
   This is after just one day, but such a civilization could thrive in
 this
  universe for trillions of years.
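
A quick sanity check of the figures quoted above. The ~100 billion humans
ever born and the ~68.5-year average lifetime are assumed inputs (they are
implied, not stated, in the quote); the variable names are mine:

```python
# Back-of-envelope check of the quoted numbers: a silicon brain running at
# ~1,000,000x biological speed, experiencing other lives for 1 real day.
SPEEDUP = 1_000_000
LIFETIME_DAYS = 68.5 * 365           # assumed average human lifetime, in days

subjective_days = 1 * SPEEDUP        # 1 real day of uploaded experience
lifetimes = subjective_days / LIFETIME_DAYS
print(round(lifetimes))              # ~40 lifetimes per real day

humans_ever = 100e9                  # assumed: ~100 billion humans ever lived
alive_today = 0.10 * humans_ever     # "about 10% ... are alive today"
simulated = alive_today * 40         # each upload relives ~40 lives
fraction = simulated / (simulated + humans_ever)
print(fraction)                      # 0.8 -> 80% of experienced lives simulated
```

So the quoted "40 human lifetimes" and "80% of all human lives experienced"
are mutually consistent under those assumptions.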
 
 Isn't uploading somewhat superfluous if we are already simulated?

 
 If everyone were to think like that, then nothing would be simulated.  It
 is like deciding not to put on a seat belt when you go in a car because
 you
 believe in other branches you won't get in an accident in the first place.
  The decisions we make affect the relative proportions and frequencies of
 events.
We may already have simulated ourselves an infinite number of times. If we
decide to simulate ourselves over and over again, we will get into an
infinite cycle of getting lost in our simulations over and over again.
When we just stop, we realize there are already infinite simulations of
everything possible going on.
This is just the most natural conclusion (like Brent said).

We don't have to simulate anything, because we can't avoid that everything
is already being simulated. The only reason to simulate something is if the
experience of simulating it is useful (beyond the benefit of transcending
physical limitations - you can do that in dreams as well; just learn lucid
dreaming, which seems to be richer than any virtual reality could be, and
has the benefit that we seemingly don't get lost in and addicted to it, as
with games). And we should only do that while making sure there is a clear
difference between simulation and reality (no universal uploading) -
otherwise we have achieved nothing whatsoever; we'll just join the usual
dreamscape. I am not sure under what circumstances very big and involving
simulations would be useful. It might very well turn out that the main
reason for simulating anything is discovering the relationship of
simulations and "real" reality in general. Getting very involved in a
simulation may be impossible (let alone uploading), since we will inevitably
lose contact with reality (and not just temporarily) quite quickly if we do
this. We already can get dangerously lost in computer games (often a whole
youth is wasted this way), which are comparatively extremely uninvolving
(they are just pictures on a screen and sound; you don't physically feel
anything, and there is a clear separating barrier between you and the game).
In my youth my main activity was playing computer games, and even though I
now seldom play games, and only casually, my unconscious was very
polluted by it for a long time and still is to some extent. I

Re: The consciousness singularity

2011-11-25 Thread benjayk


Bruno Marchal wrote:
 
 So uploading is not necessarily superfluous. It is vain if the  
 abstract goal is immortality, but full of sense if the goal consists  
 in seeing the next soccer cup and your brain is too ill to do it  
 'naturally'.
But as soon as we upload ourselves, we can't make sure we uniquely interact
with our usual physical reality, since an uploaded digital mind could also
be part of a lot of dreamy realities (simulations / virtual worlds) - except
if we assume materialism, which postulates there is an objective physical
world (in which case we have no computational reason to suspect
substitutions will work; we would have to rely on blind faith).
Our brain avoids that by being a structure with a quite unique
instantiation, and a quite clear subjective dividing barrier to virtual
realities (I am not a computer / in a computer).

That's why I don't buy COMP: as soon as we substitute ourselves, we will
inevitably change our subjective relative environment, making the
substitution fail. If we are a computer, we can subjectively interface much
more strongly with all the computers that our computational instantiation is
(or could be) a part of, and interfere with all the simulations that are
hard to distinguish from what goes on in your computer. It's harder to
distinguish yourself from other simulated selves than from other biological
selves, because of the natural biological barriers that we have and that
computers lack. And we can't assume we are able to find the right world we
would like to be in without subjectively developing a brain (which will make
the substitution seem never to have happened).
We can only say YES if we assume there is no self-referential loop between
my instantiation and my environment (my instantiation influences what world
I am in, the world I am in influences my instantiation, etc.). But we
really have to assume such a loop exists if we are already part of the
matrix (since everything in the matrix is connected).
It matters how our computations are instantiated, because of subjective
self-reference.
OK, we could say YES based on the faith that subjective self-reference will
develop a world for the digital brain that is similar to the old world
(though that seems very unlikely to me), but this is not YES qua computatio.

benjayk

-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32876158.html



Re: The consciousness singularity

2011-11-20 Thread benjayk
 realizing
(some part of) the computations, but not the need for a transcendental
reality. This transcendental reality may not generate the whole UD* (making
something other than abstract computations matter), or may generate the
whole UD* with a non-computational measure, or may generate far more
than the UD* (making the computation of only relative importance, because
there is more than just the computational measure).

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32869103.html






Re: The consciousness singularity

2011-11-17 Thread benjayk


Bruno Marchal wrote:
 
 Actually mechanism as such seems to me to be just a
 metaphor, even though it may be trivially true if every computation  
 [can]
 belong to every experience, which appears to be true to me (since
 experiences are inseperably connected as one movement of  
 consciousness).
 
 ?
We always survive from the 1p view, regardless of how we are substituted
(this is also a result of COMP, as far as I am aware).
The question is how we personally feel to survive, and this question has
no mechanistically determinable answer (as 1p experience is not
computable).

The question whether my ego self survives can also not be mechanistically
determined, since it depends on what we identify the local ego with, and
this question cannot be mechanistically determined (as it is a matter of
taste or opinion). If I identify my ego with the computation 1+1=2, then I
can survive in your pocket calculator; if I identify with some vague
particular form of experience, we can't say whether I will survive, because
my identification is too vague for that (I may still say Yes, doctor, just
hoping that some noncomputational component will naturally occur alongside
the substitution).

Therefore it is true that we, from the 1p, are related to all computations
in an uncomputable way; but also from the 3p we are related to all
computations in an uncomputable way, unless we fix the 3p to be purely
computational (which won't help us much in the experiential/physical world,
since here there are no separable computations).
Saying yes does, by the way, not entail that we do that, since our 3p
identification may shift, or be noncomputational, regardless of whether we
expect to survive a substitution (your step 8 leads to the conclusion only
if we assume materialism, which we don't have to do).


Bruno Marchal wrote:
 
 What you call Plantonia, I would simply call the virtual realm, or  
 the dream
 realm (avoiding mathematical connotations).
 
 By Platonia I don't mean anymore than the set of true proposition of  
 arithmetic.
 With mechanism, we need only a tiny effective (computer generable)  
 part of it, which correspond to the UD's work.
If we talk of Platonia, we take a mathematical 3p view, but I am talking
about 1p experience here; that's why calling it Platonia would be
misleading.

Sure, we can take the 3p view that the experience comes out of Platonia, or
comes out of Symbolia (the set of all possible strings) or comes out of
O-tonia (the abstract realm of the letter O) but either way we are then
not talking about the 1p point of view, the realm of experience, which I was
talking about.


Bruno Marchal wrote:
 
 There are probably also infinite layers of virtuality (advanced  
 dreamers of
 the far [potential] future may have heavily nested dreams - dreaming  
 to have
 dreamt to have dreamt ... to have awoken to have awoken and then  
 awaking).
 Ultimately reality in the metaphysical sense encompasses both  
 virtual and
 real.
 
 real is an indexical. It is just virtual seen from inside. From  
 God's view, those have the same nature, although the sharable dreams  
 are more persistent, and can relate to very deep (necessary long)  
 computations.
I agree, I am just calling the more sharable dreams real and the less
sharable ones virtual, in accordance with the every day usage of real.


Bruno Marchal wrote:
 


 Bruno Marchal wrote:

 You are reintroducing a suspect reality selection principle, similar
 to the wave collapse.
 The wave collapse is undoubtably real as a subjective phenomenon, I  
 am not
 saying virtuality is objective.
 It is just a way to order experience. A virtual experience is one  
 from which
 you awake into a more coherent one (without having to die). Virtual
 experience just start out of nowhere, but they also can be  
 (relatively)
 started from normal reality.
 
 ? (not clear for me, sorry).
The last sentence? I mean that a certain virtual experience may already be
experienced right now, but we can relatively start it by leaving our
usual reality, experiencing the virtual experience, and going back. This may
be felt as entering (thus starting the experience) and leaving.
It's like we didn't make a computer game, but we can start to play it.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32863888.html



Re: The consciousness singularity

2011-11-15 Thread benjayk


Bruno Marchal wrote:
 
 
 On 14 Nov 2011, at 18:39, benjayk wrote:
 

 I have a few more ideas to add, considering how this singularity  
 might work
 in practice.

 I think that actually consciousness does not start in a linear  
 fashion in
 our coherent material world, but creates an infinity of semi-coherent
 beginngs all the time (at all levels of consciousness), which might be
 termed virtual experiences, that exist right now. These are  
 experiences
 are more akin to exploring the possibility space than having a  
 consistent
 world (though they have to have a relative consistency, no one wants  
 to
 experience random noise). This would explain the encounters with  
 intelligent
 entities encountered on drug trips (sometimes dreams and  
 meditation), that
 seem very conscious. It seems hard to explain where they could come  
 from in
 coventional terms (future, spririt world, parallel universes,  
 etc...?).
 
 Why not mind subroutine? Living in Platonia, and manifesting through  
 brain's module?
 This is already the case if mechanism is correct.
 
Yes, that could well be the case. Calling it a subroutine is, in my view,
just a mechanistic metaphor. Actually mechanism as such seems to me to be
just a metaphor, even though it may be trivially true if every computation
[can] belong to every experience, which appears to be true to me (since
experiences are inseparably connected as one movement of consciousness).
What you call Platonia, I would simply call the virtual realm, or the dream
realm (avoiding mathematical connotations).


Bruno Marchal wrote:
 
 My
 theory is that they are virtual beings, that really experience, but  
 in them
 consciousness has not yet decided by which real entitiy (like a  
 human) it
 is experienced, in which way the real subjective future will be  
 experienced
 (there already might exist a virtual future, though), when it is  
 experienced
 in reality and how exactly the experience is reflected to outside  
 observers.
 
 The theme of this list is that virtual or possible = real. Real =  
 virtual seen from inside.
Right. Real is relative. Virtual beings are real, but we are more real, in
the sense of more stable and coherent (from the view of someone that
awakened from a virtual being, not necessarily from the point of view of
being in the virtual world - there it might appear that the opposite is the
case).
There are probably also infinite layers of virtuality (advanced dreamers of
the far [potential] future may have heavily nested dreams - dreaming to have
dreamt to have dreamt ... to have awoken to have awoken and then awaking).
Ultimately reality in the metaphysical sense encompasses both virtual and
real.


Bruno Marchal wrote:
 
 You are reintroducing a suspect reality selection principle, similar  
 to the wave collapse.
The wave collapse is undoubtedly real as a subjective phenomenon; I am not
saying virtuality is objective.
It is just a way to order experience. A virtual experience is one from which
you awake into a more coherent one (without having to die). Virtual
experiences just start out of nowhere, but they can also be (relatively)
started from normal reality.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32851629.html



Re: The consciousness singularity

2011-11-14 Thread benjayk


compscicrackpot wrote:
 
 You might enjoy my conception of God, which I think is the only way in
 which God can be said to exist: God exists as the attractor of maximal
 transcendence or the conscious singularity. :)
 
This fits very well with my conception of God.
I don't even think there is a fundamental difference between God and
consciousness; God is just another conceptual angle of looking at
consciousness (with more emphasis on the powerful and grand aspect, which
mainly lies in the subjective future). Consciousness is already a
singularity of infinite self-organization; it just isn't that apparent yet,
as it is still in the embryonic stage of its unfolding in the manifest
world.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32841391.html



Re: The consciousness singularity

2011-11-14 Thread benjayk

I have a few more ideas to add, considering how this singularity might work
in practice.

I think that consciousness actually does not start in a linear fashion in
our coherent material world, but creates an infinity of semi-coherent
beginnings all the time (at all levels of consciousness), which might be
termed virtual experiences, that exist right now. These experiences
are more akin to exploring the possibility space than to having a consistent
world (though they have to have a relative consistency; no one wants to
experience random noise). This would explain the encounters with intelligent
entities on drug trips (sometimes dreams and meditation), which
seem very conscious. It seems hard to explain where they could come from in
conventional terms (future, spirit world, parallel universes, etc.). My
theory is that they are virtual beings, that really experience, but in them
consciousness has not yet decided by which real entity (like a human) it
is experienced, in which way the real subjective future will be experienced
(there already might exist a virtual future, though), when it is experienced
in reality, and how exactly the experience is reflected to outside observers.
They are somehow left in abeyance.
In the future, and partially already in the present, we might download these
experiences and interface them with our normal history. By download, I
mean experiencing them and giving them a context, so they can become actual
in a manner that makes sense in our reality. This can happen in our
imagination, in our dreams, through playing games, reading books, surfing
the internet, and on trips.
As we download the experience, we may infuse it with our
personality/humanness (this is often felt as merging with entities on trips),
which leads to more consistent development in the virtual realm (so that
entities can exist that are stable enough to make clear and consistent
communication possible).
On the other hand, by downloading experiences, we can infuse our realm with
creative new ideas (and the possibility of paranormal events), bringing
this virtual realm to earth. 

If we learn to navigate this virtual realm more efficiently in the future,
it might be immensely powerful. For example, it allows interaction
between physically separated entities.
Or it may allow us to make time jumps (of course not collectively, since
someone has to be there to make the time that we skip). That would allow for
truly awesomely fast subjective development. Imagine you live your life, and
at some point a virtual entity contacts you to die and jump 100 years
into the future (where you get an appropriate body and mind for that future,
of course). Right now we can't jump, because we need everyone on earth to
make the world in normal time work. But if we learn to virtualize ourselves
(navigate the virtual realm), we may, instead of going into the world
ourselves, send copies for some time (that are played by other ones) and
in that way prolong the time until we have to come to earth (or whatever
exists then) again.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32842071.html



Re: Universes

2011-11-12 Thread benjayk


Kim Jones-2 wrote:
 
 Is it possible to have a universe with no laws, including the laws of
 physics? Is not having any laws of physics possible? What could happen
 within such a universe? It seems at least a logical possibility (to me).
 
 Wouldn't this be equivalent to saying either:
 
 
 1. The laws of physics can't be divined or derived in those universes for
 some reason so we only think there no laws
  
 2. The laws of physics change continually in those universes so we can't
 measure them
 
 3. Nothing is possible at all in those universes, but the universes
 nevertheless exist in some sense.
 
 Is this just an empty set or is there more to it?
 
I don't see any evidence that the/any universe follows laws. Laws just
approximate behaviour; they are not what determines behaviour.
Self-organization causes laws, not the other way around.

We see that in the history of physics. All laws turned out to be
approximations and not perfectly accurate. I don't see why this should
change, so sooner or later all laws will turn out to be approximations of
another law, or of a principle that is not a law (self-organization).
Especially considering quantum mechanics, we have to be very bold to state
that the universe follows laws. What we actually see is that laws *don't*
determine the behaviour, since quantum mechanical equations don't describe a
certain behaviour. We don't even have quantum mechanical laws; we just have
a way to make statistical predictions. What kind of law would it be that you
are allowed to smoke weed 50% of the time? That wouldn't really be a law.
One might argue that there is an objective wavefunction that follows quantum
mechanical laws, but that is only an assumption; we can't actually find such
a thing (it seems we can't even define a universal wavefunction for the
universe), so it is just dogmatic to insist it has to exist.
One might also argue that it is a law insofar as it predicts all the
order that there is (meaning which of the described possibilities happens
is totally random), but this has yet to be shown. We know from experiments
that there is a certain distribution that can be quite accurately defined,
but not that it is entirely random in which way the distribution is achieved
(there may be other distributions which are only locally valid and which
cancel out on average).

More realistic is the possibility that physical laws are only relative and
approximate laws, that can sometimes be violated (like in paranormal
events), just like laws in justice. The laws are only a kind of approximate
common denominator of behaviour. I even think small violations are a vital
part of the functioning of the universe (especially in more intelligent
beings) - the more intelligent, the more laws can be violated without going
into a state of confusion (leading to decreased fitness and thus death).

We already have a lot of evidence that human intelligence can transcend
physical laws, it just isn't yet overwhelming enough to convince the
hard-headed materialistic scientific majority. But this will change in the
not-so-distant future, I am pretty sure of that.
It isn't so easy to show that the laws don't universally apply, because
violations are very hard to verify. Up to a certain point, we can always say
maybe the laws work together in a way we don't yet understand (even though
that gets increasingly implausible), since the laws are so damn complex.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Universes-tp32830044p32831527.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-11-11 Thread benjayk


Quentin Anciaux-2 wrote:
 
  What I am describing can be said to be a kind of solipsism; only I
 exist,
  but I being the consciousness that we all share,
 
  I can't make a meaning of that... we do not share a consciousness,
 not
  in
  any definition of that term.
 
 I am not speaking about anything definable. Whatever can be defined is
 just
 a concept. You can't define your way to an understanding of
 consciousness,
 just like you can't define things to find out what ice cream tastes like.
 Trying to define the taste of ice cream is quite a futile endavour - the
 most you could give as a definition is a paraphrase and this by itself
 won't
 make it clear at all what it tastes like.

 That we share a consciousness (or rather, are that one consciousness),
 can
 only be recognized by the consciousness itself (that is, you), not
 inferred
 through some apparent relation of objects or persons (or some description
 or
 definition). It is not hidden that there is just this consciousness now,
 and
 there is nothing else to find. That's why no else can have another
 consciousness. It is nowhere to be found. Just your consciousness can be
 found.

 Why would you believe that I have another consciousness, when you can
 never
 access it, and I am even telling you that I don't have it.
 
 
 Well fine, another zombi on the list then.
But I am not a zombie. I just don't see how consciousness can be owned by
anyone.


Quentin Anciaux-2 wrote:
 
 There is no
 evidence for such an other to your consciousness.

 
 The evidence is that I'm conscious, and I infer that objects that
 interact with me the way I do, must be conscious too, ie: have qualia,
 have
 a 1st person perspective. You say you don't... ok fine.
That inference is fine, but you can't infer that their 1st person
perspective takes place in another consciousness. You can't find another
consciousness, they can't find another consciousness, and there is no
evidence for such a second consciousness. How would you even count
consciousness? I don't see anything to count about it, so how can we say
that there are 2 (or 3 or 4...) of it?


Quentin Anciaux-2 wrote:
 
 If you say you can infer it, then I ask by what means? We can infer
 existence of other objects, but only because we already directly see that
 different objects can exist as content of consciousness. But we never
 ever
 witnessed something like a different consciousness
 
 I witness behavior, and for such behavior to occur, qualia must occur.
Yes, there are many reasons to believe so, but there is no reason to suppose
that these qualia occur in another consciousness, whatever this is
supposed to mean.


Quentin Anciaux-2 wrote:
 
 and so have nothing which
 we could base the claim that there is different consciousness on.

 
 Solipsism is false.
How do you know? It seems obviously true to me when it comes to
consciousness, not persons.


Quentin Anciaux-2 wrote:
 
 It is not even clear to me what other consciousness could even mean.
 
 Someone not being you.
But consciousness is not a someone. It is just experiencing.
You confuse consciousness with persons, or with experience that is particular
to a person.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32825335.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-11-10 Thread benjayk


Spudboy100 wrote:
 
  
 In a message dated 11/9/2011 7:27:48 AM Eastern Standard Time,  
 benjamin.jaku...@googlemail.com writes:
 
 Probably  the one that is most convincing is direct experience. Try
 meditation (my  favorite is just doing nothing while being aware not to
 snooze or think or  search for something to do,etc...), or, if you are a
 bit
 more daring and  very cautious and well informed, psychdelic drugs (eg
 Salvia, Mushrooms,  LSD, DMT) or suspend your belief that you are just a
 person for long enough  (then the reality of unity tends to reveal itself
 spotaneously). If you are  in the right mindset and maybe a bit lucky you 
 can
 experience states in  which it is directly evident that there is
 fundamentally no other, just  this consciousness that you are.
 
 
 I see, Benjamin. But unless one takes these visions as a solipsism, I
 would 
  ask, what does this bring to the table? We humans are primates, and for 
 most of  us primates, we are group animals. We need each other even though
 we 
 irritate  each other. 
What I am describing can be said to be a kind of solipsism; only I exist,
but with I being the consciousness that we all share, not I in the sense of
me as a person (which is usually what is meant when we talk about solipsism).
We need others as an other to our personhood, but not as an other to us as
consciousness (which is what we really are, the person being more like
something we dress ourselves with). Otherness is the one seeing itself from
different perspectives.


Spudboy100 wrote:
 
 At the end of the day, can one bring information, that 
 would not,  logically, be known, otherwise? For instance, that Uncle, 
 Bruno, left a  mathematical puzzle, he worked on, inscribed on page 1273,
 in the 
 1999 edition  of ARS MATHEMATICA, in his old, study--something like this, 
 let us  say? 
You mean in a paranormal way? There are many experimental results that
suggest so (even though their validity is disputed, the criticism is often
not vindicated, in my opinion), and a lot of astounding anecdotes. But it
might not work in the way we expect, in terms of consistency,
controllability and scope.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32818189.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-11-10 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/11/10 benjayk benjamin.jaku...@googlemail.com
 


 Spudboy100 wrote:
 
 
  In a message dated 11/9/2011 7:27:48 AM Eastern Standard Time,
  benjamin.jaku...@googlemail.com writes:
 
  Probably  the one that is most convincing is direct experience. Try
  meditation (my  favorite is just doing nothing while being aware not to
  snooze or think or  search for something to do,etc...), or, if you are
 a
  bit
  more daring and  very cautious and well informed, psychdelic drugs (eg
  Salvia, Mushrooms,  LSD, DMT) or suspend your belief that you are just
 a
  person for long enough  (then the reality of unity tends to reveal
 itself
  spotaneously). If you are  in the right mindset and maybe a bit lucky
 you
  can
  experience states in  which it is directly evident that there is
  fundamentally no other, just  this consciousness that you are.
 
 
  I see, Benjamin. But unless one takes these visions as a solipsism, I
  would
   ask, what does this bring to the table? We humans are primates, and
 for
  most of  us primates, we are group animals. We need each other even
 though
  we
  irritate  each other.
 What I am describing can be said to be a kind of solipsism; only I exist,
 but I being the consciousness that we all share,
 
 I can't make a meaning of that... we do not share a consciousness, not
 in
 any definition of that term.
 
I am not speaking about anything definable. Whatever can be defined is just
a concept. You can't define your way to an understanding of consciousness,
just like you can't define things to find out what ice cream tastes like.
Trying to define the taste of ice cream is quite a futile endeavour - the
most you could give as a definition is a paraphrase, and this by itself won't
make it clear at all what it tastes like.

That we share a consciousness (or rather, are that one consciousness), can
only be recognized by the consciousness itself (that is, you), not inferred
through some apparent relation of objects or persons (or some description or
definition). It is not hidden that there is just this consciousness now, and
there is nothing else to find. That's why no one else can have another
consciousness. It is nowhere to be found. Just your consciousness can be
found.

Why would you believe that I have another consciousness, when you can never
access it, and I am even telling you that I don't have it? There is no
evidence for such an other to your consciousness.
If you say you can infer it, then I ask by what means? We can infer
existence of other objects, but only because we already directly see that
different objects can exist as content of consciousness. But we never ever
witnessed something like a different consciousness and so have nothing which
we could base the claim that there is different consciousness on.
It is not even clear to me what other consciousness could even mean. Could
you attempt to give an explanation? 

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32820987.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The consciousness singularity

2011-11-09 Thread benjayk


Spudboy100 wrote:
 
 To your comment, how do we demonstrate  
 that the Universe is conscious?  There must be some cause and effect, some  
 falsifiable, tests that can be done, perhaps centuries, from now, with
 better  
 equipment.
 
Since we are the universe being conscious of itself, and there is no other
outside of it to confirm it, the only way is to realize it for ourselves. It
is possible to directly realize that we are the universe (or rather the
consciousness that it appears in), an experience that is commonly called
samadhi or cosmic consciousness.
It is just not valid to ask for a falsifiable test for something that is
beyond objective tests and measurements, and beyond falsifiability (just
like 1+1=2 is beyond falsifiability and still valid).
To say that this means it can't be true is just scientism. There is
nothing in science suggesting that it has to be applicable to everything,
and be the sole authority on truth.

It is not true that the only alternative to this is pure faith in something
more or less arbitrary (like in many religions), which is what some
scientists and philosophers seem to suggest. We can directly experience, and
we can rely on intuition, which doesn't exclude skepticism. In fact science
already relies on intuition, like the intuition that the universe follows
laws that can be described, that the scientific method of measuring and
making theories is the appropriate way to find out which laws these are,
etc...

If you want to make it plausible that indeed consciousness is all that is,
and the source of the universe, and inherently meaningful, there are a
number of possibilities.
Probably the one that is most convincing is direct experience. Try
meditation (my favorite is just doing nothing while being aware not to
snooze or think or search for something to do, etc...), or, if you are a bit
more daring and very cautious and well informed, psychedelic drugs (e.g.
Salvia, Mushrooms, LSD, DMT), or suspend your belief that you are just a
person for long enough (then the reality of unity tends to reveal itself
spontaneously). If you are in the right mindset and maybe a bit lucky you can
experience states in which it is directly evident that there is
fundamentally no other, just this consciousness that you are. If you don't
deny your experience (which we unfortunately often do due to cultural
conditioning) it is very convincing evidence. There is just no reason that
the most extraordinary states of consciousness would be states of oneness
with everything if everything wasn't really one, and the experience is often
very powerful and overwhelmingly real (more so than everyday consciousness).
There is also indirect evidence, which may be useful until you can
experience it directly. First, enlightened people. These people, like
historically Buddha and Christ, have had enormous positive cultural influence
and they often report permanent sensations of peace, freedom and clarity. Is
it really likely that this just comes out of a delusion? Why would a
delusion provide liberation?
Secondly, modern physics. In modern physics there aren't really separate
particles, there is just a wavefunction, which suggests that everything is
one. Also, it is not an accident that we search for a unified theory.
Because actually, reality is a unity. That this unity has to be conscious is
clear from seeing that a part is conscious (at least you are), and since it
can't be separated from this unity, the unity is conscious.
Also, even though faith can't be an ultimate answer, ask yourself whether it
couldn't be useful to just make a leap of faith for a while and trust that
reality really is good (not as the opposite of bad, but as inherently
meaningful, geared to give results that will be satisfying). If reality is
good, it makes sense that it works as one for the goodness. And who could be
the one if not all of us? Consider the goodness wager: what is there to
lose if you believe that reality is fundamentally good (without making an
image of what this has to mean, and without attaching to this belief, since
these may have bad consequences)?

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32810552.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-11-09 Thread benjayk
 you don't act like it is something
which you could safely ignore until it becomes obvious by itself (which will
be felt as suffering). By light pressure I mean that we can confront
people with deep things, even if they are not immediately thankful for it
(like daring to question deeply ingrained and cherished beliefs, which are
subtly destructive).

Ultimately, I have no worries about anybody. It might be a very long and
rough ride until they realize it, but it really is nothing compared to the
reward of finally being free (and recognizing it).

benjayk

-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32813776.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-11-08 Thread benjayk


meekerdb wrote:
 
 On 11/7/2011 9:50 AM, benjayk wrote:
 meekerdb wrote:
   
  How great was that?
 I don't know. Being a fetus might be a peaceful experience, or like
 sleep.
 But the point is that it doesn't matter how great the experience was,
 
 So what's your evidence that there is *any* experience of being a fetus.
 
I don't know, it is just a guess. Actually giving evidence that there is any
experience of being XYZ is hard, or even impossible, because there is no
scientific/objective reason for there to be any experience of being a
particular thing, or even any experience at all. Experience is simply beyond
science - which doesn't mean that science can't say anything about
experience at all; there is just always an aspect that is totally beyond
science, and beyond any attempt to analyze or objectify it. I think that the
question of which experiences exist at all is not answerable by science. 
Through science we can just find patterns in experience, which is useful for
building tools and for insight into the nature of experience.

There is no objective evidence that you are conscious, or that I am
conscious, or that a fetus is conscious. It is not measurable, but it is
still there, even if some materialists tend to deny that (which shows how far
we are removed from ourselves and reality - we actually ignore that which is
undoubtedly and obviously true).

benjayk

-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32802791.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-11-08 Thread benjayk


meekerdb wrote:
 
 On 11/7/2011 12:02 PM, benjayk wrote:
 I think we only fear the elimination of personhood because we confuse
 being
 conscious as an ego with being conscious. We somehow think that if we in
 the
 state of feeling to be a seperate individual cease to exist, we as
 conscious
 beings cease to exist, which is simply not true.
 
 Have you ever been unconscious? When you were unconscious, who was
 experiencing 
 unconsciousness? 
I as a person have been unconscious, of course. I as consciousness, no.
Unconsciousness is not really an experience. When we say we were
unconscious, we mean that we lacked an experience that could be assigned to
the time during which we were unconscious, and that we noticed a
discontinuity in experience.

That doesn't mean consciousness ceased to exist, just that it experienced
some inconsistency in experience (I experience falling asleep, and dreaming,
and waking up, but I am not sure how these were connected, exactly; it wasn't
a smooth experience).
So unconsciousness never means that consciousness (the absolute I) was
unconscious. This doesn't even make sense, just like water can't get dry.
When we use (relative) consciousness as something that can be assigned to
people and time, we can say that, relatively speaking, I lacked
consciousness at a certain time,  because there was no content of
consciousness that corresponded to that person at that time.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32802801.html
Sent from the Everything List mailing list archive at Nabble.com.




The consciousness singularity

2011-11-08 Thread benjayk
 orgy of ever increasing bliss and
colourful clarity) if it helps to develop faster (and undoubtedly suffering
makes it very clear that something is going wrong, which is going to happen
a lot of times as long as you are ignorant about what's real and what's
important).

What do you think (or feel) about this idea? Isn't it too good to be
*false*?

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-consciousness-singularity-tp32803353p32803353.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-11-08 Thread benjayk


Bruno Marchal wrote:
 
 I would rather call this consciousness.

 Indeed I agree with Dan that it is quite accurate to say that there  
 is no
 person in the sense that experience is not personal, it doesn't  
 belong to
 anyone (but it is very intimate with itself nontheless).

 I think we only fear the elimination of personhood because we  
 confuse being
 conscious as an ego with being conscious.
 
 I see this as the confusion between the little ego and the higher  
 self. The first one is a person which identifies itself with the body  
 and memories, the second one identifies itself with its source. By  
 doing so, it dissociate himself with every contingent realities.
In my view this confusion is rooted in thinking that the little ego is
actual, rather than a relative identity (like in a roleplay). If taken as
reality it becomes the experiential ego: the sense of personal
responsibility (not a courageous responsibility, but a sense of
responsibility rooted in guilt and authority and dogma), of separateness, of
doership (I am doing something with my body and with my world).
Actually the first one is also a sort of dissociation: the dissociation from
actual experience and Self to an idea of experience and Self. The second one,
by contrast, is association with the timeless and undisturbable peaceful
reality of consciousness, and the freshness of present experience.

Really there is just the source, and whatever else there is, is an
expression of the source and not an other to the source.


Bruno Marchal wrote:
 
 We somehow think that if we in the
 state of feeling to be a seperate individual cease to exist, we as  
 conscious
 beings cease to exist, which is simply not true.
 
 I agree with you. I just call person the conscious being.
Ah, OK. We just have to be careful here that we are extending the use of
person to something which is not normally considered to be a person. But why
not - we can extend the use of words, and in this case I can see the meaning
in that.

Still, we should be aware that this person might indeed be nothing else than
consciousness itself, and has nothing to do with something that is bound by
body, mind, space, time, etc... And it might be useful to realize that
actually we can't find the experiencer apart from the experience. They are
one, even though we can make a relative distinction (the experiencer is what
is beyond *particular* experiences, but not experience as such, which would
be the same as the experiencer).


Bruno Marchal wrote:
 
 It is just a big change of perspective, and we fear that as we fear  
 the
 unknown in general.
 
 Yes. It is the same type of fear than the fear of freedom, and of  
 knowledge. It is also the root of the fear of other people.
  There is also a fear that an understanding of the mystery would make the
 world  
 into a very cold and inhuman place, but this comes from some  
 reductionist idea on the mystery itself.
 Some people also fears that if the other cease to fear the Unknown,  
 they will become non controllable (which is partially true). Some  
 religion insists that we have to fear God, like some parents, and  
 teachers, confuse fear and respect.
Really I think that ultimately fear is not even fear of something in
particular. It is (especially in humans) mostly the reaction to the mere
possibility of threat, which comes with the feeling of there being an other
(which might have bad intentions).
We project that fear on everything, so we fear freedom, but also bondage; we
fear knowledge, but also ignorance; we fear mystery, but also ordinariness;
we fear the bad, but we also fear the good; we fear God, but we also fear
the devil; we fear everything, but also nothingness. No wonder we are
suffering if everything becomes a reason to be fearful. The only solution is
to discover directly that there is *nothing* that could ever threaten what
we really are, so that fear becomes just a tool to sense whether there is an
actually imminent danger, not something that is constantly (whether
obviously or subtly) determining the way we live our lives.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32805417.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-11-07 Thread benjayk


Quentin Anciaux-2 wrote:
 
 
  Quentin Anciaux-2 wrote:
  
   Immortality still means what it means, what you're talking about is
 not
   immortality. If nothing is preserved (no memories) then nothing is
 left
   and
   I don't care.
  But is is not true that nothing is preserved. I already gave an
 example
  that
  even without explicit memory something more essential than memory can
 be
  conserved.
 
 
  No your example is wrong. Taking it to the limit, you never have
 memories,
  because at no point do you remember everything. The point is that you
 can
  remember your own memories.
 
 
  If you don't care, you are just being superficial with regards to what
  you
  are.
 
 
  I don't thing so, what is important to me is me in the event of dying.
 I
  don't care if a not me stays.
 
 OK, you are just insisting on the dogma that all one could be is a me. If
 you presuppose that, than further discussion doesn't lead anywhere. It is
 just that this assumption is not verified through experience.
 
 
 Which/what experience ? Don't say drugs... this comparison is invalid.
 
Fundamentally, every experience. There is no ownership tag in experience
that says: There has to be a me here!. The me is simply a certain mode of
experience, which can be there, but doesn't have to be.
There is a lot of evidence for that. During meditation, flow, or extraordinary
states of consciousness induced by sleep or drugs, it is quite a common
experience that there is experience without a me. Enlightenment consists of
realizing that there is no I (and the realization that there is only
consciousness) in a way that is stable. These people report that there is no
feeling of separation, no sense of doership, no feeling of fundamental
otherness (which make up the I), and still they live quite normally.


Quentin Anciaux-2 wrote:
 
 Actually there
 is just experience, no me that experiences that
 
 
 ???
What's hard to understand about that? Just look at your experience. There is
experiencing, but there is no entity that has this experience. Yes, the
feeling of an I having the experience appears in the experience, but since
this I is just a part of the experience, it can't have it (it just
imagines that it has it). Just like a window can't have a house, and a leg
can't have a body.
If anything, metaphorically speaking, the experience has a me.


Quentin Anciaux-2 wrote:
 
 , apart from the feeling of
 me (which is just another feeling).

 There is no need for a self for consciousness to be there.
 
 
 But it exists... that's what demand explanation, that's what lead to the
 envy of immortality.
It is no big mystery that a self seems to exist. Consciousness experiences
itself through a body and a mind, which is, in terms of superficial things,
the main invariant of human experience. So, as long as consciousness is not
conscious enough to experience the absolute invariant of itself (which is
more subtle than the body/mind), it will identify with this relative
invariant. With this there comes a sense of self (as opposed to other),
since what it identifies itself with is separate from an other (my body is
not your body, my mind is not your mind).
But we can transcend this identity (even though the I can't). If we
directly see ourselves as consciousness itself, the appearance of being a
separate individual, a me, can dissolve. If this process is complete, it
usually comes with a great sense of liberation, freedom and peace (this is
also known as enlightenment, liberation, nirvana, moksha,...). If you don't
believe you are a body that can be hurt and die, a mind that can be ignorant
of the solutions to the most important problems, a person that can lack
love, etc., a great burden is lifted from you. Unfortunately this
realization is rare, since it requires one not to buy into the dominant
collective delusion and the deeply ingrained fear of the death of the
self.


Quentin Anciaux-2 wrote:
 
 Neither
 experientally, nor logically or scientifically.

 
 You say so...
What's your evidence? In experience, the I is merely a mode of experience,
like sleep is, and there are modes of experience where there is no I. There
is no logical contradiction between being conscious and not feeling to be a
separate individual (an I). In science, we have never found any such thing
as an I.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32788734.html
Sent from the Everything List mailing list archive at Nabble.com.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: QTI, Cul de sacs and differentiation

2011-11-07 Thread benjayk


meekerdb wrote:
 
 You picture consciousness as something inherently personal. But you can
 be
 conscious without there being any sense of personhood, or any experience
 related to a particular person (like in meditation). So that assumption
 doesn't seems to be true.

   Also you think that memory has to be conserved in order for the
 experience
 to continue consistently. This is also not true, we can experience things
 that are totally disconnected from all memories we have, yet still it is
 the
 I (not the I) that experiences it. For example on a drug trip, you can
 literally forget every trace of what your life was like, in terms of any
 concretely retrievable memory (you can even forget you are human or an
 animal). So why can't we lose any *concrete* memory after death and
 experience still continues consistently (and if it does you have to
 surive
 in some way - it makes no sense to have a continuous experience while you
 totally die).
 You also don't remember being an infant (probably), yet you were that
 infant
 and are still here.
 Saying that we are the sum of our memory is very simplistic and just
 isn't
 true in terms of how we experience (you remember almost nothing of what
 you
 have experienced).
 
 
 But in what sense did you experience when you were an infant?  You can't
 really see 
 anything until your brain organizes to process the visual signals from
 your eyes.  So your 
 visual experiences were different and limited as a newborn than at a few
 months of age.
Yes, this is probably true. I don't know what it is like to be an infant,
and probably I won't know as long as I am alive.

  
meekerdb wrote:
 
 Nobody remembers how they learned to see (or hear or walk) but that kind
 of memory is 
 essential to having experiences.  I think it is a mistake to think of a
 person as some 
 core soul.  The person grows and is created by interaction of the
 genetic provided body 
 and the environment.  We tend to overlook this because most of the growth
 occurs early in 
 life before we have developed episodic memories
I agree. You actually strengthen my point.


  
meekerdb wrote:
 
  and the inner narrative we call 
 consciousness.
Consciousness is not an inner narrative. Consciousness is the sense of being.
The inner narrative is the sense of personhood. We can be conscious without
an inner narrative, like in meditation.

  
meekerdb wrote:
 

 So if you say it is death, you only refer to a superficial aspect of the
 person, namely their body and explicit memory. Sure, we tend to indentify
 with that, but that doesn't mean that there isn't something much more
 important. The particular person may just be an expression of something
 deeper, which is conserved, and is the real essence of the person, and
 really all beings: Their ability to consciously, consistently experience.
 We tend to find that scary, as it makes us part of something so much
 greater
 that all our attachments, possesions, achievements, memory, beliefs and
 security are hardly worth anything at all, in the big picture. But if
 they
 aren't, what are we then? Since most of us have not yet looked deeper
 into
 ourselves than these things, we feel immensly treatened by the idea that
 this is not at all what is important about us. It (apparently) reduces us
 to
 nothing.
 But isn't it, when we face it from a more open perspective, tremendously
 liberating and exciting? By confronting that, we can free us from all
 these
 superficial baggage like things and relations and identity (freeing
 mentally
 speaking, of course), and see the true greatness of what we are which is
 beyond all of this.
 
 Were you beyond it all when you were a fetus?
We are beyond time, so clearly we were beyond it all at that time. Yet the
fetus is not beyond it all, since it is just a limited object (a quite
amazing object, to be sure). Strictly speaking, I was not a fetus; I
experienced myself as a fetus, which doesn't change what I am. Note that
here I am using I as the absolute I (I-am-ness), not the relative I of
personhood (I versus you). 

  
meekerdb wrote:
 
   How great was that?
I don't know. Being a fetus might be a peaceful experience, or like sleep.
But the point is that it doesn't matter how great the experience was, since
what we are is beyond particular experiences (it is experiencing itself).
Even when I feel absolutely terrible I still am beyond all, I just don't
realize it. The very fact that the experience passes shows that I am beyond
it (clearly when it is over I am beyond it).
But even in very horrible circumstances it seems possible to feel untouched
by them, like the yogis who bear horrible pain without any visible sign of
disturbance.

benjayk

-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32788736.html
Sent from the Everything List mailing list archive at Nabble.com.


Re: QTI, Cul de sacs and differentiation

2011-11-07 Thread benjayk


Bruno Marchal wrote:
 
 But if you realize that there has never been a person to begin with,
 
 But this contradicts immediately my present consciousness feeling. I  
 am currently in the state of wanting to drink water, so I am pretty  
 sure that there exist right now at least one person, which is the one  
 who wants to drink water. I might be able to conceive that such a  
 person is deluded on the *content* of that experience (may be he  
 really want to smoke a cigarette instead), but in that case a person  
 still remains: the one who is deluded.
 
Why does there have to be a person in order for there to be experience? If
there is a feeling of wanting to drink water, this only shows that there is
a feeling of wanting to drink water and the ability to experience it.
But why would that ability to experience be equivalent to personhood? It
rather seems to be something that transcends persons, as it is shared by
different people and can occur in the absence of any experience of
personality, as you yourself have experienced during meditative states.

This might just be a vocabulary issue, but why would one call something that
is beyond body, rational mind, individuality, etc. a person? You might say
that what is most essential to a person is her experience, and here I would
agree, but it seems a step too far to identify person and experience.
I would rather call this consciousness.

Indeed I agree with Dan that it is quite accurate to say that there is no
person, in the sense that experience is not personal; it doesn't belong to
anyone (but it is very intimate with itself nonetheless).

I think we only fear the elimination of personhood because we confuse being
conscious as an ego with being conscious. We somehow think that if we, in
the state of feeling to be a separate individual, cease to exist, we as
conscious beings cease to exist, which is simply not true. Probably we are
just so used to that state of consciousness that we can't conceive of
consciousness in another state than that.
It is just a big change of perspective, and we fear that as we fear the
unknown in general.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32788744.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-11-03 Thread benjayk


Quentin Anciaux-2 wrote:
 
 You picture consciousness as something inherently personal. But you can
 be
 conscious without there being any sense of personhood, or any experience
 related to a particular person (like in meditation). So that assumption
 doesn't seems to be true.

  Also you think that memory has to be conserved in order for the
 experience
 to continue consistently. This is also not true, we can experience things
 that are totally disconnected from all memories we have, yet still it is
 the
 I (not the I) that experiences it. For example on a drug trip, you can
 literally forget every trace of what your life was like, in terms of any
 concretely retrievable memory (you can even forget you are human or an
 animal). So why can't we lose any *concrete* memory after death and
 experience still continues consistently (and if it does you have to
 surive
 in some way - it makes no sense to have a continuous experience while you
 totally die).
 You also don't remember being an infant (probably), yet you were that
 infant
 and are still here.
 Saying that we are the sum of our memory is very simplistic and just
 isn't
 true in terms of how we experience (you remember almost nothing of what
 you
 have experienced).

 So if you say it is death, you only refer to a superficial aspect of the
 person, namely their body and explicit memory. Sure, we tend to indentify
 with that, but that doesn't mean that there isn't something much more
 important. The particular person may just be an expression of something
 deeper, which is conserved, and is the real essence of the person, and
 really all beings: Their ability to consciously, consistently experience.
 We tend to find that scary, as it makes us part of something so much
 greater
 that all our attachments, possesions, achievements, memory, beliefs and
 security are hardly worth anything at all, in the big picture. But if
 they
 aren't, what are we then? Since most of us have not yet looked deeper
 into
 ourselves than these things, we feel immensly treatened by the idea that
 this is not at all what is important about us. It (apparently) reduces us
 to
 nothing.
 But isn't it, when we face it from a more open perspective, tremendously
 liberating and exciting? By confronting that, we can free us from all
 these
 superficial baggage like things and relations and identity (freeing
 mentally
 speaking, of course), and see the true greatness of what we are which is
 beyond all of this. And this is immortal, with death merely being a
 relative
 end, just like sleeping.

 benjayk

 
 Well if immortality is something which do not preseve the person... then
 it
 is death.
For the person. The point is that if I don't consider the person to be what
is most important about me, then I don't die at all. Immortality may be
immortality of I (consciousness), not immortality of I (personality).
It is death for some aspect, but just as you don't call it death when some
cells of you die, there is no need to consider it death when the person you
consider yourself to be right now dies. It is just material death, not death
of what you really are. That can't die, as it is not even subject to time
(even though it can utilize time).


Quentin Anciaux-2 wrote:
 
  If not, what is the difference between your consciousness and
 mine or any other...
There is no difference, as there is no your and mine consciousness.
Consciousness can not be owned, and can not be divided into pieces. There is
just consciousness.
It is very easily experientially confirmable: Do you ever experience
anything other than this consciousness? Can you ever find an owner of
consciousness which is not just another appearance in consciousness? No, so
why would we assume that another consciousness or an owner of consciousness
exists? We
can't infer that other consciousnesses exist by observation of other
people, because we can only infer that other people exist, not that they
have another consciousness. There is no evidence for this at all.

We can speak of your consciousness and my consciousness on a relative level,
meaning one particular expression of consciousness and another particular
expression. But this is a relative distinction, and there are contexts where
this distinction makes little or no sense, like when we die or when we are
in objectless and perceptionless meditation.


Quentin Anciaux-2 wrote:
 
  what is *preserved* ?
Continuity of consciousness.


Quentin Anciaux-2 wrote:
 
 Immortality still means what it means, what you're talking about is not
 immortality. If nothing is preserved (no memories) then nothing is left
 and
 I don't care.
But it is not true that nothing is preserved. I already gave an example that
even without explicit memory something more essential than memory can be
conserved.
If you don't care, you are just being superficial with regards to what you
are.


Quentin Anciaux-2 wrote:
 
 When you take drug and forget... you then remember when the effects
 stop,
 proving you still have

Re: QTI, Cul de sacs and differentiation

2011-11-03 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/11/3 benjayk benjamin.jaku...@googlemail.com
 


 Quentin Anciaux-2 wrote:
 
  You picture consciousness as something inherently personal. But you
 can
  be
  conscious without there being any sense of personhood, or any
 experience
  related to a particular person (like in meditation). So that
 assumption
  doesn't seems to be true.
 
   Also you think that memory has to be conserved in order for the
  experience
  to continue consistently. This is also not true, we can experience
 things
  that are totally disconnected from all memories we have, yet still it
 is
  the
  I (not the I) that experiences it. For example on a drug trip, you
 can
  literally forget every trace of what your life was like, in terms of
 any
  concretely retrievable memory (you can even forget you are human or an
  animal). So why can't we lose any *concrete* memory after death and
  experience still continues consistently (and if it does you have to
  surive
  in some way - it makes no sense to have a continuous experience while
 you
  totally die).
  You also don't remember being an infant (probably), yet you were that
  infant
  and are still here.
  Saying that we are the sum of our memory is very simplistic and just
  isn't
  true in terms of how we experience (you remember almost nothing of
 what
  you
  have experienced).
 
  So if you say it is death, you only refer to a superficial aspect of
 the
  person, namely their body and explicit memory. Sure, we tend to
 indentify
  with that, but that doesn't mean that there isn't something much more
  important. The particular person may just be an expression of
 something
  deeper, which is conserved, and is the real essence of the person, and
  really all beings: Their ability to consciously, consistently
 experience.
  We tend to find that scary, as it makes us part of something so much
  greater
  that all our attachments, possesions, achievements, memory, beliefs
 and
  security are hardly worth anything at all, in the big picture. But if
  they
  aren't, what are we then? Since most of us have not yet looked deeper
  into
  ourselves than these things, we feel immensly treatened by the idea
 that
  this is not at all what is important about us. It (apparently) reduces
 us
  to
  nothing.
  But isn't it, when we face it from a more open perspective,
 tremendously
  liberating and exciting? By confronting that, we can free us from all
  these
  superficial baggage like things and relations and identity (freeing
  mentally
  speaking, of course), and see the true greatness of what we are which
 is
  beyond all of this. And this is immortal, with death merely being a
  relative
  end, just like sleeping.
 
  benjayk
 
 
  Well if immortality is something which do not preseve the person...
 then
  it
  is death.
 For the person. The point is that if I don't consider the person to be
 what
 is most important about me, than I don't die at all. Immortality may be
 immortality of I (consciousness)
 
 
 I don't understand what you mean by consciousness. Without a notion of
 self, it is meaningless.
 
 
 
 , not immortality of I (personality).

 
 There is no soul... so unless what you mean is soul, it is meaningless.
 And
 if you mean soul, I don't believe in soul.
 
 
 It is death for some aspect,
 
 
 For all aspect.
 
 
 but just as you don't call it death when some
 cells of you die,
 
 
 Your comparison is not relevant for the case at hand.
 
 
 there is no need to consider it death when the person you
 consider to be right now dies.
 
 
 Well most of the people do.
 
 
 It is just material death,
 
 
 Death is always material.
 
 
 but not death of
 what you really are.
 
 
 And what it is ? What I really am is me.
 
 
 This can't die,
 
 
 Sure if personhood is erased, it dies.
 
 
 as is not even subject to time (even
 though it can utilize time).

 
 Well unless you have proof about the existence of souls, it is
 meaningless,
 consciousness needs time.
 
 


 Quentin Anciaux-2 wrote:
 
   If not, what is the difference between your consciousness and
  mine or any other...
 There is no difference, as there is no your and mine consciousness.

 
 You don't use consciousness in the commen sense it is used.
 
 
 
 Consciousness can not be owned, and can not be divided into pieces. There
 is
 just consciousness.
 It is very easily experientally confirmable: Do you ever experience
 anything
 other than this consciousness? Can you ever find an owner of
 consciousness,
 which is not just another appearance in consciousness? No, so why would
 we
 assume that another consciousess or an owner of consciousness exists? We
 can't infer that other consciousnesses exist by observation of other
 people, because we can only infer that other people exist, not that they
 have another consciousness. There is no evidence for this at all.

 We can speak of your consciousness and my consciousness on a relative
 level,
 meaning one particular expression

Re: QTI, Cul de sacs and differentiation

2011-11-02 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/11/1 benjayk benjamin.jaku...@googlemail.com
 


 Quentin Anciaux-2 wrote:
 
  2011/10/30 benjayk benjamin.jaku...@googlemail.com
 
 
 
  Quentin Anciaux-2 wrote:
  
   2011/10/30 benjayk benjamin.jaku...@googlemail.com
  
  
  
   Nick Prince-2 wrote:
   
   
This is similar to my speculations in an earlier topic post
   
  
 
 http://groups.google.com/group/everything-list/browse_thread/thread/4514b50b8eb469c3/c49c3aa24c265a4b?lnk=gstq=homomorphic#c49c3aa24c265a4b
where I suggest that  very old or dying brains might
deterorate in a specific way that allows the transition of 1st
  person
experiences from an old to
a young mind i.e. the decaying brain becomes in some way
  homomorphic
to a new young brain which allows an extension of consciousness.
   This is not even required. The decaying brain can become no brain,
 and
   consciousness proceeds from no brain. Of course this means that
 some
   continuity of consciousness needs to be preserved outside of
 brains.
   Theoretically this doesn't even require that structures other than
  brains
   can be conscious, since we know from our experience that even
  when/while
   a
   structure is unconscious it can preserve continuity (we awake from
  deep
   sleep and experience a coherent history).
   The continuity may be preserved simply through similarity of
  structure.
   Like
   our continuity of personhood is preserved through the similarity of
  our
   brains states (even though the brain changes vastly from childhood
  until
   old
   age), continuity of human consciousness may be preserved through
   similarity
   of brains (even though brains have big differences is structure).
  
   So this could even be a materialist sort of non-technological
   immortality.
   It's just that most materialists firmly identify with the person,
 so
  they
   mostly won't care much about it (What's it worth that
 consciousness
   survives, when *I* don't survive.).
   If they like the idea of immortality, they will rather hope for the
   singularity. But impersonal immortality seems more in accord with
 our
   observations than a pipe dream of personal immortality through a
   technological singularity, and also much more elegant (surviving
  through
   forgetting seems much simpler than surviving through acquiring
  abitrarily
   much memory and personal identity).
  
   I wonder why less people consider this possiblity of immortality,
 as
  it
   both
   fits more with our intuition (does it really seem probable that all
   persons
   grow abitrarily old?) and with observation (people do actually die)
  than
   other forms of immortality.
  
  
   Simply because it is just using immortality for meaning death .
   Immortality
   means the  'I' survive... if it's not the case then it is simply
 plain
  old
   death.
  
  OK, I can see that this a possible perspective on that. Indeed most of
  the
  time immortality is used to refer to personal immortality (especially
 in
  the
  west). I agree with materialists there is no good reason to suppose
 that
  this exists.
  Quantum immortality rests on the premise that the supposed
 continuations
  that exist in the MWs of quantum mechanics are lived as real for the
  person
  that dies, while we have no clue how these possibilities are actually
  lived.
  It is much more plausible - and consistent with our experience and
  observation - that the other possibilities are merely dreams,
  imagination,
  or - if more consistent - are lived by other persons (which, for
 example,
  didn't get into the deadly situation in the first place).
 
  On the other hand, I don't see why we would ignore immortality of
  consciousness, considering that the I is just a psychosocial
  construct/illusion anyway. We don't find an actual I anywhere. It
 seems
  very relevant to know that the actual essence of experience can indeed
  survive eternally. Why would I care whether an imagined I
 experiences
  it
  or not?
 
  How would you call this, if not immortality?
 
 
  Death.
 
 You would call eternal existence of consciousness death?
 
 
 What do you mean by consciousness ? I don't care about eternal not
 me... it's the *same* thing as death. When talking about dying, what's
 important is the person who die, if something is left who doesn't know
 that
 it was that person... what does it means that its consciousness still
 exists ? For me, it is just a vocabulary trick to not employ the word
 death
 where what you mean is death.
 
 Immortality means immortality, not death, not resurection.
 
 A person is the sum of her memories, without memories, there is nothing
 left.
 
 
 This seems quite
 strange and narrow to me.

 
 Not to me, just read in a dictionary.
 
 *immortal* (ɪˈmɔːtəl)   —*adj*  1.  not subject to death or decay; having
 perpetual life 2.  having everlasting fame; remembered throughout time 3.
 everlasting; perpetual; constant 4.  of or relating to immortal beings

Re: QTI, Cul de sacs and differentiation

2011-11-01 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/10/30 benjayk benjamin.jaku...@googlemail.com
 


 Quentin Anciaux-2 wrote:
 
  2011/10/30 benjayk benjamin.jaku...@googlemail.com
 
 
 
  Nick Prince-2 wrote:
  
  
   This is similar to my speculations in an earlier topic post
  
 
 http://groups.google.com/group/everything-list/browse_thread/thread/4514b50b8eb469c3/c49c3aa24c265a4b?lnk=gstq=homomorphic#c49c3aa24c265a4b
   where I suggest that  very old or dying brains might
   deterorate in a specific way that allows the transition of 1st
 person
   experiences from an old to
   a young mind i.e. the decaying brain becomes in some way 
 homomorphic
   to a new young brain which allows an extension of consciousness.
  This is not even required. The decaying brain can become no brain, and
  consciousness proceeds from no brain. Of course this means that some
  continuity of consciousness needs to be preserved outside of brains.
  Theoretically this doesn't even require that structures other than
 brains
  can be conscious, since we know from our experience that even
 when/while
  a
  structure is unconscious it can preserve continuity (we awake from
 deep
  sleep and experience a coherent history).
  The continuity may be preserved simply through similarity of
 structure.
  Like
  our continuity of personhood is preserved through the similarity of
 our
  brains states (even though the brain changes vastly from childhood
 until
  old
  age), continuity of human consciousness may be preserved through
  similarity
  of brains (even though brains have big differences is structure).
 
  So this could even be a materialist sort of non-technological
  immortality.
  It's just that most materialists firmly identify with the person, so
 they
  mostly won't care much about it (What's it worth that consciousness
  survives, when *I* don't survive.).
  If they like the idea of immortality, they will rather hope for the
  singularity. But impersonal immortality seems more in accord with our
  observations than a pipe dream of personal immortality through a
  technological singularity, and also much more elegant (surviving
 through
  forgetting seems much simpler than surviving through acquiring
 abitrarily
  much memory and personal identity).
 
  I wonder why less people consider this possiblity of immortality, as
 it
  both
  fits more with our intuition (does it really seem probable that all
  persons
  grow abitrarily old?) and with observation (people do actually die)
 than
  other forms of immortality.
 
 
  Simply because it is just using immortality for meaning death .
  Immortality
  means the  'I' survive... if it's not the case then it is simply plain
 old
  death.
 
 OK, I can see that this a possible perspective on that. Indeed most of
 the
 time immortality is used to refer to personal immortality (especially in
 the
 west). I agree with materialists there is no good reason to suppose that
 this exists.
 Quantum immortality rests on the premise that the supposed continuations
 that exist in the MWs of quantum mechanics are lived as real for the
 person
 that dies, while we have no clue how these possibilities are actually
 lived.
 It is much more plausible - and consistent with our experience and
 observation - that the other possibilities are merely dreams,
 imagination,
 or - if more consistent - are lived by other persons (which, for example,
 didn't get into the deadly situation in the first place).

 On the other hand, I don't see why we would ignore immortality of
 consciousness, considering that the I is just a psychosocial
 construct/illusion anyway. We don't find an actual I anywhere. It seems
 very relevant to know that the actual essence of experience can indeed
 survive eternally. Why would I care whether an imagined I experiences
 it
 or not?

 How would you call this, if not immortality?
 
 
 Death.
 
You would call eternal existence of consciousness death? This seems quite
strange and narrow to me.
Why would you restrict it to the human experience of death? Isn't that
extremely anthropocentric/egocentric? Yes, of course death is an important
aspect - realization of eternal consciousness means the death of separate
identity - but it certainly isn't all there is to it.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32760389.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: QTI, Cul de sacs and differentiation

2011-10-30 Thread benjayk


Nick Prince-2 wrote:
 
 
 This is similar to my speculations in an earlier topic post
 http://groups.google.com/group/everything-list/browse_thread/thread/4514b50b8eb469c3/c49c3aa24c265a4b?lnk=gstq=homomorphic#c49c3aa24c265a4b
 where I suggest that  very old or dying brains might
 deterorate in a specific way that allows the transition of 1st person
 experiences from an old to
 a young mind i.e. the decaying brain becomes in some way  homomorphic
 to a new young brain which allows an extension of consciousness.
This is not even required. The decaying brain can become no brain, and
consciousness proceeds from no brain. Of course this means that some
continuity of consciousness needs to be preserved outside of brains.
Theoretically this doesn't even require that structures other than brains
can be conscious, since we know from our experience that even when/while a
structure is unconscious it can preserve continuity (we awake from deep
sleep and experience a coherent history).
The continuity may be preserved simply through similarity of structure. Like
our continuity of personhood is preserved through the similarity of our
brain states (even though the brain changes vastly from childhood until old
age), continuity of human consciousness may be preserved through similarity
of brains (even though brains have big differences in structure).

So this could even be a materialist sort of non-technological immortality.
It's just that most materialists firmly identify with the person, so they
mostly won't care much about it (What's it worth that consciousness
survives, when *I* don't survive.).
If they like the idea of immortality, they will rather hope for the
singularity. But impersonal immortality seems more in accord with our
observations than a pipe dream of personal immortality through a
technological singularity, and also much more elegant (surviving through
forgetting seems much simpler than surviving through acquiring arbitrarily
much memory and personal identity).

I wonder why fewer people consider this possibility of immortality, as it both
fits better with our intuition (does it really seem probable that all persons
grow arbitrarily old?) and with observation (people do actually die) than
other forms of immortality.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32746424.html



Re: QTI, Cul de sacs and differentiation

2011-10-30 Thread benjayk


Quentin Anciaux-2 wrote:
 
 2011/10/30 benjayk benjamin.jaku...@googlemail.com
 


 Nick Prince-2 wrote:
 
 
  This is similar to my speculations in an earlier topic post
 
 http://groups.google.com/group/everything-list/browse_thread/thread/4514b50b8eb469c3/c49c3aa24c265a4b?lnk=gstq=homomorphic#c49c3aa24c265a4b
  where I suggest that  very old or dying brains might
  deterorate in a specific way that allows the transition of 1st person
  experiences from an old to
  a young mind i.e. the decaying brain becomes in some way  homomorphic
  to a new young brain which allows an extension of consciousness.
 This is not even required. The decaying brain can become no brain, and
 consciousness proceeds from no brain. Of course this means that some
 continuity of consciousness needs to be preserved outside of brains.
 Theoretically this doesn't even require that structures other than brains
 can be conscious, since we know from our experience that even when/while
 a
 structure is unconscious it can preserve continuity (we awake from deep
 sleep and experience a coherent history).
 The continuity may be preserved simply through similarity of structure.
 Like
 our continuity of personhood is preserved through the similarity of our
 brains states (even though the brain changes vastly from childhood until
 old
 age), continuity of human consciousness may be preserved through
 similarity
 of brains (even though brains have big differences is structure).

 So this could even be a materialist sort of non-technological
 immortality.
 It's just that most materialists firmly identify with the person, so they
 mostly won't care much about it (What's it worth that consciousness
 survives, when *I* don't survive.).
 If they like the idea of immortality, they will rather hope for the
 singularity. But impersonal immortality seems more in accord with our
 observations than a pipe dream of personal immortality through a
 technological singularity, and also much more elegant (surviving through
 forgetting seems much simpler than surviving through acquiring abitrarily
 much memory and personal identity).

 I wonder why less people consider this possiblity of immortality, as it
 both
 fits more with our intuition (does it really seem probable that all
 persons
 grow abitrarily old?) and with observation (people do actually die) than
 other forms of immortality.

 
 Simply because it is just using immortality for meaning death .
 Immortality
 means the  'I' survive... if it's not the case then it is simply plain old
 death.
 
OK, I can see that this is a possible perspective on that. Indeed, most of the
time immortality is used to refer to personal immortality (especially in the
west). I agree with materialists there is no good reason to suppose that
this exists.
Quantum immortality rests on the premise that the supposed continuations
that exist in the MWs of quantum mechanics are lived as real for the person
that dies, while we have no clue how these possibilities are actually lived.
It is much more plausible - and consistent with our experience and
observation - that the other possibilities are merely dreams, imagination,
or - if more consistent - are lived by other persons (which, for example,
didn't get into the deadly situation in the first place).

On the other hand, I don't see why we would ignore immortality of
consciousness, considering that the I is just a psychosocial
construct/illusion anyway. We don't find an actual I anywhere. It seems
very relevant to know that the actual essence of experience can indeed
survive eternally. Why would I care whether an imagined I experiences it
or not?

What would you call this, if not immortality? Actually, eternal youth seems
closer to eternal life to me than eternally growing old, which would be more
properly termed eternal existence or not-quite-mortality. If we are cut
off from experiencing the undeveloped innocent freshness of children - not
knowing who you are - we miss something that is absolutely essential to
life. It is not by chance that children are generally more open and happy,
and learn faster, than adults.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32748927.html



Re: QTI, Cul de sacs and differentiation

2011-10-27 Thread benjayk


Jason Resch-2 wrote:
 
 On Tue, Oct 25, 2011 at 6:00 PM, Nick Prince
 nickmag.pri...@googlemail.comwrote:
 
 QTI, Cul de sacs and differentiation

 I’m trying to get a  picture of how David Deutsch’s idea of
 differentiation works – especially in relation to QTI.  With a
 standard treatment it looks as if there might be cul de sacs for  a
 dying cat.  However I think I can see why this conclusion could be
 wrong.  Maybe someone could check my reasoning for this and tell me if
 there are any flaws.

 
 Nick,
 
 I think such cul de sacs exist only from third person perspectives.  E.g.,
 the experimenter's view of what happens to the cat.  When considering the
 perspective from the first person (cat) perspective, there are no cul de
 sacs for a much simpler reason: The cat might be mistaken, dreaming, or
 even
 an altogether different being choosing to temporarily experience a cat's
 point of view.
 
 No matter how foolproof a setup an experimenter designs, it is impossible
 to
 capture and terminate the cat's continued consciousness as seen from the
 perspective of the cat.
 
 The lower the chance the cat has of surviving through some malfunction of
 the device, the more likely it becomes that the cat survives via
 improbable
 extensions.  For the same reasons, I think it is more probable that you
 will
 wake up as some trans- or post-human playing a realistic sim ancestor
 game
 than for you to live to 200 by some QTI accident (not counting medical
 advances).  Eventually, those alternatives just become more probable.
 
 Jason
 
 
One thing I wonder about: Do the extensions necessarily become improbable?
Why is it not possible that the cat just forgets that it is that particular
cat, and wakes up as new born cat, or dog, or other animal (maybe human?).
It even seems more plausible that as long as the cat is alive, relatively
improbable extensions/narrowings are required (since there are fewer futures
where the cat is alive than where it is not).

It seems to me it is one step too far to assume that after its death the cat
has to continue in an unlikely future in a form very similar to its current
form.
That is taking egocentric notions of survival for granted. Maybe it is not
required that much of memory or personality or physical form survives for
the experience of survival. For example, during dream states, meditation or
drug experiences, (almost) all memory and sense of personhood may be lost
and still consciousness experiences surviving.

This would be an argument in favor of a modern form of reincarnation. When
the form is destroyed, consciousness just backtracks (maybe through some
dream like experience) and is born anew.
We don't even need many assumptions in terms of QTI or a non-physical plane
for that. All individual memory is lost, and thus consciousness can continue
in very many probable futures, namely all newborn individuals that share a
similar collective consciousness (which may just be the environment - or
world - of the dead one, which obviously does not die). For the person,
this is not really immortality, but this isn't required. Only consciousness
has to survive in order for there to be basic subjective immortality.
It is a quite natural notion of immortality, with natural consequences with
regard to immortality experiments (the subject just dies, and consciousness
continues from memory loss).

This would also explain positive near death experiences: As the person dies,
consciousness feels itself opening up, as more consistent future experiences
become available.

benjayk
-- 
View this message in context: 
http://old.nabble.com/QTI%2C-Cul-de-sacs-and-differentiation-tp32721336p32730568.html



Re: My theory of everything: everything is driven by the potential for transcendence

2011-10-26 Thread benjayk
 of
transportation, just like in Harry Potter). And for some things, technology
can probably barely help at all. Like making us peaceful and happy in a
lasting way (the kingdom of heaven is within). Or changing the fundamental
physical laws.
But most importantly, realizing ourselves as God (or a bit more modest
sounding, consciousness itself). Technology can't do that for us. Nothing
can do that for us, since it is only about us. And even we can't do that, we
can just recognize it.

benjayk
-- 
View this message in context: 
http://old.nabble.com/My-theory-of-everything%3A-everything-is-driven-by-the-potential-for-transcendence-tp32706298p32726794.html



Re: The Overlords Gambit

2011-10-20 Thread benjayk
 perspective. I like it more radical and
 clear. I doesn't seem to me like reality is like cocktail of different
 things, but one unified absolute.
 
 It's both a cocktail of different things and one unified absolute.
 It's only our limited participation in this specific form that sees a
 difference between the two.
I agree in a relative sense (our relative everyday reality certainly is a
mish-mash of many different things), but ultimately reality can't be a
cocktail, as there are no different things it could be a cocktail of.


Craig Weinberg wrote:
 
 Not that it is wrong to find a middle ground of different perspectives,
 but
 your page seems to want to deal with the fundament of all (A
 ManifesTOE),
 and this approach doesn't work there.
 
 It's not a middle ground, it's just a map of every ground and how they
 relate. It's an approach which works everywhere.
I find something essential missing. I guess that every map misses
99.9999% of the grounds.

You could make dozens of spectrums that are as fundamental as the ACME OMMM
spectrum.
Like seeing from a perspective of unicity and diversity, perceiving and
feeling, concrete and vague, simple and complex, naive and skeptic, open and
narrow, good and bad, multidimensional and nondimensional, static and
variable, subtle and obvious, cosmic and earthbound, subjective and
objective.
I am not saying what you write is worthless. But it is not a description of
the extreme edges of possible worldviews. You just compiled some of the
poles into stereotypes.
For example, you can very well be a spiritual hard-core skeptic
(experience is obviously there and everything that the content seems to
suggest is totally open to doubt) and a very naive materialist (we will have
the TOE within the next few years), a pessimistic superstitious person (belief
in bad spirits or hell) and an optimistic materialist (the singularity will
come soon and bring heaven on earth), a subjective materialist (the variety
that is not interested in science and rationality and is just sure that matter
is all there is anyway, and even believes quantum mechanics is BS because it
is too unmaterialistic), an objective spiritual person (Bruno - there are
objective 3p facts that are the ontology, yet the 1p world is fundamentally
spiritual), an open-minded materialist (yeah, matter is all there is, but
it may fundamentally be linked to consciousness) and a close-minded spiritual
person (in 2012 all people that are as spiritual as me will ascend, and all
others are fucked) and so on...

We really can't touch the reality of everything with words. I am not
criticizing your attempt (I think what you write is fun and somehow poetic), I
am just trying to open you to a broader perspective. If we think it
through we miss SOOO much, especially if we think we get really close.
And as you use words like extreme edges of possible worldviews I am a bit
worried you get lost in your map of what you think the possibilities are,
especially as almost everyone gets lost in thoughts regularly. I am
preaching to myself that I should give more attention to my subjective
experience instead of thinking, yet I am still thinking and thinking and
thinking and thinking
Words and concepts are such powerful pointers that we are almost guaranteed
to mistake them as the actual important thing, which leads us straight into
unconsciousness.


Craig Weinberg wrote:
 

   When I say that your will is not really free, I am not saying that
 you
   are a
   puppet that is controlled by your brain. An opinion is valuable to
  you,
   whether you just have it, or you claim to use your will to have it.
   The cosmos does not need free will, as it is free without a will.
 It
  just
   does what it does, including having opinions, talking to
 interesting
   people,
   etc... Why is all of that nothing worth if there is no controller
 of
   them?

   Why isn't just doing 'what it does' free will?

  Because the feeling of will need not be involved, so why call it will
  then?

  Why should we assume there is no need for a feeling of will to be
  involved?

 Because humans can be freely living without feeling to exert will.
 
 We would have to exert the will to live that way in the first place.
But it is not the result of the will (ask any spiritual teacher, you can't
will your way to enlightenment!). The feeling of will is just a by-product
of the self-reflective capability of individuals, so ultimately, there is no
reason to call the spontaneous activity of consciousness free will.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-Overlords-Gambit-tp32662974p32690321.html

Re: The Overlords Gambit

2011-10-19 Thread benjayk
 with a conscious will? Why? How? It's totally nuts and
  explains nothing.

 OK, I agree with you that it is not a meaningless by-product, certainly
 not.
 That doesn't make it fundamental, though. It is fundamental to our
 self-image, but that doesn't say much (money or fame is also, for some
 people). Self-image is important in the development for consciousness, so
 it
 makes sense it uses the feeling of being in control. But ultimately we
 don't
 want to idolize an image, but actually be directly aware (of)/as the Self
 (it seems to me there is just one).
 
 I think we are on the same page, I didn't intend to say that free will
 was a super important feature, just that it's appearance suggests
 quite a bit more flexibility in the universe than determinism would
 predict or allow.
For a materialist, appearance in terms of consciousness suggests nothing,
except purely subjectively to an individual (usually not to the materialist
himself, of course, since he is more objective than that). Just matter
matters, because this is how it is.
They start from the assumption that matter is all that is, and therefore
they end with that conclusion, no matter what appears to be the case.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-Overlords-Gambit-tp32662974p32681445.html



Re: The Overlords Gambit

2011-10-19 Thread benjayk
 is really going on, we need to see the relationship of
 the extremes and that they both need each other to make any sense.
 Fact is a kind of fiction, fiction is a kind of fact, but also they
 are opposites to each other as well. It is an involuted continuum. The
 inside becomes the outside but the two topologies remain separate
 also.
That's kind of a vague, mish-mash perspective. I like it more radical and
clear. It doesn't seem to me like reality is a cocktail of different
things, but one unified absolute.
Not that it is wrong to find a middle ground of different perspectives, but
your page seems to want to deal with the fundament of all (A ManifesTOE),
and this approach doesn't work there.


Craig Weinberg wrote:
 
 Craig Weinberg wrote:

   This thought experiment
  is much more primitive than that. I'm just showing how low level
  processes must be susceptible to control from high level processes as
  well.

 You are not really showing that, frankly. You just show you can imagine
 that
 it could be so, or that it feels that way.
 These thought experiments may be fun, but they really show nothing,
 except
 if someone happens to agree with you already.

 
 I think it presents a counterfactual. If neurons were always
 controlling our will and never the other way around, then we should
 not be able to control neurons outside of our own body either. We have
 to decide if it makes more sense that control passes in both
 directions, or if neurons are magical sources of control which can
 never be controlled themselves.
If we believe that neurons (and matter in general) are the magical source of
(apparent) consciousness (and control), the thought experiment doesn't
really show anything. It might show to you how absurd that is, but if people
buy the absurd premise, it can't work.


Craig Weinberg wrote:
 
  When I say that your will is not really free, I am not saying that you
  are a
  puppet that is controlled by your brain. An opinion is valuable to
 you,
  whether you just have it, or you claim to use your will to have it.
  The cosmos does not need free will, as it is free without a will. It
 just
  does what it does, including having opinions, talking to interesting
  people,
  etc... Why is all of that nothing worth if there is no controller of
  them?

  Why isn't just doing 'what it does' free will?

 Because the feeling of will need not be involved, so why call it will
 then?
 
 Why should we assume there is no need for a feeling of will to be
 involved?
Because humans can live freely without feeling that they exert will.


Craig Weinberg wrote:
 

  I mean, it is natural to want to be the owner of things (these are MY
  actions), but we can also learn to transcend this, or rather, see
 that
  there is no owner in the first place (just the appearance of one). I
 find
  this liberating, not dehumanizing.

  Right, but that's a whole other conversation. I'm just talking to the
  functionalists among us who claim that there is nothing to want to own
  anything in the first place. That it can all only be functions
  satisfying microcosmic physical laws.

 I am not sure you can convince someone by argueing against that, just
 like
 you are unlikely to convince a hard headed christian fundamentalist. It
 is
 just dogma and you (mostly) can't touch that with any words. It is more
 an
 emotional attachment. A materialistic world may be meaningless, but it is
 potentially understandable and controllable, so if that's important to
 you,
 you won't let go of that belief.
 
 True, yes. I think it may even go beyond that to a kind of
 neurological orientation like handedness or gender. I don't know that
 my intention is to convince anyone of anything exactly, I'm mainly
 trying to see if there is something that I haven't thought of before
 which would throw doubt on my own ideas, and I think it helps me
 develop ways of sharing my ideas with those who might be less
 dogmatic.
OK, it is always a good intention to develop doubt about one's ideas. It
helps to go beyond ideas altogether, and face the unfathomable reality
beyond ideas.
I am not sure that materialists will help you much there; when I discuss(ed)
with them, it seems to me it is largely a frustrating waste of time. But if
it is fun to you, why not. I just observed in myself that I often was leading
discussions because I felt compelled to, not because it was fun.

benjayk

-- 
View this message in context: 
http://old.nabble.com/The-Overlords-Gambit-tp32662974p32683332.html



Re: The Overlords Gambit

2011-10-18 Thread benjayk


Craig Weinberg wrote:
 
 Here’s a little thought experiment about free will. Let’s say that
 there exists a technology which will allow us to completely control
 another person’s neurology. What if two people use this technology to
 control each other? If one person started before the other, then they
 could effectively ‘disarm’ the others control over them preemptively,
 but what if they both began at the exact same time? Would one ‘win’
 control over the other somehow? Would either of them even be able to
 try to win? How would they know if they were controlling the other or
 being controlled to think they are controlling the other?
 
Complete control over anything is simply impossible. Control is just a
feeling and not fundamental.
The closest one can get to controlling the brain is to make it
dysfunctional. It's a bit boring, but the most realistic answer is that both
would fall unconscious, as that is the only result of exerting excessive
control over a brain.
It's the same result as if you try to totally control an ecosystem, or an
economy. It'll destroy the natural order, as control is not a fundamental
ordering principle.

It seems like you think of control or will as something fundamental, and I
don't see any reason to assume that it is. Honestly, I think the belief that
we have free, independent will is just the arrogance of our ego, which feels
it has to have a fundamentally special place in the universe.
That is not to say that we are predetermined by a material universe; rather,
control is just a phenomenon arising in consciousness like all other
phenomena, e.g. feelings and perceptions.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-Overlords-Gambit-tp32662974p32674925.html



Re: The Overlords Gambit

2011-10-18 Thread benjayk


Craig Weinberg wrote:
 
 On Oct 18, 10:00 am, benjayk benjamin.jaku...@googlemail.com wrote:
 Craig Weinberg wrote:

  Here’s a little thought experiment about free will. Let’s say that
  there exists a technology which will allow us to completely control
  another person’s neurology. What if two people use this technology to
  control each other? If one person started before the other, then they
  could effectively ‘disarm’ the others control over them preemptively,
  but what if they both began at the exact same time? Would one ‘win’
  control over the other somehow? Would either of them even be able to
  try to win? How would they know if they were controlling the other or
  being controlled to think they are controlling the other?

 Complete control over anything is simply impossible. Control is just a
 feeling and not fundamental.
 
 It depends what you mean by complete control. If I choose to hit the
 letter m on my keyboard, am I not controlling the keyboard to the
 extent that it is controllable?
 
You can control everything to the extent that it is controllable for you,
obviously.
But you can't have control over the individual constituents of the keyboard
all at the same time in the exact way you want it. For the keyboard, you
don't need to, but the brain has no lever which you can use to make it do
what you want, because, contrary to the keyboard, it has not been designed
for that task - it is a holistic system; if you control a part of it
(sticking an electrode into your brain, for example), it still won't do what
you want it to, as a whole.
So to control it, you'd have to do it on a broad scale and at a fundamental
level. But we can't do that, and if someone could, the brain would just be a
puppet steered by a puppeteer and as such it wouldn't be a brain as a working
system, but rather a mass of flesh that is being manipulated.


Craig Weinberg wrote:
 
 The closest one can get to controlling the brain is to make it
 dysfunctional. It's a bit boring, but the most realistic answer is that
 both
 would fall unconscious, as that is the only result of exerting excessive
 control over a brain.
 It's the same result as if you try to totally control an ecosystem, or an
 economy. It'll destroy the natural order, as control is not a fundamental
 ordering principle.
 
 I generally agree. The thought experiment is to make people consider
 the fallacy of exclusively bottom up processing. I don't think that
 you could actually control a brain, I'm just saying that if you could,
 how do you get around the fact that it violates the assumption that
 only neurons can control the brain.
I don't think that many people would claim that. You probably mean that the
neurons control your behaviour, but I don't think many people believe that
either. Materialists would rather claim that the neurons are the physical
cause of behaviour, and consciousness arises as a phenomenon alongside.
I don't see how this is any problem with regard to control; it is just a
claim of magic (mind coming out of non-mind, with no mechanism for how this
could happen) that is not even directly subjectively validated (unlike the
magic of consciousness that we can directly witness).


Craig Weinberg wrote:
 
  The point was to show that bottom up exclusivity fails,
 and that  we must consider that our ordinary intuition of bi-
 directional, high-low processing interdependence may indeed be valid.
Yes, I guessed that this was your point, but I am not sure that your thought
experiment helps it. Neurons making thought is quite meaningless from the
start; I don't see how it is affected by what controls what.


Craig Weinberg wrote:
 

 It seems like you think of control or will as something fundamental, and
 I
 don't see any reason to assume that it is.
 
 That's a reasonable objection. If it's not fundamental, what is it
 composed of, and why is there an appearance of anything other than
 whatever that is?
It is not composed of anything (I am not a reductionist). Rather it arises
like other feelings/perceptions, for example being hungry (it is just more
essential to our identity).
The reason for its appearance is simply that it is a feedback mechanism: it
shows us that we are the source of the actions, which brings attention to our
actions (which is obviously quite useful). As such it is not more
fundamental than other such mechanisms (like pain, which shows us something
is wrong in our body).
Also, in a state of enlightenment, the feeling of being in control
vanishes (together with the ego that is supposed to be the controller), and
people still function normally, which shows that it can't be that
fundamental. It is an artifact of seeing yourself as a person, separate from
your environment, and intervening in it. Actually it is quite a crude tool,
as we often feel in control when the main cause lies in something
else (like gambling), and often we don't feel in control of essential
interventions into our environment (like reflexes).


Craig Weinberg wrote
