Re: Are Philosophical Zombies possible?

2024-08-07 Thread Russell Standish
On Mon, Jul 08, 2024 at 04:34:56PM -0400, Jason Resch wrote:
> 
> 
> On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:
> 
> 
> On Mon, Jul 8, 2024 at 2:12 PM Jason Resch  wrote:
> 
> 
> >Consciousness is a prerequisite of intelligence.
> 
> 
> I think you've got that backwards, intelligence is a prerequisite of
> consciousness. And the possibility of intelligent ACTIONS is a  
> prerequisite for Darwinian natural selection to have evolved it.
> 
> 
> I disagree, but will explain below.
> 
> 
>  
> 
> > One can be conscious without being intelligent,
> 
> 
> Sure.
> 
> 
> I define intelligence by something capable of intelligent action.
> 
> Intelligent action requires non random choice: choice informed by information
> from the environment.
> 
> Having information about the environment (i.e. perceptions) is consciousness.
> You cannot have perceptions without there being some process or thing to
> perceive them.
> 
> Therefore perceptions (i.e. consciousness) are a requirement and precondition
> of being able to perform intelligent actions.
> 
> Jason 


By this definition, thermostats are conscious.
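The thermostat point can be made concrete with a toy sketch (hypothetical, for illustration only): a thermostat's entire "perception" is one environmental variable, and its "choice" is a fixed threshold rule, which is why it satisfies the perception-based definition while lacking any self/other distinction.

```python
def thermostat(temp_c, setpoint_c=20.0):
    """A minimal perceive-act loop: read one environmental variable and
    make a non-random choice based on it. It has 'information about the
    environment', but no self-model of any kind."""
    return 'heat on' if temp_c < setpoint_c else 'heat off'

print(thermostat(18.5))  # heat on
print(thermostat(22.0))  # heat off
```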

This could be a definitional debate, but I do disagree.

For me, a conscious entity is aware of its place in the world. At a
minimum, it must have a self/other distinction (does that mean immune
systems are conscious?), but I strongly suspect it involves some
notion of self-awareness. I struggle to come up with an example of a
non-self-aware consciousness.

OTOH, consciousness needn't necessarily imply intelligence, but
perhaps it does.


-- 


Dr Russell Standish               Phone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
                                   http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20240808011804.GA12703%40zen.


Re: Are Philosophical Zombies possible?

2024-07-31 Thread John Clark
On Wed, Jul 31, 2024 at 10:49 AM PGC  wrote:

*> I checked the paper again and instead of a response through my phone,
> I'll try to be a bit clearer and leave out the psychology: *
>
> *The misconception is that humans cannot be Turing machines because they
> lack infinite tape. In "On Computable Numbers, With An Application To The
> Entscheidungsproblem," Turing describes Universal Turing Machines (UTMs)
> using finite tables of instructions, with no inherent need for infinite
> resources. Each UTM operates based on a finite set of rules and
> transitions, handling finite inputs and outputs. The notion of an infinite
> tape is often misinterpreted; Turing's idea was that the tape could be
> extended as needed for any computation, ensuring sufficient resources for
> finite tasks rather than literally requiring an infinite resource. Turing's
> formalism re UTM does not involve actual infinities; UTMs are finite
> machines with finite instructions. *
>
> *Turing addresses the ambiguity of functions that may not halt, consistent
> with practical computing where some algorithms may run indefinitely.
> Therefore, the argument that humans cannot be Turing machines due to a lack
> of infinite tape is based on a misinterpretation of Turing's work. Turing's
> detailed description of UTMs involves finite tables, instructions, and
> inputs, making it clear that UTMs are finite in every way. This aligns with
> the practical realities of computation and human cognition, reinforcing the
> idea that human cognitive processes can be viewed as computational within
> Turing's theoretical framework.*
> *I mean, if you approach a UTM with a set of instructions that require the
> full expression of some transcendental number committed to memory in
> decimals, then hopefully the UTM is rich enough in reasoning abilities to
> ask you for your medical history/habits, instead of letting you start
> expressing the full description of your instructions. 😅 Looking at the
> paper, this seems too absurd to even mention there.*
>

Another way to say that is that whenever you observe a Turing Machine that
has stopped (a.k.a. has finished its calculation) you will find it has only
used a finite amount of tape, and whenever you observe a Turing Machine
that is still running you will also observe that it has only used a finite
amount of tape.
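This can be checked on a tiny simulator (a sketch, not part of the original message; the rule table is the well-known 2-state, 2-symbol "busy beaver" champion): run it to completion and count how many tape cells the head ever touched.

```python
# (state, symbol) -> (write, move, next_state); 'H' means halt.
RULES = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

tape, pos, state, steps = {}, 0, 'A', 0  # sparse tape, blank = 0
visited = {0}                            # cells the head has ever been on
while state != 'H':
    write, move, state = RULES[(state, tape.get(pos, 0))]
    tape[pos] = write
    pos += move
    visited.add(pos)
    steps += 1

print(steps)         # halts after 6 steps...
print(len(visited))  # ...having touched only 4 tape cells
```

The "infinite" tape matters only as a promise that the machine never runs out; any particular halted (or so-far) run has used a finite stretch of it.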

John K Clark    See what's on my new list at Extropolis




Re: Are Philosophical Zombies possible?

2024-07-31 Thread PGC
I checked the paper again and instead of a response through my phone, I'll 
try to be a bit clearer and leave out the psychology: 

The misconception is that humans cannot be Turing machines because they 
lack infinite tape. In "On Computable Numbers, With An Application To The 
Entscheidungsproblem," Turing describes Universal Turing Machines (UTMs) 
using finite tables of instructions, with no inherent need for infinite 
resources. Each UTM operates based on a finite set of rules and 
transitions, handling finite inputs and outputs. The notion of an infinite 
tape is often misinterpreted; Turing's idea was that the tape could be 
extended as needed for any computation, ensuring sufficient resources for 
finite tasks rather than literally requiring an infinite resource. Turing's 
formalism re UTM does not involve actual infinities; UTMs are finite 
machines with finite instructions. 

Turing addresses the ambiguity of functions that may not halt, consistent 
with practical computing where some algorithms may run indefinitely. 
Therefore, the argument that humans cannot be Turing machines due to a lack 
of infinite tape is based on a misinterpretation of Turing's work. Turing's 
detailed description of UTMs involves finite tables, instructions, and 
inputs, making it clear that UTMs are finite in every way. This aligns with 
the practical realities of computation and human cognition, reinforcing the 
idea that human cognitive processes can be viewed as computational within 
Turing's theoretical framework.

I mean, if you approach a UTM with a set of instructions that require the 
full expression of some transcendental number committed to memory in 
decimals, then hopefully the UTM is rich enough in reasoning abilities to 
ask you for your medical history/habits, instead of letting you start 
expressing the full description of your instructions. 😅 Looking at the 
paper, this seems too absurd to even mention there.

On Tuesday, July 30, 2024 at 3:48:17 PM UTC+2 PGC wrote:

> I just saw this now, so apologies for the late reply: 
>
> On Saturday, July 20, 2024 at 4:49:40 AM UTC+2 Russell Standish wrote:
>
>
> Hi Brent - you're just being a bit sloppy with terminology. All 
> universal Turing machines are equivalent computationally, but not all 
> Turing machines are universal. 
>
> It's a moot point about whether a human can be considered a universal 
> Turing machine - a human's finite lifetime is a problem, so you'd 
> really need to consider something like a society of humans whose 
> organisation extends beyond the finite lifetime of an individual 
> human. Even then, there may well be limits to the amount of 
> computation physically possible in the universe, depending on the 
> universe's geometry (which gets us into Tipler's Omega point theory, 
> for example).
>
>
> This is the standard "no infinite tape, thus we can't be universal Turing 
> machines" argument. I thought it was refuted by the fact that a universal Turing 
> machine is a finite machine, as can be verified from the tables in Turing's 
> publications, which give the description, properties, and code of such machines 
> in Turing's formalism. There's nothing infinite about them. He used the 
> term "infinite tape" much more weakly, in the sense of "as long as is required 
> by whatever the machine is executing". IIRC it is also this finitude that would 
> psychologically predispose UTMs aware of it to write on cave walls, build 
> libraries, buy new hard drives, or subscribe to cloud services. I don't 
> know if we are UTMs, but I thought they were finite machines, which can be 
> expressed in a finite manner, working with finite sets of instructions, some 
> of which do not halt. 
>
> And "death" to such a machine would just appear to be deletion of all 
> local memory. Sort of local amnesia. Acquiring more space for your hard 
> drive... it's also the agenda of all colonialists who attempt the 
> psychological escape of the mortal coil by acquiring more land for more 
> space to configure their memories more favorably towards them. It's what 
> would motivate some to seek out Mars and new planets, or to manifest 
> destiny towards the west and similar.
>



Re: Are Philosophical Zombies possible?

2024-07-30 Thread PGC
I just saw this now, so apologies for the late reply: 

On Saturday, July 20, 2024 at 4:49:40 AM UTC+2 Russell Standish wrote:


Hi Brent - you're just being a bit sloppy with terminology. All 
universal Turing machines are equivalent computationally, but not all 
Turing machines are universal. 

It's a moot point about whether a human can be considered a universal 
Turing machine - a human's finite lifetime is a problem, so you'd 
really need to consider something like a society of humans whose 
organisation extends beyond the finite lifetime of an individual 
human. Even then, there may well be limits to the amount of 
computation physically possible in the universe, depending on the 
universe's geometry (which gets us into Tipler's Omega point theory, 
for example).


This is the standard "no infinite tape, thus we can't be universal Turing 
machines" argument. I thought it was refuted by the fact that a universal Turing 
machine is a finite machine, as can be verified from the tables in Turing's 
publications, which give the description, properties, and code of such machines 
in Turing's formalism. There's nothing infinite about them. He used the 
term "infinite tape" much more weakly, in the sense of "as long as is required 
by whatever the machine is executing". IIRC it is also this finitude that would 
psychologically predispose UTMs aware of it to write on cave walls, build 
libraries, buy new hard drives, or subscribe to cloud services. I don't 
know if we are UTMs, but I thought they were finite machines, which can be 
expressed in a finite manner, working with finite sets of instructions, some 
of which do not halt. 

And "death" to such a machine would just appear to be deletion of all local 
memory. Sort of local amnesia. Acquiring more space for your hard drive... 
it's also the agenda of all colonialists who attempt the psychological 
escape of the mortal coil by acquiring more land for more space to 
configure their memories more favorably towards them. It's what would 
motivate some to seek out Mars and new planets, or to manifest destiny 
towards the west and similar.



Re: Are Philosophical Zombies possible?

2024-07-20 Thread John Clark
On Fri, Jul 19, 2024 at 10:49 PM Russell Standish 
wrote:

* > It's a moot point about whether a human can be considered a
> universal Turing machine*


*I don't think it's a moot point! If a Universal Turing Machine can emulate
a human being (and I see no reason why it could not) then in my humble
opinion that's just about the most interesting and important fact about
existence imaginable. At least imaginable by me. *

*> a human's finite lifetime is a problem*


I don't see why. Any Turing Machine that has stopped (a.k.a. successfully
finished its calculation) has only used a FINITE amount of tape. And the
same thing could be said about any Turing Machine that is still working on
its problem.


> > *there may well be limits to the amount of computation physically
> possible in the universe, depending on the universe's geometry*
>

Irrelevant. If true then that fact would affect a human being just as much
as it would affect a Turing Machine.

John K Clark    See what's on my new list at Extropolis




Re: Are Philosophical Zombies possible?

2024-07-19 Thread Russell Standish
On Sat, Jul 13, 2024 at 06:21:58PM -0400, John Clark wrote:
> On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker  wrote:
> 
> 
> > All Turing machines have the same computational capability. 
> 
>  
> Well that certainly is not true! There is a Turing Machine for any computable
> task, but any PARTICULAR  Turing Machine has a finite number of internal 
> states
> and can only do one thing. If you want something else done then you are going
> to have to use a Turing Machine with a different set of internal states.  
> 

Hi Brent - you're just being a bit sloppy with terminology. All
universal Turing machines are equivalent computationally, but not all
Turing machines are universal.

It's a moot point about whether a human can be considered a universal
Turing machine - a human's finite lifetime is a problem, so you'd
really need to consider something like a society of humans whose
organisation extends beyond the finite lifetime of an individual
human. Even then, there may well be limits to the amount of
computation physically possible in the universe, depending on the
universe's geometry (which gets us into Tipler's Omega point theory,
for example).

Cheers
-- 


Dr Russell Standish               Phone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
                                   http://www.hpcoders.com.au




Re: Are Philosophical Zombies possible?

2024-07-19 Thread Russell Standish
On Wed, Jul 10, 2024 at 10:24:52AM -0400, Jason Resch wrote:

> 
> There was a study done in the 1950s on probabilistic Turing machines ( 
> https://
> www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en ) that
> found what they could compute is no different than what a deterministic Turing
> machine can compute.

But it would appear that computational complexity classes do differ. I
seem to remember that P=NP for Turing machines with random oracles,
but it would seem (https://en.wikipedia.org/wiki/Random_oracle) that
whilst there does exist an oracle for which P=NP (the Baker-Gill-Solovay
theorem), for random oracles generally P≠NP with probability 1, i.e.
P=NP holds only on a set of measure zero of random oracles.

Let me know if I've misinterpreted that stuff... Seems important, as
evolution is a computational process with a random oracle, and it does
appear to be remarkably effective at solving (at least heuristically)
computationally hard problems.
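The "evolution as randomized computation" picture can be sketched (purely as an illustration, not anything from the cited paper) with the simplest evolutionary algorithm, where the random bit-flips play the role of the oracle:

```python
import random

def one_plus_one_ea(fitness, n, steps=10_000, seed=1):
    """(1+1) evolutionary algorithm: keep a single bit-string, flip each
    bit independently with probability 1/n, and accept the mutant if it
    is no worse. Random choices guide the search; selection keeps it."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = [b ^ (rng.random() < 1 / n) for b in x]  # mutate
        if fitness(y) >= fitness(x):                 # select
            x = y
    return x

# Toy objective (OneMax): maximise the number of 1-bits.
best = one_plus_one_ea(sum, 32)
print(sum(best))  # near or at the optimum of 32 after a few thousand steps
```

On this toy landscape the random walk finds the optimum quickly; on genuinely hard landscapes it only gives good heuristics, which matches the point above.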

Cheers

-- 


Dr Russell Standish               Phone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
                                   http://www.hpcoders.com.au




Re: Are Philosophical Zombies possible?

2024-07-16 Thread John Clark
On Tue, Jul 16, 2024 at 1:27 AM Quentin Anciaux  wrote:

*> Again you're arguing past one another; Brent conflates UTM with a Turing
> machine, but not all Turing machines are UTMs (universal), though any UTM can
> emulate any other Turing machine.*
>

I agree with that. And I would add that if something is computable then
there exists a Turing Machine that can compute it, but not everything is
computable. For example, the Busy Beaver function is not computable
because, as Alan Turing discovered, the Halting Problem is not computable.

John K Clark    See what's on my new list at Extropolis





Re: Are Philosophical Zombies possible?

2024-07-16 Thread Brent Meeker
Yes, I've figured out that is what is going on, which is why I asked 
John for his definition of a Turing machine. Apparently it is a 
machine that reads a symbol from a tape, writes a symbol, and moves to 
another point on the tape in accordance with a fixed table of rules, 
beginning with an infinite tape that is blank except for a finite 
initial part.  Turing proved that there is such a machine (actually a 
whole class) that, depending on the initial part of the tape, can compute 
anything that's computable; these are called Universal Turing 
Machines. I assumed those were the only ones of interest and left 
off the "universal".  Is there anything interesting about non-universal 
tape-reading-typing machines?


Brent

On 7/15/2024 10:26 PM, Quentin Anciaux wrote:
Again you're arguing past one another; Brent conflates UTM with a 
Turing machine, but not all Turing machines are UTMs (universal), 
though any UTM can emulate any other Turing machine.


Quentin

Le mar. 16 juil. 2024, 01:54, John Clark  a écrit :



On Mon, Jul 15, 2024 at 7:38 PM Brent Meeker
 wrote:

>> Thanks but I already figured out how to look things up
in Wikipedia.

> "Knowing how to see what it says isn't the same as knowing
what it says:
A Turing machine is a mathematical model of computation
describing an abstract machine[1] that manipulates symbols on
a strip of tape according to a table of rules.[2] Despite the
model's simplicity, it is capable of implementing any computer
algorithm."

Yes I know, that's what I said in my previous post using different
words. What is your point?

John K Clark    See what's on my new list at Extropolis








Re: Are Philosophical Zombies possible?

2024-07-15 Thread Quentin Anciaux
Again you're arguing past one another; Brent conflates UTM with a Turing
machine, but not all Turing machines are UTMs (universal), though any UTM can
emulate any other Turing machine.

Quentin

Le mar. 16 juil. 2024, 01:54, John Clark  a écrit :

>
>
> On Mon, Jul 15, 2024 at 7:38 PM Brent Meeker 
> wrote:
>
> *>> Thanks but I already figured out how to look things up in Wikipedia.*
>>
>>
>>
>>
>> *> "Knowing how to see what it says isn't the same as knowing what it
>> says: A Turing machine is a mathematical model of computation describing an
>> abstract machine[1] that manipulates symbols on a strip of tape according
>> to a table of rules.[2] Despite the model's simplicity, it is capable of
>> implementing any computer algorithm."*
>
>
> Yes I know, that's what I said in my previous post using different words.
> What is your point?
> John K Clark    See what's on my new list at Extropolis
>
>
>
>



Re: Are Philosophical Zombies possible?

2024-07-15 Thread John Clark
On Mon, Jul 15, 2024 at 7:38 PM Brent Meeker  wrote:

*>> Thanks but I already figured out how to look things up in Wikipedia.*
>
>
>
>
> *> "Knowing how to see what it says isn't the same as knowing what it
> says: A Turing machine is a mathematical model of computation describing an
> abstract machine[1] that manipulates symbols on a strip of tape according
> to a table of rules.[2] Despite the model's simplicity, it is capable of
> implementing any computer algorithm."*


Yes I know, that's what I said in my previous post using different words.
What is your point?
John K Clark    See what's on my new list at Extropolis




Re: Are Philosophical Zombies possible?

2024-07-15 Thread Brent Meeker



On 7/15/2024 5:28 AM, John Clark wrote:
> On Sun, Jul 14, 2024 at 9:00 PM Brent Meeker wrote:
>
>> What's your definition of a Turing machine?
>
> A Turing Machine is the simplest possible computer model that the
> logical operation of any real computer can be reduced to if given
> enough internal states; it has N states plus the HALT state. A Turing
> Machine consists of two parts, a long tape with zeros and ones on it
> and a read/write head which can only print a zero or a one on the
> tape, move one notch left or right, and CHANGE INTO A DIFFERENT STATE.
> After the head has read a symbol, what it will do next depends on
> whether it has just seen a one or a zero, and it depends ON WHAT STATE
> IT IS CURRENTLY IN. And in general, the more states a Turing Machine
> can be in, the richer and more complex its behavior will be.
>
> An easy way to understand Turing Machines is to separate the program
> and the data. The data is on the tape and the program is on a series
> of cards. You always start at card #1 and, depending on whether you are
> reading a 1 or a 0 on the tape, that card tells you to print a one or
> a zero, move left or right, and it instructs you on which state to
> turn into next, that is to say it tells you the number of the card
> (a.k.a. state) to read (a.k.a. be in) next. As I said, all Turing
> Machines have N states plus a HALT state. The information on the cards
> could have originally come from the tape as in a Universal Turing
> Machine, but for simplicity I will assume the cards have already been
> pre-installed.
>
> For a start let's look at a one state (card) Turing Machine; there
> are 8 different actions a card could tell you to perform if you first
> see a zero on the tape:
>
> 1) Write a 1 or a 0.
> 2) Move left or right.
> 3) Stay on card #1 (the only card) or halt.
>
> So there are 2^3 = 2*2*2 = 8 things you can do if you read a zero. But
> there are also 8 things you can do if you first read a 1 on the tape,
> so there are 8*8 = 64 possible one card (a.k.a. one state) Turing
> Machines. 64 is a small number, so you can't do much with just a one
> state Turing Machine.
>
> Now let's look at a 2 card (state) Turing Machine. If you're
> currently reading 0 there are 12 things the first card could tell you
> to do: write a zero or a one, shift left or right, go into state
> (card) 1 or card 2 or halt; 2*2*3 = 12. And if you're currently
> reading 1 there are also 12 things the first card could tell you to
> do. So there are 12*12 = 144 different things that first card could
> tell you to do. However, each of those 144 cards must be paired up
> with a second card, and there are also 144 things that the second card
> could tell you to do. So there are 144*144 = 20,736 different 2 card
> (state) Turing Machines.
>
> The general formula is: there are [2s(N+1)]^(sN) Turing machines
> with N states and s symbols. A two symbol machine is the simplest
> and can do everything a machine with more symbols can do, so if s=2
> the formula for the number of 2 symbol, N state (card) Turing
> Machines is [4(N+1)]^(2N). Thus the number of Turing Machines
> increases exponentially with N, but the Busy Beaver number increases
> far, far faster than exponentially, faster than super-exponential; in
> fact the function is not even computable despite being well defined
> and finite.
>
>> I posted the definition from Wikipedia:
>
> Thanks but I already figured out how to look things up in Wikipedia.

Knowing how to see what it says isn't the same as knowing what it says:

A Turing machine is a mathematical model of computation describing an
abstract machine[1] that manipulates symbols on a strip of tape
according to a table of rules.[2] Despite the model's simplicity, it is
capable of implementing any computer algorithm.

Brent

> John K Clark    See what's on my new list at Extropolis








Re: Are Philosophical Zombies possible?

2024-07-15 Thread John Clark
*One thing that I think is pretty neat is that somebody has written a
program for a 47 state Turing machine that will halt if and only if it
finds an even number that is not the sum of two primes; in other words, it
will halt only if Goldbach's Conjecture is false. If we knew what the 47th
Busy Beaver number is (a very big if) and we ran the program until it
reached BB(47) and it had not halted, then we would know it would never
halt, and thus Goldbach's Conjecture must be true. The only trouble is that
it's almost certainly impossible to calculate BB(47); we know for a fact
that it's impossible to calculate BB(745), and I wouldn't be surprised if
it's impossible to calculate BB(6).*

*Goldbach Turing machine with 47 states*
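The 47-state machine itself isn't reproduced here, but its halting criterion is easy to sketch in ordinary code (an illustrative sketch with hypothetical names, not the linked machine; the real program runs with no upper limit and halts only on a counterexample):

```python
def is_prime(n):
    """Trial-division primality test; slow but sufficient for a sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_holds(n):
    """True if the even number n is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search(limit):
    """Return a counterexample below `limit`, or None. The Turing-machine
    version has no `limit`: it simply never halts if Goldbach is true."""
    for n in range(4, limit, 2):
        if not goldbach_holds(n):
            return n
    return None

print(search(10_000))  # None: no counterexample below 10,000
```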



John K Clark    See what's on my new list at Extropolis




Re: Are Philosophical Zombies possible?

2024-07-15 Thread John Clark
On Sun, Jul 14, 2024 at 9:00 PM Brent Meeker  wrote:

* > What's your definition of a Turing machine?  *
>

*A Turing Machine is the simplest possible computer model that the logical
operation of any real computer can be reduced to if given enough internal
states; it has N states plus the HALT state. A Turing Machine consists of
two parts, a long tape with zeros and ones on it and a read/write head
which can only print a zero or a one on the tape, move one notch left or
right, and CHANGE INTO A DIFFERENT STATE. After the head has read a symbol,
what it will do next depends on whether it has just seen a one or a
zero, and it depends ON WHAT STATE IT IS CURRENTLY IN. And in general, the
more states a Turing Machine can be in, the richer and more complex its
behavior will be. *

*An easy way to understand Turing Machines is to separate the program and
the data. The data is on the tape and the program is on a series of cards.
You always start at card #1 and, depending on if you are reading a 1 or a 0
 on the tape that card tells you to print a one or a zero, move left or
right, and it instructs you on which state to turn into next, that is to
say it tells you the number of the card (a.k.a. state) to read (a.k.a. be
in) next.  As I said, all Turing Machines have N states plus a HALT state.
The information on the cards could have originally come from the tape as in
a Universal Turing Machine, but for simplicity I will assume the cards have
already been pre-installed. *





*For a start let's look at a one state (card) Turing Machine; there are 8
different actions a card could tell you to perform if you first see a zero
on the tape:*

*1) Write a 1 or a 0.*
*2) Move left or right.*
*3) Stay on card #1 (the only card) or halt.*


*So there are 2^3=2*2*2 =8 things you can do if you read a zero. But there
are also 8 things you can do if you first read a 1 on the tape, so there
are 8*8= 64 possible one card (aka one state) Turing Machines. 64 is a
small number so you can't do much with just a one state Turing Machine.*
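The one-state count, and the claim that such machines can't do much, can be brute-forced in a few lines (a sketch, not from the original message): enumerate all 64 machines and run each on a blank tape.

```python
from itertools import product

def run(rule0, rule1, cap=100):
    """Run a 1-state, 2-symbol machine from a blank tape. A rule is
    (write, move, next) with next 'A' (stay on the one card) or 'H'
    (halt). Returns the step count if it halts within `cap`, else None."""
    tape, pos, steps = {}, 0, 0
    while steps < cap:
        write, move, nxt = rule0 if tape.get(pos, 0) == 0 else rule1
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
        if nxt == 'H':
            return steps
    return None

rules = list(product([0, 1], ['L', 'R'], ['A', 'H']))  # the 8 possible actions
machines = list(product(rules, rules))                  # 8 * 8 = 64 machines
halting = [s for s in (run(r0, r1) for r0, r1 in machines) if s is not None]

print(len(machines))  # 64
print(max(halting))   # 1: every halting one-state machine halts in one step
```

The longest any halting one-state machine runs is a single step, since a machine that doesn't halt immediately keeps moving onto fresh blank cells forever.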

*Now let’s look at a 2 card (state) Turing Machine. If you’re currently
reading 0 there are 12 things the first card could tell you to do: write a
zero or a one, shift left or right, go into state (card) 1, state 2, or
halt; 2*2*3 = 12. And if you’re currently reading 1 there are also 12
things the first card could tell you to do, so there are 12*12 = 144
different things that first card could tell you to do. However, each of
those 144 possible first cards must be paired up with a second card, and
there are also 144 things that the second card could tell you to do. So
there are 144*144 = 20,736 different 2 card (state) Turing Machines.*
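The combinatorics above can be checked mechanically. A short sketch of mine (not from the thread) that enumerates the possible action tables:

```python
from itertools import product

# One-state machine: for each symbol read (0 or 1), the single card picks
# (write, move, next), where next is either card 1 again or HALT.
actions = list(product([0, 1], ["L", "R"], [1, "HALT"]))  # 2*2*2 = 8 actions
one_card_machines = list(product(actions, repeat=2))      # one action per symbol read
print(len(actions), len(one_card_machines))               # 8 64

# Two states: next can be card 1, card 2, or HALT, so 2*2*3 = 12 actions
# per (card, symbol) entry, and there are 4 such entries: 12**4 machines.
print((2 * 2 * 3) ** 4)                                   # 20736
```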

*The general formula is: there are [2s(N+1)]^(sN) Turing Machines with N
states and s symbols, since each of the sN table entries can write one of s
symbols, move in one of 2 directions, and go to one of N+1 states (the N
states plus HALT). A two symbol machine is the simplest and can do
everything a machine with more symbols can do, so if s=2 the formula for
the number of 2 symbol N state (card) Turing Machines is [4(N+1)]^(2N).
Thus the number of Turing Machines increases exponentially with N, but the
Busy Beaver number increases far, far faster than exponentially, faster
than any computable function; in fact the function is not computable
despite being well defined and finite.*
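As a quick numerical check (a sketch of mine, not from the thread), evaluating the two-symbol count [4(N+1)]^(2N) reproduces the machine counts for small N:

```python
# Number of s-symbol, N-state Turing Machines: each of the s*N table entries
# picks one of s symbols to write, 2 directions, and N+1 next states
# (N states plus HALT), giving [2*s*(N+1)]**(s*N) machines.
def num_machines(n, s=2):
    return (2 * s * (n + 1)) ** (s * n)

for n in range(1, 6):
    print(n, num_machines(n))
# 1 64
# 2 20736
# 3 16777216
# 4 25600000000
# 5 63403380965376
```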

 >
> *I posted the definition from Wikipedia:*
>

*Thanks but I already figured out how to look things up in Wikipedia.*

John K Clark    See what's on my new list at Extropolis


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1EbqrUAsxA9603O8NDg5_A42B2_WW14mS_rEiwztNoGg%40mail.gmail.com.


Re: Are Philosophical Zombies possible?

2024-07-14 Thread Brent Meeker



On 7/14/2024 2:02 PM, John Clark wrote:
On Sun, Jul 14, 2024 at 4:23 PM Brent Meeker  
wrote:


/> Every Turing machine can compute whatever is computable...which
means stopping with the answer. /

*Nonsense! We know everything that a one state Turing machine can do to a 
blank input tape because there are only 64 of them; some of them stop and 
some of them never do. And we know everything that a two state Turing 
machine can do to a blank input tape because there are 20,736 of them. 
20,736 is larger than 64, therefore there must be some things that a two 
state Turing machine can do that a one state Turing machine can NOT do.*


That doesn't follow!

Brent



Re: Are Philosophical Zombies possible?

2024-07-14 Thread Brent Meeker
What's your definition of a Turing machine?  I think we're talking at 
cross purposes.  I posted the definition from Wikipedia:


>>>
A Turing machine is a mathematical model of computation describing an 
abstract machine[1] that manipulates symbols on a strip of tape 
according to a table of rules.[2] *Despite the model's simplicity, it is 
capable of implementing any computer algorithm.[3]*
The machine operates on an infinite[4] memory tape divided into discrete 
cells,[5] each of which can hold a single symbol drawn from a finite set 
of symbols called the alphabet of the machine. It has a "head" that, at 
any point in the machine's operation, is positioned over one of these 
cells, and a "state" selected from a finite set of states. At each step 
of its operation, the head reads the symbol in its cell. Then, based on 
the symbol and the machine's own present state, the machine writes a 
symbol into the same cell, and moves the head one step to the left or 
the right,[6] or halts the computation. The choice of which replacement 
symbol to write, which direction to move the head, and whether to halt 
is based on a finite table that specifies what to do for each 
combination of the current state and the symbol that is read. Like a 
real computer program, it is possible for a Turing machine to go into an 
infinite loop which will never halt.


The Turing machine was invented in 1936 by Alan Turing,[7][8] who called 
it an "a-machine" (automatic machine).[9] It was Turing's doctoral 
advisor, Alonzo Church, who later coined the term "Turing machine" in a 
review.[10] With this model, Turing was able to answer two questions in 
the negative:


Does a machine exist that can determine whether any arbitrary machine on 
its tape is "circular" (e.g., freezes, or fails to continue its 
computational task)?


Does a machine exist that can determine whether any arbitrary machine on 
its tape ever prints a given symbol?[11][12]


Thus by providing a mathematical description of a *very simple device 
capable of arbitrary computations,* he was able to prove properties of 
computation in general—and in particular, the uncomputability of the 
Entscheidungsproblem ('decision problem').[13]

<

Brent

On 7/14/2024 2:02 PM, John Clark wrote:
On Sun, Jul 14, 2024 at 4:23 PM Brent Meeker  
wrote:


/> Every Turing machine can compute whatever is computable...which
means stopping with the answer. /

*Nonsense! We know everything that a one state Turing machine can do to a 
blank input tape because there are only 64 of them; some of them stop and 
some of them never do. And we know everything that a two state Turing 
machine can do to a blank input tape because there are 20,736 of them. 
20,736 is larger than 64, therefore there must be some things that a two 
state Turing machine can do that a one state Turing machine can NOT do.*
John K Clark    See what's on my new list at Extropolis 







Re: Are Philosophical Zombies possible?

2024-07-14 Thread PGC


On Monday, July 15, 2024 at 12:08:13 AM UTC+2 Brent Meeker wrote:



On 7/14/2024 8:36 AM, PGC wrote:



Again, I would have thought that you reading this list for years, just like 
most regular members/poster, are aware of these difficulties. What can I 
say Jason? 

You can say, "I misunderstood Turing emulable."


You slapped "negation Turing emulable certitude" on every phenomenon stated 
when I meant randomness/indeterminacy in QM and its conjunctions with a 
variety of challenging phenomena for some TOE. I overemphasized the latter 
perhaps. Of course "in principle". Hence my post "If we believe we are 
Turing emulable at some level of description..." See above.

But that doesn't make me certain that something like gravity is completely 
Turing emulable because I'm not sure. For practical purposes ok. I'd still 
like to know why the computer outputs infinite velocities in fluids with 
Navier-Stokes. It won't bother the engineer in designing the next fighter 
jet, but it does bother me. Imho a TOE should clarify or prove 
non-solvability. 



Re: Are Philosophical Zombies possible?

2024-07-14 Thread Brent Meeker




On 7/14/2024 10:18 AM, Jason Resch wrote:
I am not aware of any exceptions (except the hypothesized objective 
wave function collapse) but objective wave function collapse is a 
rather ridiculous theory for which we have no evidence.


And for which we have no evidence against...which is the way all 
successful theories are; they have no evidence for except they work 
(like wave function collapse) and no evidence against.


Brent



Re: Are Philosophical Zombies possible?

2024-07-14 Thread Brent Meeker



On 7/14/2024 8:36 AM, PGC wrote:



On Sunday, July 14, 2024 at 5:42:23 AM UTC+2 Jason Resch wrote:



On Sat, Jul 13, 2024, 9:54 PM PGC  wrote:



On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:

Yes it's possible to have a universal Turing machine in
the sense that you can run any program by just changing
the tape, however ONLY if that tape has instructions for
changing the set of states  that the machine can be in.



It still boggles my mind that matter is Turing-complete.


Turing completeness, as incredible as it is, is (remarkably) easy
to come by. You can achieve it with addition and multiplication,
with billiard balls, with finite automata (rule 110, or game of
life), with artificial neurons, etc. That something as
sophisticated as matter could achieve it is to me less surprising
than the fact that these far simpler things can.
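For instance, Rule 110's Turing completeness (Cook's result) rests on nothing more than a single local update rule. A minimal sketch of one evolution step, in Python (mine, not from the thread):

```python
# One step of the Rule 110 elementary cellular automaton.
# Each new cell depends only on (left, center, right); the 8 possible
# neighborhoods index into the bits of the number 110 (binary 01101110).
def rule110_step(row):
    padded = [0] + row + [0]                    # treat cells off the edge as 0
    return [(110 >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]                     # a single live cell
print(rule110_step(row))                        # [0, 0, 1, 1, 0, 0, 0]
```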


In hindsight, every result is easy to come by. You assume 
sophistication to beat simplicity. That's just weird, given how little 
we actually know. Without that simplicity for example, we wouldn't 
have discovered computers.




And this despite parts of physics being not Turing emulable.

Finite physical systems can be simulated to any desired degree of
accuracy, and moreover all known laws of physics are computable.
Which parts of physics do you refer to when you say there are
parts that aren't Turing emulable?


? You write so much about these topics, I cannot understand how you 
make that statement. Many of the known laws are but there is so much 
more to physics than known laws and their solutions. And to any 
desired degree of accuracy? I'll write fast and clumsily as I am by no 
means an expert and gotta go:


Some finite-state physical phenomena present significant challenges to 
computational simulation due to their inherent complexity and the 
limitations of current computational models. One example is quantum 
entanglement and superposition. In quantum mechanics, particles can 
exist in multiple states simultaneously, which you know, and influence 
each other instantaneously at a distance, a phenomenon known as 
entanglement. Simulating these quantum behaviors on classical Turing 
machines is inherently difficult because it requires representing 
exponentially growing state spaces.
Difficult is not the same as impossible.  The part of quantum mechanics 
that is not Turing emulable is true randomness.  But Jason probably 
thinks it's deterministic but non-local, which could be emulated by 
pseudo-random number generators.


Turbulence in fluid dynamics is another challenging phenomenon. 
Turbulent flow in fluids features chaotic and unpredictable patterns, 
including vortices and eddies. Although Navier-Stokes equations 
describe fluid flow, solving these equations accurately (really 
accurately, beyond engineering application) for turbulent systems is 
computationally intensive and doesn't look feasible for all 
conditions, particularly at high Reynolds numbers where the flow 
becomes highly chaotic. This makes precise simulation of turbulent 
behavior quite the biscuit. Tao had the paper about when we can expect 
blow-up and the results are sobering at this time.

Challenging is not impossible either.


Weather systems also exemplify the difficulties in simulating complex 
physical phenomena. Despite significant advancements in weather 
modeling, predicting weather with high precision over long periods 
remains a challenge due to the chaotic elements and the large number 
of interacting factors involved. The inherent unpredictability of 
weather systems underscores the limitations of current computational 
approaches.
Classical chaos is deterministic by definition. It's just a limitation in 
practice, not in theory.


Magnetohydrodynamics (MHD) adds another layer of complexity, 
particularly when modeling fusion processes and fluid behavior in 
stars, which also boggles my mind. MHD describes the dynamics of 
electrically conducting fluids like plasmas, liquid metals, and 
saltwater, combining principles from both magnetism and fluid 
dynamics. The equations governing MHD are highly nonlinear and 
coupled, making them difficult to solve to understate things. 
Simulating fusion reactions, such as those occurring in stars, 
involves not only MHD but also nuclear physics, thermodynamics, 
radiation transport, and things I can't probably name. These 
interactions take place under extreme conditions of temperature and 
pressure, further complicating the modeling efforts. This is some 
fancy shit, but do show me any simulation you know of with high or 
infinite accuracy.
You wrote, "...parts of physics being not Turing emulable."  Being 
Turing emulable is a provable mathematical property of a problem. It 
isn't changed because the problem is hard and our hardware is not 
adequate in practice.


In the context of astrophysics, model

Re: Are Philosophical Zombies possible?

2024-07-14 Thread John Clark
On Sun, Jul 14, 2024 at 4:23 PM Brent Meeker  wrote:

*> Every Turing machine can compute whatever is computable...which means
> stopping with the answer.  *


*Nonsense! We know everything that a one state Turing machine can do to a
blank input tape because there are only 64 of them; some of them stop and
some of them never do. And we know everything that a two state Turing
machine can do to a blank input tape because there are 20,736 of them.
20,736 is larger than 64, therefore there must be some things that a two
state Turing machine can do that a one state Turing machine can NOT do. *
 John K Clark    See what's on my new list at Extropolis





Re: Are Philosophical Zombies possible?

2024-07-14 Thread Brent Meeker



On 7/14/2024 5:39 AM, John Clark wrote:
On Sat, Jul 13, 2024 at 10:34 PM Brent Meeker  
wrote:


/> The machine is universal. You don't need a different machine
with different internal states./

*First of all, the very definition of "a different Turing Machine" is 
a machine with a different set of internal states. And there is not 
just one Turing machine, there are an infinite number of them. There 
are 64 one state two symbol (zero and one) Turing Machines, 20,736 two 
state, 16,777,216 three state, 25,600,000,000 four state, and 
63,403,380,965,376 five state two symbol Turing Machines.*

A Turing Machine with a different set of internal states will exhibit 
different behavior even if given identical input tapes. I think what 
confuses you is that it is possible to have a machine in which the 
tape not only provides the program the machine should work on but also 
the set of internal states that the machine has. In a way you could 
think of the tape as providing not only the program but also the 
wiring diagram of the computer. A universal Turing Machine is in an 
undefined state until the input tape, or something else, puts it in 
one specific state.

*Consider the Busy Beaver function: if you feed a tape with all zeros 
on it into all 4 state Turing Machines and ask "which of those 
25,600,000,000 machines will print the most ones before stopping" 
(it's important that the machine eventually stops), you will find this 
is not an easy question. All the machines are operating on identical 
input tapes (all zeros) but they behave differently; some stop almost 
immediately, others just keep printing 1 forever, but for others the 
behavior is vastly more complicated. It turns out that the winner is a 
set of states that prints out 13 ones after making 107 moves.*


*You're the one confused, John. Every Turing machine can compute whatever 
is computable...which means stopping with the answer. That's the 
definition of "Turing machine". That Turing machines can be constructed 
with different numbers of internal states and using different operations 
is old news.*

Brent

*A five state Turing Machine behaves differently; we just found out 
that a particular set of internal states prints 4098 ones after making 
47,176,870 moves. I wouldn't be surprised if the sixth Busy Beaver 
number is not computable; we know for a fact that the value of any 
Busy Beaver number for a 745 state Turing machine or larger is 
independent of ZFC set theory. Right now all we know about BB(6) is 
that it's larger than 
10^10^10^10^10^10^10^10^10^10^10^10^10^10^10.*
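BB(5)'s 47-million-step champion is too long to trace by hand, but the bookkeeping is easy to check on the 2-state case, whose known champion prints 4 ones in 6 steps before halting. A self-contained sketch in Python (mine, not from the thread):

```python
# Run a Turing Machine given as (state, symbol) -> (write, move, next_state);
# "H" means halt. Tape cells default to 0 (the all-zeros input tape).
def run(cards, max_steps=1_000_000):
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = cards[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps

# The 2-state Busy Beaver champion: Sigma(2) = 4 ones, S(2) = 6 steps.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (4, 6)
```

Swapping in a different card table and raising `max_steps` is all that changes for the larger Busy Beaver hunts; the hard part is proving which non-halting machines never stop.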

*The point of all this is that Turing Machines with different sets of 
internal states behave very differently.*

John K Clark    See what's on my new list at Extropolis







Re: Are Philosophical Zombies possible?

2024-07-14 Thread Jason Resch
On Sun, Jul 14, 2024, 11:36 AM PGC  wrote:

>
>
> On Sunday, July 14, 2024 at 5:42:23 AM UTC+2 Jason Resch wrote:
>
>
>
> On Sat, Jul 13, 2024, 9:54 PM PGC  wrote:
>
>
>
> On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:
>
> Yes it's possible to have a universal Turing machine in the sense that you
> can run any program by just changing the tape, however ONLY if that tape
> has instructions for changing the set of states  that the machine can be
> in.
>
>
>
> It still boggles my mind that matter is Turing-complete.
>
>
> Turing completeness, as incredible as it is, is (remarkably) easy to come
> by. You can achieve it with addition and multiplication, with billiard
> balls, with finite automata (rule 110, or game of life), with artificial
> neurons, etc. That something as sophisticated as matter could achieve it is
> to me less surprising than the fact that these far simpler things can.
>
>
> In hindsight, every result is easy to come by. You assume sophistication
> to beat simplicity. That's just weird, given how little we actually know.
> Without that simplicity for example, we wouldn't have discovered computers.
>

When I say that matter is more sophisticated than say, the cells in game of
life, I mean matter is more flexible. So if something as limited as GoL is
flexible enough to create a Turing machine in it, then to me, it is less
surprising our (even more flexible) physics allows Turing machines to be
constructed.



>
>
>
> And this despite parts of physics being not Turing emulable.
>
> Finite physical systems can be simulated to any desired degree of
> accuracy, and moreover all known laws of physics are computable. Which
> parts of physics do you refer to when you say there are parts that aren't
> Turing emulable?
>
>
> ? You write so much about these topics, I cannot understand how you make
> that statement. Many of the known laws are
>

I am not aware of any exceptions (except the hypothesized objective wave
function collapse) but objective wave function collapse is a rather
ridiculous theory for which we have no evidence.

 but there is so much more to physics than known laws and their solutions.
> And to any desired degree of accuracy?
>

When I say this, I quote the Church-Turing-Wolfram-Deutsch principle:
https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle

"One expects in fact that universal computers are as powerful in their
computational capabilities as any physically realizable system can be, so
that they can simulate any physical system. This is the case if in all
physical systems there is a finite density of information, which can be
transmitted only at a finite rate in a finite-dimensional space."
— Stephen Wolfram in “Undecidability and Intractability in Theoretical
Physics” (1985)

To my knowledge, this principle remains an open conjecture in physics.



I'll write fast and clumsily as I am by no means an expert and gotta go:
>
> Some finite-state physical phenomena present significant challenges to
> computational simulation due to their inherent complexity and the
> limitations of current computational models.
>

This is due to the time and space limits of our computer hardware, not due
to any assumed inherent non-computable processes in physics.


One example is quantum entanglement and superposition. In quantum
> mechanics, particles can exist in multiple states simultaneously, which you
> know, and influence each other instantaneously at a distance, a phenomenon
> known as entanglement.
>

There are no non-local influences unless one believes there is objective
wave function collapse. Entanglement is no more mysterious than consistency
of measurements; both are the same phenomenon.


Simulating these quantum behaviors on classical Turing machines is
> inherently difficult because it requires representing exponentially growing
> state spaces.
>

Again this is a practical limitation of our hardware.



> Turbulence in fluid dynamics is another challenging phenomenon. Turbulent
> flow in fluids features chaotic and unpredictable patterns, including
> vortices and eddies.
>

Chaotic behavior means a system's future state cannot be predicted by
analytic means (there's not an equation we can plug a time variable into to
get a result arbitrarily far into the future). Rather, chaotic systems must
be simulated. Systems can be simulated to any desired degree of accuracy,
and measurement limitations will impose limits on how much we can know
about a system we intend on simulating. Again, the existence of chaotic
systems is not an example of uncomputable physical laws.
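The distinction between "must be simulated" and "cannot be computed" can be seen in miniature with the logistic map, a standard toy chaotic system (this sketch is mine, not from the thread): the same deterministic rule tracks two initial conditions differing by 1e-10, yet they diverge after a few dozen iterations.

```python
# The logistic map x -> r*x*(1-x) at r = 4.0 is deterministic and trivially
# computable, yet chaotic: nearby starting points separate exponentially,
# so predicting far ahead requires simulating every step at sufficient
# precision; there is no closed form to plug a time t into.
def logistic_orbit(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-10, 60)
print(abs(a - b))  # the tiny perturbation has been amplified enormously
```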


Although Navier-Stokes equations describe fluid flow, solving these
> equations accurately (really accurately, beyond engineering application)
> for turbulent systems is computationally intensive and doesn't look
> feasible for all conditions, particularly at high Reynolds numbers where
> the flow becomes highly chaotic. This makes precise simulation of turbulent
> behavior quite

Re: Are Philosophical Zombies possible?

2024-07-14 Thread PGC


On Sunday, July 14, 2024 at 5:42:23 AM UTC+2 Jason Resch wrote:



On Sat, Jul 13, 2024, 9:54 PM PGC  wrote:



On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:

Yes it's possible to have a universal Turing machine in the sense that you 
can run any program by just changing the tape, however ONLY if that tape 
has instructions for changing the set of states  that the machine can be 
in. 



It still boggles my mind that matter is Turing-complete.


Turing completeness, as incredible as it is, is (remarkably) easy to come 
by. You can achieve it with addition and multiplication, with billiard 
balls, with finite automata (rule 110, or game of life), with artificial 
neurons, etc. That something as sophisticated as matter could achieve it is 
to me less surprising than the fact that these far simpler things can.


In hindsight, every result is easy to come by. You assume sophistication to 
beat simplicity. That's just weird, given how little we actually know. 
Without that simplicity for example, we wouldn't have discovered computers.
 



And this despite parts of physics being not Turing emulable. 

Finite physical systems can be simulated to any desired degree of 
accuracy, and moreover all known laws of physics are computable. Which 
parts of physics do you refer to when you say there are parts that aren't 
Turing emulable?


? You write so much about these topics, I cannot understand how you make 
that statement. Many of the known laws are but there is so much more to 
physics than known laws and their solutions. And to any desired degree of 
accuracy? I'll write fast and clumsily as I am by no means an expert and 
gotta go: 

Some finite-state physical phenomena present significant challenges to 
computational simulation due to their inherent complexity and the 
limitations of current computational models. One example is quantum 
entanglement and superposition. In quantum mechanics, particles can exist 
in multiple states simultaneously, which you know, and influence each other 
instantaneously at a distance, a phenomenon known as entanglement. 
Simulating these quantum behaviors on classical Turing machines is 
inherently difficult because it requires representing exponentially growing 
state spaces. 

Turbulence in fluid dynamics is another challenging phenomenon. Turbulent 
flow in fluids features chaotic and unpredictable patterns, including 
vortices and eddies. Although Navier-Stokes equations describe fluid flow, 
solving these equations accurately (really accurately, beyond engineering 
application) for turbulent systems is computationally intensive and doesn't 
look feasible for all conditions, particularly at high Reynolds numbers 
where the flow becomes highly chaotic. This makes precise simulation of 
turbulent behavior quite the biscuit. Tao had the paper about when we can 
expect blow-up and the results are sobering at this time. 

Weather systems also exemplify the difficulties in simulating complex 
physical phenomena. Despite significant advancements in weather modeling, 
predicting weather with high precision over long periods remains a 
challenge due to the chaotic elements and the large number of interacting 
factors involved. The inherent unpredictability of weather systems 
underscores the limitations of current computational approaches.

Magnetohydrodynamics (MHD) adds another layer of complexity, particularly 
when modeling fusion processes and fluid behavior in stars, which also 
boggles my mind. MHD describes the dynamics of electrically conducting 
fluids like plasmas, liquid metals, and saltwater, combining principles 
from both magnetism and fluid dynamics. The equations governing MHD are 
highly nonlinear and coupled, making them difficult to solve to understate 
things. Simulating fusion reactions, such as those occurring in stars, 
involves not only MHD but also nuclear physics, thermodynamics, radiation 
transport, and things I can't probably name. These interactions take place 
under extreme conditions of temperature and pressure, further complicating 
the modeling efforts. This is some fancy shit, but do show me any 
simulation you know of with high or infinite accuracy.

In the context of astrophysics, modeling the behavior of fluids in stars, 
such as the convective and radiative zones, requires simulating the 
intricate interplay between gravity, fluid dynamics, magnetic fields, and 
nuclear fusion. The immense scales involved, both in terms of size and 
time, along with the chaotic nature of the processes, make it a challenging 
task to say the least. Accurate simulations of these phenomena are crucial 
for understanding stellar evolution, but they remain computationally 
intensive and challenging due to the complex, multi-physics nature of the 
problem.

Biological systems, such as protein folding, further illustrate the 
challenges of finite-state simulations. Protein folding involves a protein 
chain finding its energetically favorable three-d

Re: Are Philosophical Zombies possible?

2024-07-14 Thread John Clark
On Sun, Jul 14, 2024 at 9:35 AM Quentin Anciaux  wrote:

*> I think you miss the difference between UTM, Universal turing machine
> and turing machine...*
>
> *https://cs.stackexchange.com/questions/69197/difference-between-turing-machine-and-universal-turing-machine#:~:text=A%20Turing%20machine%20can%20be,input%20and%20generates%20some%20output
> .*
>


*It says "A UTM can be compared to a computer. It can take any program
and run it with some input and generates some output", and I agree with
that, PROVIDED that the tape, in addition to containing the program you
wish to run, ALSO contains information about the sort of computer the
program is to be run on, that is to say it defines and specifies the set of
internal states the Turing Machine will have. Thus a UTM can emulate
an Apple, Windows, LINUX, or any other type of digital computer if it is
given the correct set of internal states. And it's important to remember
that the very definition of "a different Turing Machine" is a machine with
a different set of internal states. A Universal Turing Machine has the
ability to turn into any Turing Machine if the information on how to do
that is included in the input tape. *

John K ClarkSee what's on my new list at  Extropolis





> *First of all, the very definition of "a different Turing Machine" is
>> a machine with a different set of internal states*. And there is not
>> just one Turing machine, there are an *infinite* number of them. *There
>> are 64 one state two symbol (zero and one) Turing Machines, 20,736 two
>> state, 16,777,216 three state, 25,600,000,000 four state, and 63,403,380,965,376
>> five state two symbol Turing Machines. *
>> five state two symbol T**uring Machines. *
>>
>> A Turing Machine with different sets of internal states will exhibit
>> different behavior even if given identical input states. I think what confuses
>> you is that it is possible to have a machine in which the tape not only
>> provides the program the machine should work on but also the set of
>> internal states that the machine has. In a way you could think of the tape
>> as providing not only the program but also the wiring diagram of the
>> computer. A universal Turing Machine is in an undefined state until the
>> input tape, *or something else*, puts it in one specific state.
>>
>> *Consider the Busy beaver function, if you feed in a tape with all zeros
>> on it into all 4 state Turing Machines and ask "which of those
>> 25,600,000,000 machines will print the most ones before stopping" (it's
>> important that the machine eventually stops), you will find this is not an
>> easy question. All the machines are operating on identical input tapes (all
>> zeros) but they behave differently, some stop almost immediately, others
>> just keep printing 1 forever, but for others the behavior is vastly more
>> complicated. It turns out that the winner is a set of states that prints
>> out 13 ones after making 107 moves. *
>>
>> *A five state Turing Machine behaves differently, we just found out that
>> a particular set of internal states prints 4098 ones after making
>> 47,176,870 moves. I wouldn't be surprised if the sixth Busy Beaver number
>> is not computable; we know for a fact that the value of any Busy Beaver
>> number for a 745 state Turing machine or larger is independent of ZFC
>> set theory. Right now all we know
>> about BB(6) is that it's larger than
>> 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10.*
>>
>> *The point of all this is that Turing Machines with different sets of
>> internal states behave very differently. *
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1LOPhmLU3PZjA_b-GLdXxMsivEWbiUthg92piymR5J2g%40mail.gmail.com.


Re: Are Philosophical Zombies possible?

2024-07-14 Thread PGC


On Sunday, July 14, 2024 at 3:35:59 PM UTC+2 Quentin Anciaux wrote:

I think you miss the difference between UTM, Universal turing machine and 
turing machine...

https://cs.stackexchange.com/questions/69197/difference-between-turing-machine-and-universal-turing-machine#:~:text=A%20Turing%20machine%20can%20be,input%20and%20generates%20some%20output
.


Right. It’s common to say Turing Machines when we mean Universal Turing 
Machines, and the difference is what I’ve been trying to outline in some 
recent posts. LLMs, like GPT-4, are fundamentally deterministic when their 
state (model parameters) and inputs are fixed. Generating text involves 
(complex) mathematical transformations based on the input and the model's 
weights, which are determined during the training phase.

As further evidence for this, you have predictable outputs: If you provide 
the exact same input and ensure the model state is unchanged, the output 
will be the same. While LLMs can incorporate randomness (e.g., through 
temperature settings during text generation), this is externally introduced 
and can be controlled. Contrasting them to computers further: LLMs do not 
have an inherent mechanism to manage state transitions over time beyond 
their initial training. LLMs follow learned patterns rather than executing 
predefined algorithms with control flow structures like loops and 
conditionals.
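The determinism point can be sketched with a toy decoder (purely illustrative: the logit table and token names below are invented, not any real model's). Greedy argmax decoding is a pure function of the weights and the input; sampled decoding varies only through a random source injected from outside.

```python
import math
import random

# Toy next-token "model": a fixed table of logits standing in for the
# learned weights of an LLM. Names and numbers here are illustrative.
LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "<end>": 0.1},
    "cat": {"sat": 2.5, "ran": 1.0, "<end>": 0.5},
    "dog": {"ran": 2.2, "sat": 0.8, "<end>": 0.4},
    "sat": {"<end>": 3.0},
    "ran": {"<end>": 3.0},
}

def generate(start, temperature=0.0, rng=None):
    """Greedy decoding (temperature 0) is fully deterministic; any
    randomness must be injected from outside via `rng`."""
    token, out = start, [start]
    while token != "<end>":
        choices = LOGITS[token]
        if temperature == 0.0:
            token = max(choices, key=choices.get)  # argmax: no randomness
        else:
            toks = list(choices)
            weights = [math.exp(choices[t] / temperature) for t in toks]
            token = rng.choices(toks, weights=weights)[0]
        out.append(token)
    return out

# Same state + same input => same output, every time.
assert generate("the") == generate("the") == ["the", "cat", "sat", "<end>"]
```

Even the sampled path repeats if the same seeded generator is supplied twice, which is the sense in which the randomness is "externally introduced and can be controlled."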

LLMs run on computers and not the other way around. They excel in mimicking 
human-like reasoning by generating plausible continuations and answers 
based on vast amounts of learned data. However, their reasoning is 
associative and pattern-based, lacking true logical inference capabilities.



Re: Are Philosophical Zombies possible?

2024-07-14 Thread Quentin Anciaux
I think you miss the difference between a UTM (Universal Turing Machine)
and a Turing machine...

https://cs.stackexchange.com/questions/69197/difference-between-turing-machine-and-universal-turing-machine#:~:text=A%20Turing%20machine%20can%20be,input%20and%20generates%20some%20output
.



Re: Are Philosophical Zombies possible?

2024-07-14 Thread John Clark
On Sat, Jul 13, 2024 at 10:34 PM Brent Meeker  wrote:

> The machine is universal.  You don't need a different machine with
> different internal states.


First of all, the very definition of "a different Turing Machine" is
a machine with a different set of internal states. And there is not just
one Turing machine, there are an infinite number of them. There are
64 one-state two-symbol (zero and one) Turing Machines, 20,736 two-state,
16,777,216 three-state, 25,600,000,000 four-state, and 63,403,380,965,376
five-state two-symbol Turing Machines.

A Turing Machine with a different set of internal states will exhibit
different behavior even if given identical input. I think what confuses you
is that it is possible to have a machine in which the tape not only
provides the program the machine should work on but also the set of
internal states that the machine has. In a way you could think of the tape
as providing not only the program but also the wiring diagram of the
computer. A universal Turing Machine is in an undefined state until the
input tape, or something else, puts it in one specific state.

Consider the Busy Beaver function: if you feed a tape with all zeros on
it into all 4-state Turing Machines and ask "which of those 25,600,000,000
machines will print the most ones before stopping" (it's important that the
machine eventually stops), you will find this is not an easy question. All
the machines are operating on identical input tapes (all zeros) but they
behave differently; some stop almost immediately, others just keep printing
1 forever, but for others the behavior is vastly more complicated. It turns
out that the winner is a set of states that prints out 13 ones after making
107 moves.

A five-state Turing Machine behaves differently; we just found out that a
particular set of internal states prints 4098 ones after making 47,176,870
moves. I wouldn't be surprised if the sixth Busy Beaver number is not
computable; we know for a fact that any Busy Beaver number for a 745-state
Turing machine or larger is not computable. Right now all we know about
BB(6) is that it's larger than
10^10^10^10^10^10^10^10^10^10^10^10^10^10^10.

The point of all this is that Turing Machines with different sets of
internal states behave very differently.
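The "different internal states, different behavior" point can be made concrete with a minimal sketch of a Turing machine simulator, run here on the known 2-state busy beaver champion (the smallest case, so it fits in a few lines):

```python
from collections import defaultdict

# The 2-state, 2-symbol busy beaver champion as a transition table:
# (state, symbol read) -> (symbol to write, head move, next state).
# "H" is the halting state. Swap in any other table and the same
# simulator produces entirely different behavior.
BB2 = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run(table, state="A", max_steps=10**6):
    """Run a machine on an all-zero tape; return (steps taken, ones left)."""
    tape = defaultdict(int)  # blank tape of 0s, unbounded in both directions
    pos = steps = 0
    while state != "H" and steps < max_steps:
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(BB2))  # (6, 4): halts after 6 moves leaving four 1s on the tape
```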

John K Clark    See what's on my new list at Extropolis







Re: Are Philosophical Zombies possible?

2024-07-13 Thread Jason Resch
On Sat, Jul 13, 2024, 9:54 PM PGC  wrote:

>
>
> On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:
>
> Yes it's possible to have a universal Turing machine in the sense that you
> can run any program by just changing the tape, however ONLY if that tape
> has instructions for changing the set of states  that the machine can be
> in.
>
>
>
> It still boggles my mind that matter is Turing-complete.
>

Turing completeness, incredible as it is, is remarkably easy to come by.
You can achieve it with addition and multiplication, with billiard balls,
with cellular automata (Rule 110, or the Game of Life), with artificial
neurons, etc. That something as sophisticated as matter can achieve it is
to me less surprising than the fact that these far simpler things can.
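A sketch of just how little Rule 110 takes: the 8-entry update table is the whole "machine" (the Turing-completeness construction built on top of it is of course far more elaborate).

```python
# Rule 110, the one-dimensional cellular automaton Cook proved Turing
# complete. A cell's next value depends only on itself and its two
# neighbors; the number 110 (binary 01101110) encodes the update table:
# bit k of 110 is the output for the neighborhood whose bits equal k.
RULE = 110

def step(cells):
    """One synchronous update of a finite row (cells beyond the ends read 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 15 + [1]  # start from a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```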


And this despite parts of physics being not Turing emulable.
>
Finite physical systems can be simulated to any desired degree of
accuracy, and moreover all known laws of physics are computable. Which
parts of physics do you refer to when you say there are parts that aren't
Turing emulable?

Jason



Re: Are Philosophical Zombies possible?

2024-07-13 Thread PGC


On Sunday, July 14, 2024 at 3:51:27 AM UTC+2 John Clark wrote:

Yes it's possible to have a universal Turing machine in the sense that you 
can run any program by just changing the tape, however ONLY if that tape 
has instructions for changing the set of states  that the machine can be 
in. 



It still boggles my mind that matter is Turing-complete, and this despite 
parts of physics not being Turing emulable. We can implement Turing 
Machines with matter, and even with the constraints of the physical world, 
this appears to be the basic principle of brains, cells, and computers.

Just for clarity’s sake, we should distinguish the idea of a 
Turing/universal machine from some particular physical implementation, like 
some computer, tape machine, or LLM running on my table/in the cloud. By Turing 
machine, I mean a T machine u such that phi_u(x, y) = phi_x(y). We call “u” 
the computer, x is named the program, and y is the data. Of course, (x, y) 
is supposed to be a number (coding the two numbers x and y). And yeah, you 
can specify it with infinite tape, print, read, write heads, and many other 
formalisms that have proven equivalent etc. but the class of functions is 
the same. The set of partially computable functions from N to N with the 
standard definitions and axioms.
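The phi_u(x, y) = phi_x(y) idea can be sketched in a few lines if we cheat and let programs be source strings rather than coded numbers (a toy, not the arithmetical construction):

```python
# A toy rendering of phi_u(x, y) = phi_x(y): a "universal" evaluator u
# that, given a program x (here, Python source defining a one-argument
# function f) and data y, computes whatever x computes on y. The real
# construction codes x and y as single natural numbers; using source
# strings sidesteps that bookkeeping while keeping the idea.
def u(x, y):
    scope = {}
    exec(x, scope)        # "assemble the machine described by x"...
    return scope["f"](y)  # ...then run it on the data y

square = "def f(n):\n    return n * n"
double = "def f(n):\n    return 2 * n"
assert u(square, 7) == 49  # phi_u(square, 7) == phi_square(7)
assert u(double, 7) == 14  # one machine u, different behavior per program
```

The one fixed machine u changes its behavior entirely with the program on its "tape," which is the sense in which it is universal.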

There are a lot of posts distinguishing this computer here, that LLM there, 
that brain in my head etc. ostensively, as if we knew what we were talking 
about. If we believe we are Turing emulable at some level of description, 
then we are not able to distinguish between ourselves and our experiences 
when emulated in say Python, which is emulated by Rust, which is emulated 
by Swift, which is emulated by Kotlin, which is emulated by Go, which is 
emulated by Elixir, which is emulated by Julia, which is emulated by 
TypeScript, which is emulated by R, which is emulated by a physical 
universe, itself emulated by arithmetic (e.g. assuming arithmetical realism 
like Russell and Bruno), from “our self” emulated in Rust, emulated by 
Python, emulated by Go, emulated by Swift, emulated by Julia, emulated by 
Elixir, emulated by Kotlin, emulated by R, emulated by TypeScript, emulated 
by arithmetic, emulated by a physical universe… 

That’s the difficulty of defining what a physical instantiation of a 
computation is (See Maudlin and MGA). For if we could distinguish those 
computations, we’d have something funky in consciousness, which would not 
be Turing emulable, falsifying the arithmetical realism type approaches. 
And if you have that, I’d like to know everything about you, your diet, 
reading habits, pets, family, beverages, medicines etc. and whether 
something like gravity is Turing emulable, even if I guess it isn’t. Send 
me that message in private though and don’t publish anything. 



Re: Are Philosophical Zombies possible?

2024-07-13 Thread John Clark
On Sat, Jul 13, 2024 at 8:37 PM Brent Meeker  wrote:


>> Well that certainly is not true! There is a Turing Machine for any
>> computable task, but any PARTICULAR  Turing Machine has a finite number of
>> internal states and can only do one thing. If you want something else done
>> then you are going to have to use a Turing Machine with a different set
>> of internal states.
>
>
> Or a different tape/program. "A Turing machine is a mathematical model of
> computation describing an abstract machine that manipulates symbols on a
> strip of tape according to a table of rules. Despite the model's
> simplicity, it is capable of implementing any computer algorithm."
>


Yes exactly. As I said before, if you want a Turing Machine to do something
different then you've got to pick a Turing machine with a different set of
internal states, or to say the same thing with different words, you've got
to program it differently. For every computable function there is a Turing
Machine that will compute it if it has the correct set of internal states.


John K Clark    See what's on my new list at Extropolis






Re: Are Philosophical Zombies possible?

2024-07-13 Thread Brent Meeker



On 7/13/2024 3:21 PM, John Clark wrote:
On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker  
wrote:


> All Turing machines have the same computational capability.

Well that certainly is not true! There is a Turing Machine for any 
computable task, but any PARTICULAR  Turing Machine has a finite 
number of internal states and can only do one thing. If you want 
something else done then you are going to have to use a Turing Machine 
with a different set of internal states.

Or a different tape/program.

"A Turing machine is a mathematical model of computation describing an 
abstract machine that manipulates symbols on a strip of tape according 
to a table of rules.Despite the model's simplicity, it is /capable of 
implementing any computer algorithm./"


Brent




Re: Are Philosophical Zombies possible?

2024-07-13 Thread Jason Resch
On Sat, Jul 13, 2024, 6:22 PM John Clark  wrote:

> On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker 
> wrote:
>
> All Turing machines have the same computational capability.
>
>
> Well that certainly is not true! There is a Turing Machine for any
> computable task, but any PARTICULAR  Turing Machine has a finite number of
> internal states and can only do one thing. If you want something else done
> then you are going to have to use a Turing Machine with a different set
> of internal states.
>

The number of internal states a Turing machine has is unrelated to a Turing
machine's universality. Think of internal states as the instruction set in
a CPU. A CPU can only be in so many states, but pair it with a memory and a
loop, and it can compute anything.

I think what you are saying makes sense if you consider a Turing machine
running a particular fixed program. Then the Turing machine acts like some
particular machine. And if you want it to act differently, you need to
provide a different program.

Jason




Re: Are Philosophical Zombies possible?

2024-07-13 Thread John Clark
On Sat, Jul 13, 2024 at 4:29 PM Brent Meeker  wrote:

*> All Turing machines have the same computational capability. *


Well that certainly is not true! There is a Turing Machine for any
computable task, but any PARTICULAR  Turing Machine has a finite number of
internal states and can only do one thing. If you want something else done
then you are going to have to use a Turing Machine with a different set of
internal states.

The number of n-state 2-symbol Turing Machines that exist is (4(n+1))^(2n).
This is because each of the n states has two table entries (one for each
symbol it might read), and each entry chooses among n+1 next states (the n
states plus the halting state), 2 symbols to write, and 2 directions to
move the read head. So for example there are 16,777,216 different
three-state Turing Machines, and 25,600,000,000 different four-state Turing
Machines.
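A quick sanity check of the counting formula (a sketch; each of the 2n table entries picks among 2 symbols to write, 2 head moves, and n+1 next states, halt included):

```python
# Count n-state, 2-symbol Turing machines: (4(n+1))^(2n).
# 2n table entries (n states x 2 symbols read), each with
# 2 * 2 * (n + 1) = 4(n + 1) possible actions.
def count_machines(n):
    return (4 * (n + 1)) ** (2 * n)

assert count_machines(1) == 64
assert count_machines(3) == 16_777_216
assert count_machines(4) == 25_600_000_000
assert count_machines(5) == 63_403_380_965_376  # 24**10
```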

John K Clark    See what's on my new list at Extropolis



Re: Are Philosophical Zombies possible?

2024-07-13 Thread John Clark
On Sat, Jul 13, 2024 at 4:18 PM Brent Meeker  wrote:

>> If the AI was trying to deceive the human into believing it was not a
>> computer then it would simply say something like "I am in Vancouver,
>> Canada, and it's not raining outside, it's snowing".
>
> Which could easily be checked in real time.

Yes you can easily check if it's snowing in Vancouver right now, so why
couldn't the AI do the same thing? If you insist that the human
interrogator is allowed to have access to the Internet but the AI is not
then you are no longer talking about the Turing Test.

>> I don't see how a question like that could help you figure out the
>> nature of an AI's mind, or any mind for that matter, even if the AI was
>> ordered to tell the truth. The position of a mind in 3D space is a nebulous
>> concept; if your brain is in one place and your sense organs are in another
>> place, and you're thinking
>
> At other times you say consciousness is just how data feels when being
> processed.


Correct.

> It's processed in your brain... which has a definite location.

But the position is not unique, data can be processed anywhere and the
result is the same. And if exactly the same data is being processed in
exactly the same way at two different places, or even 1 million different
places, then only one consciousness is produced.  Besides, the AI may not
even know or care where its data processors are. And if you aren't
consciously aware that your data is being processed in Vancouver Canada
then what sense does it make to say that your consciousness is located in
Vancouver Canada even though you don't consciously know it? If you're
thinking about Peking at the time it would be slightly less ridiculous to
say that your consciousness is located in China rather than Canada. But
only slightly less ridiculous.


> I just asked "Where are you?"  Not "Where is your mind?"

If "you" are not Brent Meeker's mind then what are "you"? Asking where
consciousness is located is like asking where Beethoven's ninth Symphony is
located. A proper noun has a unique position but an adjective does not, and
I am an adjective, I am the way atoms behave when they are organized in a
Johnkclarkian way.

>> I think it's a nonsense question because  "you" should not be thought of
>> as a pronoun but as an adjective.  You are the way atoms behave when they
>> are organized in a Brentmeekerian way.
>
>
>
> And those atoms have a location in order to interact.


But there is no unique location, any place will do fine, and any atoms
will work
fine because all carbon atoms are identical, atoms don't have your name
engraved on them. The location of the interaction has no effect on the
intelligence or on the consciousness, although the location of the sense
organs and the hands could.
See what's on my new list at Extropolis



Re: Are Philosophical Zombies possible?

2024-07-13 Thread Jason Resch
On Sat, Jul 13, 2024, 4:18 PM Brent Meeker  wrote:

>
>
> On 7/13/2024 4:07 AM, John Clark wrote:
>
> On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker 
> wrote:
>
> An AI needs to play dumb in order to fool a human into thinking it is
>>> human. Don't you find that fact to be compelling?
>>
>>
>> No, it only passed because the human interlocutor didn't ask the right
>> questions, like "Where are you?" and "Is it raining outside?".
>>
>
> If the AI was trying to deceive the human into believing it was not a
> computer then it would simply say something like "I am in Vancouver,
> Canada, and it's not raining outside, it's snowing".
>
> Which could easily be checked in real time.  Any one question won't
> resolve whether it's a person or not, but a sequence can provide good
> evidence. Next question: "Is there a phone in your room?"  Answer: "Yes."
> Call the number and see if anyone answers, etc.  The point is a human IS
> in a specific place and can act there.  An LLM AI isn't anyplace in
> particular.
>


The reason for conducting the test by text (rather than in person with an
android body) was to prevent external clues from spoiling the result. To be
completely fair, perhaps the test needs to be amended to judge between an
AI and an uploaded human brain.

Jason


> And I don't see how a question like that could help you figure out the
> nature of an AI's mind, or any mine for that matter, even if the AI was
> ordered to tell the truth. The position of a mind in 3D space is a nebulous
> concept; if your brain is in one place and your sense organs are in another
> place, and you're thinking
>
> At other times you say consciousness is just how data feels when being
> processed.  It's processed in your brain...which has a definite location.
>
> about yet another place, then where exactly is the position of your mind?
>
> I just asked "Where are you?"  Not "Where is your mind?"
>
> I think it's a nonsense question because  "you" should not be thought of
> as a pronoun but as an adjective.  You are the way atoms behave when they
> are organized in a Brentmeekerian way.
>
> And those atoms have a location in order to interact.
>
> Brent
>
> So asking a question like that is like asking where is "big" located or
> the color yellow.
>
>  See what's on my new list at  Extropolis
> 


Re: Are Philosophical Zombies possible?

2024-07-13 Thread Brent Meeker



On 7/13/2024 5:04 AM, John Clark wrote:
On Fri, Jul 12, 2024 at 7:28 PM Brent Meeker  
wrote:


/> So a Turing machine is more powerful than a human brain/


Yes, anything your brain can do there is a Turing Machine that can do 
it too, including committing all your errors;  but there are lots of 
Turing Machines that can do lots of things that your brain cannot.  
However your brain does have one advantage, it's a real physical 
thing, but a Turing Machine is not, it's more like a schematic diagram 
of the underlying logic of a brain or computer at the most detailed 
and fundamental level possible. A Turing Machine can't calculate 
anything unless it's actually constructed, and for that you need atoms.


/> therefore a human can be considered a Turing machine. /


Yes, you can be considered to be a Turing Machine, a particular Turing 
Machine, but you are NOT ALL Turing Machines


All Turing machines have the same computational capability.

/"A Turing machine is a mathematical model of computation describing an 
abstract machine that manipulates symbols on a strip of tape according 
to a table of rules. Despite the model's simplicity, it is capable of 
implementing any computer algorithm."/
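The "table of rules" model in that definition is small enough to sketch directly. Below is a toy simulator (an illustrative aside, not code from the thread); the rule table shown, which appends a 1 to a unary string, is an invented example:

```python
# Minimal Turing machine simulator: a tape, a head, and a table of rules.
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]  # consult the rule table
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Invented rule table: (state, read symbol) -> (write, move, next state).
# It scans right over a run of 1s and appends one more 1 (unary successor).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_tm(rules, "111"))  # prints "1111"
```

Note that the machine itself is just data (the `rules` dict); the same loop runs any machine whose table you can write down, which is the sense in which such a simulator is general.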


Brent


/> Invalid inference. /

I don't think so.  John K Clark    See what's on my new list at 
Extropolis





Re: Are Philosophical Zombies possible?

2024-07-13 Thread Brent Meeker



On 7/13/2024 4:07 AM, John Clark wrote:
On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker  
wrote:


An AI needs to play dumb in order to fool a human into
thinking it is human. Don't you find that fact to be compelling?


/No, it only passed because the human interlocutor didn't ask the
right questions; like, "Where are you?" and  "Is it raining
outside?". /


If the AI  was trying to deceive the human into believing it was not a 
computer then it would simply say something like "/I am in Vancouver 
Canada and it's not raining outside it's snowing/".
Which could easily be checked in real time.  Any one question won't 
resolve whether it's a person or not, but a sequence can provide good 
evidence.  Next question: "Is there a phone in your room?"  Answer: 
"Yes."  Call the number and see if anyone answers, etc.  The point is a 
human IS in a specific place and can act there.  An LLM AI isn't 
anyplace in particular.
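The step from "any one question won't resolve it" to "a sequence can provide good evidence" is just Bayesian accumulation of likelihood ratios. A sketch (the specific checks and probabilities are invented for illustration):

```python
# Posterior probability of "human" after a sequence of passed checks.
# Each check is (P(pass | human), P(pass | AI)); the numbers are assumptions.
def posterior(prior_human, checks):
    odds = prior_human / (1 - prior_human)
    for p_human, p_ai in checks:
        odds *= p_human / p_ai  # multiply in each check's likelihood ratio
    return odds / (1 + odds)

checks = [
    (0.95, 0.50),  # claimed local weather matches a live report
    (0.90, 0.10),  # someone picks up the phone in "their" room
    (0.90, 0.20),  # another local detail confirmed in real time
]
p = posterior(0.5, checks)  # rises from 0.5 to about 0.99
```

No single factor is decisive, but their product is, which matches the point about a sequence of checks.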


And I don't see how a question like that could help you figure out the 
nature of an AI's mind, or any mind for that matter, even if the AI 
was ordered to tell the truth. The position of a mind in 3D space is a 
nebulous concept; if your brain is in one place and your sense organs 
are in another place, and you're thinking
At other times you say consciousness is just how data feels when being 
processed.  It's processed in your brain...which has a definite location.



about yet another place, then where exactly is the position of your mind?

I just asked "Where are you?"  Not "Where is your mind?"

I think it's a nonsense question because  "you" should not be thought 
of as a pronoun but as an adjective.  You are the way atoms behave 
when they are organized in a Brentmeekerian way.

And those atoms have a location in order to interact.

Brent

So asking a question like that is like asking where is "big" located 
or the color yellow.


See what's on my new list at Extropolis 









Re: Are Philosophical Zombies possible?

2024-07-13 Thread John Clark
On Sat, Jul 13, 2024 at 8:20 AM 'spudboy...@aol.com' via Everything List <
everything-list@googlegroups.com> wrote:

>
*> OpenAI reportedly nears breakthrough with “reasoning” AI, reveals
> progress framework Five-level AI classification system probably best seen
> as a marketing exercise.(More profoundly,OPENAI's 5 tier system for future
> capabilities. Looks like we're at '2')*
>
> *OpenAI reportedly nears breakthrough with “reasoning” AI, reveals
> progress framework*
> 
>


The interesting thing is that OpenAI says that while GPT-4 can answer
questions about as well as a bright high school student can, GPT-5 will be
able to correctly answer the sort of questions a PhD candidate will receive
during the verbal defense of their thesis. And GPT-5 is only level two. As
you point out it's a 5 tier system.
 John K Clark    See what's on my new list at  Extropolis







Re: Are Philosophical Zombies possible?

2024-07-13 Thread 'spudboy...@aol.com' via Everything List
 Related Ars Technica article titled: 

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress 
framework

Five-level AI classification system probably best seen as a marketing exercise.
(More profoundly, OpenAI's 5-tier system for future capabilities. Looks like 
we're at '2'.)

On Saturday, July 13, 2024 at 08:05:28 AM EDT, John Clark 
 wrote:   

 On Fri, Jul 12, 2024 at 7:28 PM Brent Meeker  wrote:


  > So a Turing machine is more powerful than a human brain

Yes, anything your brain can do there is a Turing Machine that can do it too, 
including committing all your errors;  but there are lots of Turing Machines 
that can do lots of things that your brain cannot.  However your brain does 
have one advantage, it's a real physical thing, but a Turing Machine is not, 
it's more like a schematic diagram of the underlying logic of a brain or 
computer at the most detailed and fundamental level possible. A Turing Machine 
can't calculate anything unless it's actually constructed, and for that you 
need atoms.
 
> therefore a human can be considered a Turing machine. 

Yes, you can be considered to be a Turing Machine, a particular Turing Machine, 
but you are NOT ALL Turing Machines  
> Invalid inference. 
I don't think so.  John K Clark    See what's on my new list at  Extropolis





Re: Are Philosophical Zombies possible?

2024-07-13 Thread John Clark
On Fri, Jul 12, 2024 at 7:28 PM Brent Meeker  wrote:

 *> So a Turing machine is more powerful than a human brain*
>

Yes, anything your brain can do there is a Turing Machine that can do it
too, including committing all your errors;  but there are lots of Turing
Machines that can do lots of things that your brain cannot.  However your
brain does have one advantage, it's a real physical thing, but a Turing
Machine is not, it's more like a schematic diagram of the underlying logic
of a brain or computer at the most detailed and fundamental level possible.
A Turing Machine can't calculate anything unless it's actually constructed,
and for that you need atoms.

>
> *> therefore a human can be considered a Turing machine. *
>

Yes, you can be considered to be a Turing Machine, a particular Turing
Machine, but you are NOT ALL Turing Machines


> *> Invalid inference. *
>
I don't think so.  John K Clark    See what's on my new list at  Extropolis



Re: Are Philosophical Zombies possible?

2024-07-13 Thread John Clark
On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker  wrote:

An AI needs to play dumb in order to fool a human into thinking it is
>> human. Don't you find that fact to be compelling?
>
>
> *No, it only passed because the human interlocutor didn't ask the right
> questions; like, "Where are you?" and  "Is it raining outside?". *
>

If the AI  was trying to deceive the human into believing it was not a
computer then it would simply say something like "*I am in Vancouver Canada
and it's not raining outside it's snowing*".  And I don't see how a
question like that could help you figure out the nature of an AI's mind, or
any mine for that matter, even if the AI was ordered to tell the truth. The
position of a mind in 3D space is a nebulous concept; if your brain is in
one place and your sense organs are in another place, and you're thinking
about yet another place, then where exactly is the position of your mind? I
think it's a nonsense question because  "you" should not be thought of as a
pronoun but as an adjective.  You are the way atoms behave when they are
organized in a Brentmeekerian way. So asking a question like that is like
asking where is "big" located or the color yellow.

 See what's on my new list at  Extropolis




Re: Are Philosophical Zombies possible?

2024-07-12 Thread Brent Meeker



On 7/12/2024 3:24 AM, John Clark wrote:
On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker  
wrote:


>> Sometimes on some problems the human brain could be considered
as being Turing Complete, otherwise we would never be able to
do anything that was intelligent.

/> ??? How on Earth do you reach that conclusion.
/


I reached that conclusion because I know that anything that can 
process data, and the human brain can process data, can be emulated by 
a Turing Machine. And a Turing Machine is Turing Complete.


That says that on some (other) problems the human brain may not be able to 
solve them while a Turing machine can.  So a Turing machine is more 
powerful than a human brain, therefore a human can be considered a Turing 
machine.  Invalid inference.


Brent


John K Clark    See what's on my new list at Extropolis 







Re: Are Philosophical Zombies possible?

2024-07-12 Thread Brent Meeker



On 7/12/2024 3:15 AM, John Clark wrote:



On Thu, Jul 11, 2024 at 8:09 PM Brent Meeker  
wrote:


> /In case you've forgotten, the Turing test was based on text only
communication between an interlocutor asked to distinguish between
a computer pretending to be a human and a man or woman pretending
to be a woman or man./


Yes but that is an unimportant detail, the essence of the Turing Test 
is that whatever method you use to determine the consciousness or lack 
of it in one of your fellow human beings you should use that same 
method when judging the consciousness of a computer.


/> It's already been passed by some LLM's by dumbing-down their
response/.


Don't you find that fact to be compelling? An AI needs to play dumb in 
order to fool a human into thinking it is human.


No, it only passed because the human interlocutor didn't ask the right 
questions; like, "Where are you?" and  "Is it raining outside?".


Now I think an LLM could be trained to imagine a consistent model of 
itself as a human being, i.e. having a location, having friends, 
motives, a history,...which would fool everyone who didn't actually 
check reality.


Brent



Re: Are Philosophical Zombies possible?

2024-07-12 Thread John Clark
On Fri, Jul 12, 2024 at 9:28 AM Jason Resch  wrote:

>> I know that anything that can process data, and the human brain can
>> process data, can be emulated by a Turing Machine. And a Turing Machine is
>> Turing Complete.
>>
>
>
> *Perhaps you mean the brain is "Turing emulable" i.e. computable here,
> rather than "Turing complete" (which is having the capacity to emulate any
> other Turing machine).*
>

OK but by that definition nothing physical is Turing complete because there
are an infinite number of Turing Machines and nothing in the non-abstract
real world can emulate all of them.


John K Clark    See what's on my new list at  Extropolis



Re: Are Philosophical Zombies possible?

2024-07-12 Thread John Clark
On Fri, Jul 12, 2024 at 9:33 AM Jason Resch  wrote:

*> Do you think that passing the Argonov test would constitute positive
> proof of consciousness?*


Maybe, but unlike Turing's Test, Argonov's Test  will tell you nothing
about intelligence because it's  too easy.

 See what's on my new list at  Extropolis





Re: Are Philosophical Zombies possible?

2024-07-12 Thread Jason Resch
On Fri, Jul 12, 2024, 7:02 AM John Clark  wrote:

> On Thu, Jul 11, 2024 at 7:01 PM Jason Resch  wrote:
>
> >> Who judges if the "phenomenal judgments" of the machine are correct or
>>> incorrect? Even humans can't agree among themselves about most
>>> philosophical matters, certainly that's true of members of this list.
>>>
>>
>> *> They don't have to be correct, as far as I know. The machine just has
>> to make phenomenal judgements (without prior training on such topics).*
>>
>
> The AI's responses don't have to be correct?!  Generating philosophical 
> blather
> about consciousness is the easiest thing in the world because there is
> nothing to work on, there are no facts that the blather must fit. For it to
> rise a little above the level of blather you've got to start with an
> unproven axiom such as "*consciousness is the way data feels when it is
> being processed and thus I am not the only conscious being in the universe*".
>
>
>
>> *> Failing the test doesn't imply a lack of consciousness. But passing
>> the test implies the presence of consciousness.*
>>
>
> So the Argonov Test has the same flaw that the Turing Test has, and is far
> easier to pass. For a computer to pass the Turing Test it must be able to
> converse intelligently, but not too intelligently, ON ANY SUBJECT, but to
> pass the  Argonov Test it only needs to be able to prattle on about
> consciousness.
>
>
> *> there must be a source of information to permit the making of
>> phenomenal judgements, and since the machine was not trained on them, what
>> else, would you propose that source could be, other than consciousness?*
>>
>
> From your questions to the AI. When I meet someone we don't spontaneously
> start talking about consciousness, it only happens when one of us steers
> the conversation into that direction, and that seldom happens (except on
> this list) because usually both of us would rather talk about other things.
>
>

Do you think that passing the Argonov test would constitute positive proof
of consciousness?

Jason



>   See what's on my new list at  Extropolis
> 



Re: Are Philosophical Zombies possible?

2024-07-12 Thread Jason Resch
On Fri, Jul 12, 2024, 6:25 AM John Clark  wrote:

> On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker 
> wrote:
>
> >> Sometimes on some problems the human brain could be considered as
>>> being Turing Complete, otherwise we would never be able to do anything that
>>> was intelligent.
>>
>>
>> *> ??? How on Earth do you reach that conclusion.*
>>
>
> I reached that conclusion because I know that anything that can process
> data, and the human brain can process data, can be emulated by a Turing
> Machine. And a Turing Machine is Turing Complete.
>


Perhaps you mean the brain is "Turing emulable" i.e. computable here,
rather than "Turing complete" (which is having the capacity to emulate any
other Turing machine).

Jason


> John K Clark    See what's on my new list at  Extropolis



Re: Are Philosophical Zombies possible?

2024-07-12 Thread John Clark
On Thu, Jul 11, 2024 at 4:58 PM Brent Meeker  wrote:

>  *elsewhere you have agreed that many data processing machines are not
> conscious because they take no actions based on the data being true.  *
>

I don't recall ever saying that! What I may have said is, something may be
intelligent and thus conscious but I would have no way of knowing that
unless it acted intelligently. Maybe a rock is processing data in some way
that I don't understand and thus is conscious. But I doubt it.

*> I think of the paramecium that swims to the left because the water is too
> salty on the right as being conscious.*
>

Probably true. Intelligence is not an all-or-nothing matter, and I have
fundamental evidence that the same is true for consciousness: I'm more
conscious when I try to solve a calculus problem than I am when I'm about
to fall asleep.

 John K Clark    See what's on my new list at  Extropolis



Re: Are Philosophical Zombies possible?

2024-07-12 Thread John Clark
On Thu, Jul 11, 2024 at 7:01 PM Jason Resch  wrote:

>> Who judges if the "phenomenal judgments" of the machine are correct or
>> incorrect? Even humans can't agree among themselves about most
>> philosophical matters, certainly that's true of members of this list.
>>
>
> *> They don't have to be correct, as far as I know. The machine just has
> to make phenomenal judgements (without prior training on such topics).*
>

The AI's responses don't have to be correct?!  Generating philosophical blather
about consciousness is the easiest thing in the world because there is
nothing to work on, there are no facts that the blather must fit. For it to
rise a little above the level of blather you've got to start with an
unproven axiom such as "*consciousness is the way data feels when it is
being processed and thus I am not the only conscious being in the universe*".



> *> Failing the test doesn't imply a lack of consciousness. But passing the
> test implies the presence of consciousness.*
>

So the Argonov Test has the same flaw that the Turing Test has, and is far
easier to pass. For a computer to pass the Turing Test it must be able to
converse intelligently, but not too intelligently, ON ANY SUBJECT, but to
pass the  Argonov Test it only needs to be able to prattle on about
consciousness.


*> there must be a source of information to permit the making of phenomenal
> judgements, and since the machine was not trained on them, what else, would
> you propose that source could be, other than consciousness?*
>

From your questions to the AI. When I meet someone we don't spontaneously
start talking about consciousness, it only happens when one of us steers
the conversation into that direction, and that seldom happens (except on
this list) because usually both of us would rather talk about other things.


  See what's on my new list at  Extropolis




Re: Are Philosophical Zombies possible?

2024-07-12 Thread John Clark
On Thu, Jul 11, 2024 at 7:04 PM Brent Meeker  wrote:

>> Sometimes on some problems the human brain could be considered as being
>> Turing Complete, otherwise we would never be able to do anything that was
>> intelligent.
>
>
> *> ??? How on Earth do you reach that conclusion.*
>

I reached that conclusion because I know that anything that can process
data, and the human brain can process data, can be emulated by a Turing
Machine. And a Turing Machine is Turing Complete.

John K Clark    See what's on my new list at  Extropolis



Re: Are Philosophical Zombies possible?

2024-07-12 Thread John Clark
On Thu, Jul 11, 2024 at 8:09 PM Brent Meeker  wrote:

> > *In case you've forgotten, the Turing test was based on text only
> communication between an interlocutor asked to distinguish between a
> computer pretending to be a human and a man or woman pretending to be a
> woman or man.*
>

Yes but that is an unimportant detail, the essence of the Turing Test is
that whatever method you use to determine the consciousness or lack of it
in one of your fellow human beings you should use that same method when
judging the consciousness of a computer.

*> It's already been passed by some LLM's by dumbing-down their response*.
>

Don't you find that fact to be compelling? An AI needs to play dumb in
order to fool a human into thinking it is human.

See what's on my new list at  Extropolis




>
>
> But as I said before, the Turing Test is not perfect, however it's all
> we've got. If something passes the test then it's intelligent and
> conscious. If fails the test then it may or may not be intelligent and or
> conscious



Re: Are Philosophical Zombies possible?

2024-07-12 Thread 'Cosmin Visan' via Everything List
@Jason. Uuu... big boy believing in Santa Claus! Way to go!

On Tuesday 9 July 2024 at 14:50:10 UTC+3 Jason Resch wrote:

>
>
> On Tue, Jul 9, 2024, 4:05 AM 'Cosmin Visan' via Everything List <
> everyth...@googlegroups.com> wrote:
>
>> So, where is Santa Claus ? 
>
>
> If he's possible in this universe he exists very far away. If he's not 
> possible in this universe but possible in other universes then he exists in 
> some subset of those universes where he is possible. If he's not logically 
> possible he doesn't exist anywhere.
>
>
> Also, does he bring presents to all the children in the world in 1 night ? 
>> How does he do that ?
>>
>
> He sprinkles fairy dust all over the planet (nano bot swarms) which travel 
> down chimneys to self-assemble presents from ambient matter, after they 
> scan the brains of sleeping children to see if they are naughty or nice 
> and what present they hoped for.
>
> Jason 
>
>
>
>> On Tuesday 9 July 2024 at 07:31:46 UTC+3 Jason Resch wrote:
>>
>>>
>>>
>>> On Mon, Jul 8, 2024, 6:38 PM 'Cosmin Visan' via Everything List <
>>> everyth...@googlegroups.com> wrote:
>>>
 So based on your definition, Santa Claus exists.

>>>
>>> I believe everything possible exists.
>>>
>>> That is the idea this mail list was created to discuss, after all. (That 
>>> is why it is called the "everything list")
>>>
>>> Jason 
>>>
>>>
>>>
 On Tuesday 9 July 2024 at 00:47:28 UTC+3 Jason Resch wrote:

>
>
> On Mon, Jul 8, 2024, 5:17 PM 'Cosmin Visan' via Everything List <
> everyth...@googlegroups.com> wrote:
>
>> Brain doesn't exist.
>
>
> Then it exists as an object in consciousness, which is as much as 
> exist would mean under idealism. Rather than say things don't exist, I 
> think it would be better to redefine what is meant by existence.
>
>
> "Brain" is just an idea in consciousness.
>
>
> Sure, and all objects exist in the mind of God. So "exist" goes back 
> to meaning what it has always meant, as Markus Mueller said (roughly): "A 
> exists for B, when changing the state of A can change the state of B, and 
> vice versa, under certain auxiliary conditions."
>
>
> See my papers, like "How Self-Reference Builds the World": 
>> https://philpeople.org/profiles/cosmin-visan
>
>
>>
> I have, and replied with comments and questions. You, however, 
> dismissed them as me not having read your paper.
>
> Have you seen my paper on how computational observers build the world? 
> It reaches a similar conclusion to yours:
>
> https://philpeople.org/profiles/jason-k-resch
>
> Jason 
>
>
>
>> On Monday 8 July 2024 at 23:35:12 UTC+3 Jason Resch wrote:
>>
>>>
>>>
>>> On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:
>>>

 On Mon, Jul 8, 2024 at 2:12 PM Jason Resch  
 wrote:

 *>Consciousness is a prerequisite of intelligence.*
>

 I think you've got that backwards, intelligence is a prerequisite 
 of consciousness. And the possibility of intelligent ACTIONS is a  
 prerequisite for Darwinian natural selection to have evolved it.

>>>
>>> I disagree, but will explain below.
>>>
>>>  

> *> One can be conscious without being intelligent,*
>

 Sure. 

>>>
>>> I define intelligence by something capable of intelligent action.
>>>
>>> Intelligent action requires non-random choice: choice informed by 
>>> information from the environment.
>>>
>>> Having information about the environment (i.e. perceptions) is 
>>> consciousness. You cannot have perceptions without there being some 
>>> process 
>>> or thing to perceive them.
>>>
>>> Therefore perceptions (i.e. consciousness) is a requirement and 
>>> precondition of being able to perform intelligent actions.
>>>
>>> Jason 
>>>
>>> The Turing Test is not perfect, it has a lot of flaws, but it's all 
 we've got. If something passes the Turing Test then it's intelligent 
 and 
 conscious, but if it fails the test then it may or may not be 
 intelligent 
 and or conscious. 

  *You need to have perceptions (of the environment, or the current 
> situation) in order to act intelligently. *


 For intelligence to have evolved, and we know for a fact that it 
 has, you not only need to be able to perceive the environment, you 
 also need to be able to manipulate it. That's why zebras didn't evolve 
 great intelligence, they have no hands, so a brilliant zebra wouldn't 
 have 
 a great advantage over a dumb zebra, in fact he'd probably be at a 
 disadvantage because a big brain is a great energy hog.  
   John K Clark    See what's on my new list at Extropolis

Re: Are Philosophical Zombies possible?

2024-07-12 Thread 'Cosmin Visan' via Everything List
Who cares about the Turing test? What does it have to do with being alive? =)))

On Friday 12 July 2024 at 03:09:57 UTC+3 Brent Meeker wrote:

>
>
> On 7/11/2024 1:56 PM, John Clark wrote:
>
> On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker  wrote:
>
> *> A rock, along with many other things, can't pass a first grade 
>> arithmetic test either; but that doesn't show that anything that can't pass 
>> a first grade arithmetic test is unintelligent or unconscious, as for 
>> example an octopus or a 3yr old child.*
>>
>
> And because of their failure to pass a first year arithmetic test we would 
> say that a rock, an octopus and a three year old child are not behaving 
> very intelligently. 
>
>
>
> *In case you've forgotten, the Turing test was based on text only 
> communication between an interlocutor asked to distinguish between a 
> computer pretending to be a human and a man or woman pretending to be a 
> woman or man.  It's already been passed by some LLMs by dumbing-down their 
> response.  It may be all you've got but it's a very poor test that can't 
> tell the difference between a 3yr old and a rock. Brent*
>
>
> But as I said before, the Turing Test is not perfect, however it's all 
> we've got. If something passes the test then it's intelligent and 
> conscious. If it fails the test then it may or may not be intelligent and/or 
> conscious 
>
> See what's on my new list at  Extropolis 
> 
> asb
>
>
>
>> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-li...@googlegroups.com.
>
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CAJPayv0VcZSw-%2Bk1McSRRw6DJbwpCz7efp1fA1C9jkRz9xe%2B7Q%40mail.gmail.com
>  
> 
> .
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/e007f448-a2be-42cf-8c70-65c6cb8a87a1n%40googlegroups.com.


Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker



On 7/11/2024 1:56 PM, John Clark wrote:
On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker  
wrote:


/> A rock, along with many other things, can't pass a first grade
arithmetic test either; but that doesn't show that anything that
can't pass a first grade arithmetic test is unintelligent or
unconscious, as for example an octopus or a 3yr old child./


And because of their failure to pass a first year arithmetic test we 
would say that a rock, an octopus and a three year old child are not 
behaving very intelligently.
*In case you've forgotten, the Turing test was based on text only 
communication between an interlocutor asked to distinguish between a 
computer pretending to be a human and a man or woman pretending to be a 
woman or man.  It's already been passed by some LLMs by dumbing-down 
their response.  It may be all you've got but it's a very poor test that 
can't tell the difference between a 3yr old and a rock.


Brent*


But as I said before, the Turing Test is not perfect, however it's all 
we've got. If something passes the test then it's intelligent and 
conscious. If it fails the test then it may or may not be intelligent 
and/or conscious


See what's on my new list at Extropolis 


asb





Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker



On 7/11/2024 12:53 PM, John Clark wrote:



On Thu, Jul 11, 2024 at 2:43 PM Jason Resch  wrote:

/> Turing completeness is not required for consciousness. The
human brain (given its limited and faulty memory) wouldn't even
meet the definition of being Turing complete./


Sometimes on some problems the human brain could be considered as 
being Turing Complete, otherwise we would never be able to do anything 
that was intelligent.

*??? How on Earth do you reach that conclusion?

Brent*



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 6:00 PM John Clark  wrote:

> On Thu, Jul 11, 2024 at 5:33 PM Jason Resch  wrote:
>
> *> Consider a deterministic intelligent machine having no innate
>> philosophical knowledge or philosophical discussions while learning. Also,
>> the machine does not contain informational models of other creatures (that
>> may implicitly or explicitly contain knowledge about these creatures’
>> consciousness). If, under these conditions, the machine produces phenomenal
>> judgments on all problematic properties of consciousness, then, according
>> to [the postulates], materialism is true and the machine is conscious.*
>>
>
> Who judges if the "phenomenal judgments" of the machine are correct or
> incorrect? Even humans can't agree among themselves about most
> philosophical matters, certainly that's true of members of this list.
>

They don't have to be correct, as far as I know. The machine just has to
make phenomenal judgments (without prior training on such topics). If a
machine said "I think, therefore I am", or proposed epiphenomenalism,
without having been trained on any philosophical topics, those would
constitute phenomenal judgments that suggest the machine possesses
consciousness.


And the fact is many, perhaps most, human beings don't think about deep
> philosophical questions at all, they find it all to be a big bore, so does
> that mean they're philosophical zombies?
>

Failing the test doesn't imply a lack of consciousness. But passing the
test implies the presence of consciousness.

And just because a machine can pontificate about consciousness, what
> reason, other than Argonov's authority, would I have for believing the
> machine was conscious?
>

That there must be a source of information to permit the making of
phenomenal judgments; and since the machine was not trained on them, what
else would you propose that source could be, other than consciousness?

Jason


> I'm going to take a break from the list right now because I wanna watch
> Joe Biden's new press conference  ah... I think I think I wanna watch
> it it
>
>  See what's on my new list at  Extropolis
> 
> bfq
>
>


Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker



On 7/11/2024 12:49 PM, John Clark wrote:
On Thu, Jul 11, 2024 at 2:39 PM Brent Meeker  
wrote:


> /My point was that consciousness doesn't require Turing completeness/.


Maybe, you and I will never know for sure, but intelligence certainly 
does require Turing Completeness.

I know because I'm conscious and brains aren't Turing complete.

Brent


John K Clark    See what's on my new list at Extropolis 


/ctc/






Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Thu, Jul 11, 2024 at 5:33 PM Jason Resch  wrote:

*> Consider a deterministic intelligent machine having no innate
> philosophical knowledge or philosophical discussions while learning. Also,
> the machine does not contain informational models of other creatures (that
> may implicitly or explicitly contain knowledge about these creatures’
> consciousness). If, under these conditions, the machine produces phenomenal
> judgments on all problematic properties of consciousness, then, according
> to [the postulates], materialism is true and the machine is conscious.*
>

Who judges if the "phenomenal judgments" of the machine are correct or
incorrect? Even humans can't agree among themselves about most
philosophical matters, certainly that's true of members of this list. And
the fact is many, perhaps most, human beings don't think about deep
philosophical questions at all, they find it all to be a big bore, so does
that mean they're philosophical zombies? And just because a machine can
pontificate about consciousness, what reason, other than Argonov's
authority, would I have for believing the machine was conscious?

I'm going to take a break from the list right now because I wanna watch Joe
Biden's new press conference  ah... I think I think I wanna watch it it

 See what's on my new list at  Extropolis

bfq


>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 5:28 PM John Clark  wrote:

> On Thu, Jul 11, 2024 at 5:01 PM Jason Resch  wrote:
>
>
>> *> There are easier and harder tests than the Turing test. I don't know
>> why you say it's the only test we have. Also: would passing the Argonov
>> test (which I described in my document on whether zombies are possible) not
>> be a sufficient proof of consciousness? Note that the Argonov test is much
>> harder to pass than the Turing test.*
>>
>
> I have a clear understanding of exactly what the Turing Test is, but I am
> unable to get a clear understanding of exactly, or even approximately, what
> the Argonov test is. I know it has something to do with "phenomenal
> judgments" but I don't know what that means and I don't know what I need to
> do to pass the Argonov Test, so I guess I'd fail it. And because of my
> failure to understand the test it seems that I've been wrong all my life
> about being conscious and really I am a philosophical zombie.
>


“Phenomenal judgments” are the words,
discussions, and texts about consciousness,
subjective phenomena, and the mind-body
problem. […]

In order to produce detailed phenomenal
judgments about problematic properties of
consciousness, an intelligent system must have a source of knowledge about
the properties of consciousness. [...]

Consider a deterministic intelligent machine
having no innate philosophical knowledge or
philosophical discussions while learning. Also, the machine does not
contain informational models of other creatures (that may implicitly or
explicitly contain knowledge about these creatures’ consciousness). If,
under these conditions, the machine produces phenomenal judgments on all
problematic properties of consciousness, then, according to [the
postulates], materialism is true and the machine is conscious.
— Victor Argonov in “Experimental Methods for Unraveling the Mind-Body
Problem: The Phenomenal Judgment Approach” (2014)
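
Read operationally, the quoted protocol is a one-way test: its isolation preconditions must hold, and only a pass is informative. A schematic sketch of that logic (the `Machine` record and its field names are illustrative placeholders, not from Argonov's paper):

```python
from dataclasses import dataclass

@dataclass
class Machine:
    deterministic: bool
    trained_on_philosophy: bool          # innate/learned philosophical knowledge
    models_other_minds: bool             # informational models of other creatures
    produces_phenomenal_judgments: bool  # e.g. spontaneously asserts "I think, therefore I am"

def argonov_verdict(m: Machine) -> str:
    # The test only applies under the quoted isolation conditions.
    if not m.deterministic or m.trained_on_philosophy or m.models_other_minds:
        return "test not applicable"
    # Passing implies consciousness (per the quoted postulates);
    # failing implies nothing either way.
    if m.produces_phenomenal_judgments:
        return "conscious (and materialism true)"
    return "inconclusive"
```

Note the asymmetry the sketch encodes: a failure returns "inconclusive" rather than "not conscious".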



Jason



>
>   See what's on my new list at  Extropolis
> 
> pzx
>


Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Thu, Jul 11, 2024 at 5:01 PM Jason Resch  wrote:


> *> There are easier and harder tests than the Turing test. I don't know
> why you say it's the only test we have. Also: would passing the Argonov
> test (which I described in my document on whether zombies are possible) not
> be a sufficient proof of consciousness? Note that the Argonov test is much
> harder to pass than the Turing test.*
>

I have a clear understanding of exactly what the Turing Test is, but I am
unable to get a clear understanding of exactly, or even approximately, what
the Argonov test is. I know it has something to do with "phenomenal
judgments" but I don't know what that means and I don't know what I need to
do to pass the Argonov Test, so I guess I'd fail it. And because of my
failure to understand the test it seems that I've been wrong all my life
about being conscious and really I am a philosophical zombie.

  See what's on my new list at  Extropolis

pzx



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 4:57 PM John Clark  wrote:

> On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker 
> wrote:
>
> *> A rock, along with many other things, can't pass a first grade
>> arithmetic test either; but that doesn't show that anything that can't pass
>> a first grade arithmetic test is unintelligent or unconscious, as for
>> example an octopus or a 3yr old child.*
>>
>
> And because of their failure to pass a first year arithmetic test we would
> say that a rock, an octopus and a three year old child are not behaving
> very intelligently. But as I said before, the Turing Test is not perfect,
> however it's all we've got. If something passes the test then it's
> intelligent and conscious. If it fails the test then it may or may not be
> intelligent and/or conscious
>

There are easier and harder tests than the Turing test. I don't know why
you say it's the only test we have.

Also: would passing the Argonov test (which I described in my document on
whether zombies are possible) not be a sufficient proof of consciousness?
Note that the Argonov test is much harder to pass than the Turing test.

Jason



> See what's on my new list at  Extropolis
> 
> asb
>
>
>


Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker



On 7/11/2024 5:01 AM, John Clark wrote:
On Wed, Jul 10, 2024 at 11:53 PM Brent Meeker  
wrote:


>> "My dictionary says the definition of "*prerequisite*" is "*a
thing that is required as a prior condition for something else
to happen or exist*". And it says the definition of "*cause*"
is "*a person or thing that gives rise to an action,
phenomenon, or condition*". So cause and prerequisite are
synonyms."


/> A more careful reading of the definitions would tell you that a
prerequisite does not give rise to an action; but its
absence precludes the action./


OK. But if it's a brute fact that consciousness is the way data feels 
when it is being processed,
I disagree; and elsewhere you have agreed that many data processing 
machines are not conscious because they take no actions based on the 
data being true.   Data has to be processed in ways leading to 
intelligent action or decisions.


and if intelligent action requires data processing, then if by some 
magic (it has to be magic because neither science nor mathematics can 
help you with this) you knew that system X was not conscious, then you 
could correctly predict that its actions would not be intelligent.


Consciousness is a high-level description of the state of a system, 
that's why when I asked somebody "/why did you do that?/" sometimes 
"because I wanted to" is an adequate explanation. But sometimes I want 
a lower level more detailed explanation such as "/I frowned when I 
took a bite of that food because it was much too salty/". A 
neurophysiologist might want an even more detailed explanation 
involving neurons and synapses.  None of these explanations are wrong 
and all of them are consistent with each other. It would be correct to 
say that the reason a balloon gets bigger when I blow into it is 
because the pressure inside the balloon increases, but it would also 
be correct to say that the reason a balloon gets bigger when I blow 
into it is because there are more air molecules inside the balloon 
randomly hitting the inner surface.
Yes, I agree that explanations in terms of consciousness and 
explanations in terms of rationale may just be differences in level of 
description.  But perhaps I have a more expansive idea of consciousness 
than you do.  I think of the paramecium that swims to the left because 
the water is too salty on the right as being conscious.


Brent



John K Clark    See what's on my new list at Extropolis 

isb


Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker  wrote:

*> A rock, along with many other things, can't pass a first grade
> arithmetic test either; but that doesn't show that anything that can't pass
> a first grade arithmetic test is unintelligent or unconscious, as for
> example an octopus or a 3yr old child.*
>

And because of their failure to pass a first year arithmetic test we would
say that a rock, an octopus and a three year old child are not behaving
very intelligently. But as I said before, the Turing Test is not perfect,
however it's all we've got. If something passes the test then it's
intelligent and conscious. If it fails the test then it may or may not be
intelligent and/or conscious

See what's on my new list at  Extropolis

asb



>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker



On 7/11/2024 4:34 AM, John Clark wrote:
On Wed, Jul 10, 2024 at 11:46 PM Brent Meeker  
wrote:


>> Consciousness is the inevitable product of intelligence, it is
not the cause of intelligence.

/ I think that's wrong. /


You say I'm wrong and yet I 99% agree with what you say in the very 
next sentence, it would be 100% except that I'm not quite sure whether 
the pronoun "it" refers to intelligence or consciousness.

Consciousness.


/> It is the cause of some instances of intelligence. Imagining
yourself in various scenario's and running them forward in
imagination is very much the cause of on kind of intelligence,
i.e. foresight./


Yes, foresight is the kind of intelligence that examines possible 
future scenarios and takes actions that increase the likelihood that 
the scenario that actually occurs is one that is desirable from its 
point of view. That's how a Chess program became a grandmaster in the 
1990's. AlphaZero did the same thing when it beat the world champion 
human player at the game of GO, except that it didn't think (a.k.a. 
imagine) all possible scenarios but only moves that an intelligent 
opponent would likely make, including an opponent that was as 
intelligent as itself. That's how AlphaZero went from knowing nothing 
about GO, except for the few simple rules of the game, to being able 
to play GO at a superhuman level in just a few hours.


>>it's a brute fact that consciousness is the way data feels
when it is being processed.

/> That's false. /


Once more you say I'm wrong, but this time I agree 100%, not 99%, with 
what you say in the very next sentence.


/> Lots of data is processed every day by machines that are not
conscious and we "see" they are not conscious because they take no
intelligent action based on the data being true. /


So like me you believe the Turing Test is not just a test for 
intelligence, it is also a test for consciousness. In fact, although 
imperfect, it is the only test for consciousness we have, or will ever 
have. So if a computer is behaving as intelligently as a human then it 
must be as conscious as a human. Probably.


John K Clark    See what's on my new list at Extropolis 


uu6



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker



On 7/11/2024 3:47 AM, John Clark wrote:
On Thu, Jul 11, 2024 at 2:08 AM Brent Meeker  
wrote:



>> That [/lack of a multiply operation/] would be no problem as
long as the AI still had the addition operation, just do
repeated additions, although it would slow things down. But
you could start removing more and more operations until you
got all the way down to First Order Logic, and then an AI
could actually prove its own consistency. Kurt Gödel showed
that a few years before he came up with his famous
incompleteness theorem, in what we now call Gödel's
Completeness Theorem. His later Incompleteness Theorem only
applies to logical systems powerful enough to do arithmetic,
and you can't do arithmetic with nothing but first order
logic. The trouble is you couldn't really say an Artificial
Intelligence was intelligent if it couldn't even pass a first
grade arithmetic test. 



/> There are many levels of intelligence.  An octopus can't pass a
first grade arithmetic test but it can escape thru a difficult maze/


Claude Shannon, the father of information theory, made a computerized 
mouse way back in 1951 that was able to escape a difficult maze. It 
was a big advance at the time; if the term had been invented, some 
would've called it Artificial Intelligence. However these days nobody 
would call something like that AI; one of the many reasons why is that 
it couldn't pass a first grade arithmetic test.
A rock, along with many other things, can't pass a first grade 
arithmetic test either; but that doesn't show that anything that can't 
pass a first grade arithmetic test is unintelligent or unconscious, as 
for example an octopus or a 3yr old child.


Brent


See what's on my new list at Extropolis 


mey
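
The point quoted in this message about recovering multiplication from repeated addition can be sketched as a toy illustration (the linear number of additions is exactly the "slow things down" trade-off mentioned):

```python
def mul(a: int, b: int) -> int:
    """Multiply using only addition: b repeated additions of a.

    Runtime is linear in b, versus the constant-time multiply
    operation it replaces -- correct, just slower.
    """
    assert b >= 0, "this illustration covers non-negative b only"
    total = 0
    for _ in range(b):
        total += a  # one addition per step
    return total

print(mul(7, 8))  # 56
```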



Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Thu, Jul 11, 2024 at 2:43 PM Jason Resch  wrote:

*> Turing completeness is not required for consciousness. The human brain
> (given its limited and faulty memory) wouldn't even meet the definition of
> being Turing complete.*
>

Sometimes on some problems the human brain could be considered as being
Turing Complete, otherwise we would never be able to do anything that was
intelligent. And on rare occasions the human brain has been known to do
smart things. But sometimes we screw up and do dumb things. You could say
pretty much the same thing about a computer,  an idealized Turing Machine
could calculate the two-argument Ackermann function for any input numbers
and so Ackermann is computable, but the output number grows so fast
(super-exponentially) that when the input numbers get larger than about 5
the output number becomes so huge that no real computer, even if it was the
size of the observable universe, could compute the output number.

And the Busy Beaver Function grows even faster than Ackermann, in fact it
grows faster than ANY computable function, that's why Busy Beaver is
uncomputable. Even in theory an idealized perfect Turing Machine couldn't
calculate the Busy Beaver Numbers except for the first 5 and maybe 6, much
less a real computer.

   John K ClarkSee what's on my new list at  Extropolis

bac



Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Thu, Jul 11, 2024 at 2:39 PM Brent Meeker  wrote:

>   *My point was that consciousness doesn't require Turing completeness*.
>

Maybe, you and I will never know for sure, but intelligence certainly does
require Turing Completeness.

John K ClarkSee what's on my new list at  Extropolis

*ctc*



Re: Are Philosophical Zombies possible?

2024-07-11 Thread 'Cosmin Visan' via Everything List
AI is just a fancy word for lonely boys to give meaning to their empty 
life. lol

On Thursday 11 July 2024 at 13:48:19 UTC+3 John Clark wrote:

> On Thu, Jul 11, 2024 at 2:08 AM Brent Meeker  wrote:
>
> >> That [*lack of a multiply operation*] would be no problem as long as 
>>> the AI still had the addition operation, just do repeated additions, 
>>> although it would slow things down. But you could start removing more and 
>>> more operations until you got all the way down to First Order Logic, and 
>>> then an AI could actually prove its own consistency. Kurt Gödel showed that, 
>>> a few years before he came up with his famous Incompleteness Theorem, in 
>>> what we now call Gödel's Completeness Theorem. His later Incompleteness 
>>> Theorem only applies to logical systems powerful enough to do arithmetic, 
>>> and you can't do arithmetic with nothing but first order logic. The trouble 
>>> is you couldn't really say an Artificial Intelligence was intelligent if it 
>>> couldn't even pass a first grade arithmetic test.  
>>
>>
>> *> There are many levels of intelligence.  An octopus can't pass a first 
>> grade arithmetic test but it can escape thru a difficult maze*
>>
>
> Claude Shannon, the father of information theory, made a computerized 
> mouse way back in 1951 that was able to escape a difficult maze. It was a 
> big advance at the time, if the term had been invented, some would've 
> called it Artificial Intelligence. However these days nobody would call 
> something like that AI; one of the many reasons why is that it couldn't 
> pass a first grade arithmetic test.
>
>  See what's on my new list at  Extropolis 
> 
>
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread 'Cosmin Visan' via Everything List
Brain doesn't exist. "Brain" is just an idea in consciousness.

On Thursday 11 July 2024 at 22:04:14 UTC+3 Terren Suydam wrote:

> Only in the most idealized sense of Turing completeness would we argue 
> whether the brain is Turing complete. Neural networks are Turing complete.
>
> If we're interested in whether consciousness requires Turing completeness, 
> it seems silly to use the brain as a *counter example* of Turing 
> completeness only because it happens to be a finite, physical object with 
> noise/errors in the system. For all practical purposes, whatever properties 
> one would confer to a Turing complete system, the brain has them.
>
> On Thu, Jul 11, 2024 at 2:43 PM Jason Resch  wrote:
>
>>
>> I agree Turing completeness is not required for consciousness. The human 
>> brain (given its limited and faulty memory) wouldn't even meet the 
>> definition of being Turing complete.
>>
>> Jason 
>>
>>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Terren Suydam
Only in the most idealized sense of Turing completeness would we argue
whether the brain is Turing complete. Neural networks are Turing complete.

If we're interested in whether consciousness requires Turing completeness,
it seems silly to use the brain as a *counter example* of Turing
completeness only because it happens to be a finite, physical object with
noise/errors in the system. For all practical purposes, whatever properties
one would confer to a Turing complete system, the brain has them.

On Thu, Jul 11, 2024 at 2:43 PM Jason Resch  wrote:

>
> I agree Turing completeness is not required for consciousness. The human
> brain (given its limited and faulty memory) wouldn't even meet the
> definition of being Turing complete.
>
> Jason
>
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Jason Resch
On Thu, Jul 11, 2024, 2:39 PM Brent Meeker  wrote:

> I stand corrected.  But that just means I chose a bad example.  My point
> was that consciousness doesn't require Turing completeness.  You agreed
> with me about the paramecium.
>


I agree Turing completeness is not required for consciousness. The human
brain (given its limited and faulty memory) wouldn't even meet the
definition of being Turing complete.

Jason


> Brent
>
> On 7/10/2024 7:24 AM, Jason Resch wrote:
>
> There was a study done in the 1950s on probabilistic Turing machines (
> https://www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en
> ) that found what they could compute is no different than what a
> deterministic Turing machine can compute.
>
> "The computing power of Turing machines
> provided with a random number generator was
> studied in the classic paper [Computability by
> Probabilistic Machines]. It turned out that such
> machines could compute only functions that are already computable by
> ordinary Turing machines."
> — Martin Davis in “The Myth of Hypercomputation” (2004)
>
> To see why, consider that programs can similarly split themselves and run
> in parallel
> with each of the possible values. To each instance of the split program,
> the value it is provided will seem random. But importantly: what the
> program can compute with this value
> is the same as what it would compute had the value come from a "truly
> random" quantum measurement.
>
> It would make a difference if it were a quantum computer or not.
>>
>
> For us observing the program run from the outside, it would make a
> difference. But the program itself has no way of distinguishing if it is
> receiving a value that came from a real measurement of a quantum system, or
> if it was provided the result of a simulated quantum system.
>
>
> And going the other way, what if it didn't have a multiply operation.
>> We're so accustomed to the standard Turing-complete von Neumann computer we
>> take it for granted.
>>
>
> A program will crash if it's run on hardware that it's not compatible
> with. This is why you can't take a .exe from windows and run it on a Mac.
> But if you run a windows emulator on the Mac you can then run the .exe
> within it.
>
> The program then has no idea it is running on a Mac; it has every reason to
> believe it is running on a real Windows computer, but it is fooled by the
> emulation layer (this emulation layer is what I speak of when I refer to
> the "Turing firewall"). That such layers can be created is a direct
> consequence of the fact that all Turing machines are capable of emulating
> each other.
>
> Jason
>
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Brent Meeker
I stand corrected.  But that just means I chose a bad example.  My point 
was that consciousness doesn't require Turing completeness. You agreed 
with me about the paramecium.


Brent

On 7/10/2024 7:24 AM, Jason Resch wrote:
There was a study done in the 1950s on probabilistic Turing machines ( 
https://www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en 
) that found what they could compute is no different than what a 
deterministic Turing machine can compute.


"The computing power of Turing machines
provided with a random number generator was
studied in the classic paper [Computability by
Probabilistic Machines]. It turned out that such
machines could compute only functions that are already computable by 
ordinary Turing machines."

— Martin Davis in “The Myth of Hypercomputation” (2004)

To see why, consider that programs can similarly split themselves and 
run in parallel
with each of the possible values. To each instance of the split 
program, the value it is provided will seem random. But importantly: 
what the program can compute with this value
is the same as what it would compute had the value come from a "truly 
random" quantum measurement.


It would make a difference if it were a quantum computer or not.


For us observing the program run from the outside, it would make a 
difference. But the program itself has no way of distinguishing if it is 
receiving a value that came from a real measurement of a quantum 
system, or if it was provided the result of a simulated quantum system.



And going the other way, what if it didn't have a multiply
    operation.  We're so accustomed to the standard Turing-complete von
Neumann computer we take it for granted.


A program will crash if it's run on hardware that it's not 
compatible with. This is why you can't take a .exe from windows and 
run it on a Mac. But if you run a windows emulator on the Mac you can 
then run the .exe within it.


The program then has no idea it is running on a Mac; it has every 
reason to believe it is running on a real Windows computer, but it is 
fooled by the emulation layer (this emulation layer is what I speak of 
when I refer to the "Turing firewall"). That such layers can be 
created is a direct consequence of the fact that all Turing machines 
are capable of emulating each other.
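The "Turing firewall" point above can be shown with a deliberately tiny sketch (the interface and names are invented for illustration): the program only sees the interface it is handed, so identical answers make the two layers indistinguishable to it.

```python
def program(syscall):
    # The "program" can only interact through the interface it is
    # handed; it cannot inspect what implements that interface.
    return "running on " + syscall("os_name")

def real_windows(name):
    # The real machine's answer to the program's query.
    return {"os_name": "Windows"}[name]

def mac_emulating_windows(name):
    # A hypothetical emulation layer on a Mac giving identical
    # answers: the "Turing firewall" described above.
    return {"os_name": "Windows"}[name]

print(program(real_windows) == program(mac_emulating_windows))  # True
```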


Jason





Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Wed, Jul 10, 2024 at 11:53 PM Brent Meeker  wrote:

> >> "My dictionary says the definition of "*prerequisite*"  is  "*a thing
>> that is required as a prior condition for something else to happen or 
>> exist*". And
>> it says the definition of "*cause*" is "*a person or thing that gives
>> rise to an action, phenomenon, or condition*". So cause and prerequisite
>> are synonyms."
>
>
> *> A more careful reading of the definitions would tell you that a
> prerequisite does not give rise to an action; but its absence
> precludes the action.*
>

OK. But if it's a brute fact that consciousness is the way data feels when
it is being processed, and if intelligent action requires data processing,
then if by some magic (it has to be magic because neither science nor
mathematics can help you with this) you knew that system X was not
conscious, then you could correctly predict that its actions would not be
intelligent.

Consciousness is a high-level description of the state of a system, that's
why when I asked somebody "*why did you do that?*" sometimes "because I
wanted to" is an adequate explanation. But sometimes I want a lower level
more detailed explanation such as "*I frowned when I took a bite of that
food because it was much too salty*". A neurophysiologist might want an
even more detailed explanation involving neurons and synapses.  None of
these explanations are wrong and all of them are consistent with each
other. It would be correct to say that the reason a balloon gets bigger
when I blow into it is because the pressure inside the balloon increases,
but it would also be correct to say that the reason a balloon gets bigger
when I blow into it is because there are more air molecules inside the
balloon randomly hitting the inner surface.

John K Clark    See what's on my new list at  Extropolis



Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Wed, Jul 10, 2024 at 11:46 PM Brent Meeker  wrote:

>> Consciousness is the inevitable product of intelligence, it is not the
>> cause of intelligence.
>
>

* I think that's wrong. *
>

You say I'm wrong and yet I 99% agree with what you say in the very next
sentence, it would be 100% except that I'm not quite sure whether the
pronoun "it" refers to intelligence or consciousness.


> * > It is the cause of some instances of intelligence.  Imagining yourself
> in various scenarios and running them forward in imagination is very much
> the cause of one kind of intelligence, i.e. foresight.*
>

Yes, foresight is the kind of intelligence that examines possible future
scenarios and takes actions that increase the likelihood that the scenario
that actually occurs is one that is desirable from its point of view.
That's how a Chess program became a grandmaster in the 1990's. AlphaZero
did the same thing when it beat the world champion human player at the game
of GO, except that it didn't think (a.k.a. imagine) all possible scenarios
but only moves that an intelligent opponent would likely make, including an
opponent that was as intelligent as itself. That's how AlphaZero went from
knowing nothing about GO, except for the few simple rules of the game, to
being able to play GO at a superhuman level in just a few hours.

>> it's a brute fact that consciousness is the way data feels when it is
>> being processed.
>
>

* > That's false. *
>

Once more you say I'm wrong, but this time I agree 100% not 99% with what
you say in the very next sentence.

*> Lots of data is processed every day by machines that are not conscious
> and we "see" they are not conscious because they take no intelligent action
> based on the data being true. *
>

So like me you believe the Turing Test is not just a test for intelligence,
it is also a test for consciousness. In fact, although imperfect, it is the
only test for consciousness we have, or will ever have. So if a computer is
behaving as intelligently as a human then it must be as conscious as a
human. Probably.

 John K Clark    See what's on my new list at  Extropolis




Re: Are Philosophical Zombies possible?

2024-07-11 Thread John Clark
On Thu, Jul 11, 2024 at 2:08 AM Brent Meeker  wrote:

>> That [*lack of a multiply operation*] would be no problem as long as the
>> AI still had the addition operation, just do repeated additions, although
>> it would slow things down. But you could start removing more and more
>> operations until you got all the way down to First Order Logic, and then an
>> AI could actually prove its own consistency. Kurt Gödel showed that, a few
>> years before he came up with his famous Incompleteness Theorem, in what we
>> now call Gödel's Completeness Theorem. His later Incompleteness Theorem
>> only applies to logical systems powerful enough to do arithmetic, and you
>> can't do arithmetic with nothing but first order logic. The trouble is you
>> couldn't really say an Artificial Intelligence was intelligent if it
>> couldn't even pass a first grade arithmetic test.
>
>
> *> There are many levels of intelligence.  An octopus can't pass a first
> grade arithmetic test but it can escape thru a difficult maze*
>

Claude Shannon, the father of information theory, made a computerized mouse
way back in 1951 that was able to escape a difficult maze. It was a big
advance at the time, if the term had been invented, some would've called it
Artificial Intelligence. However these days nobody would call something
like that AI; one of the many reasons why is that it couldn't pass a first
grade arithmetic test.
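The repeated-addition workaround quoted at the top of this message can be sketched in a few lines (a minimal illustration, assuming non-negative integers):

```python
def multiply(a, b):
    # Multiplication on a machine with no multiply instruction:
    # add `a` to itself `b` times. Functionally equivalent for
    # non-negative integers, just slower (b additions).
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply(7, 6))   # 42
print(multiply(12, 0))  # 0
```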

 See what's on my new list at  Extropolis




Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker



On 7/10/2024 6:07 AM, John Clark wrote:


On Tue, Jul 9, 2024 at 7:22 PM Brent Meeker  
wrote:




/> And going the other way, what if it didn't have a multiply
operation. /


That would be no problem as long as the AI still had the addition 
operation, just do repeated additions, although it would slow things 
down. But you could start removing more and more operations until you 
got all the way down to First Order Logic, and then an AI could 
actually prove its own consistency. Kurt Gödel showed that, a few years 
before he came up with his famous Incompleteness Theorem, in what we 
now call Gödel's Completeness Theorem. His later Incompleteness 
Theorem only applies to logical systems powerful enough to do 
arithmetic, and you can't do arithmetic with nothing but first order 
logic. The trouble is you couldn't really say an Artificial 
Intelligence was intelligent if it couldn't even pass a first grade 
arithmetic test.


*There are many levels of intelligence.  An octopus can't pass a first 
grade arithmetic test but it can escape thru a difficult maze.


Brent*



Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker

Do you know how to avoid this boredom, or do I need to explain it to you?

Brent
"The man who lets himself be bored is even more contemptible than
the bore."
  --- Samuel Butler


On 7/10/2024 12:13 AM, 'Cosmin Visan' via Everything List wrote:
@Brent. Playing with words doesn't make you smart. Quite the opposite. 
Maaan... you people are so boring. You have the same memes that you 
keep repeating over and over and over again. Zero presence of 
intelligent thought. Just memes.






Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker



On 7/9/2024 1:56 PM, John Clark wrote:

On Tue, Jul 9, 2024 at 4:29 PM Brent Meeker  wrote:

//
/>So you wrote a whole paragraph but it's unclear whether you are
agreeing with me that consciousness is NOT just some mysterious
byproduct of intelligence,/


Consciousness is not mysterious, unless you think a brute fact is 
mysterious, but there are only two ways an iterative sequence of "how" 
or "why" questions can go, it can either terminate with a brute fact 
or it goes on forever. I think an iterated sequence of questions going 
on forever is far more mysterious than a brute fact. And I think it's 
a brute fact that consciousness is the way data feels when it is being 
processed.


/>but is an essential source of intelligent actions because it
provides plans and evaluates planned actions and scenarios.
/


You've got cause and effect mixed up. Consciousness is not a source of 
intelligent action; consciousness is an inevitable consequence of 
intelligence.


Most intelligent action is preceded by conscious planning.

Brent


John K Clark    See what's on my new list at Extropolis 







Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker



On 7/9/2024 7:16 AM, Stathis Papaioannou wrote:



Stathis Papaioannou


On Tue, 9 Jul 2024 at 22:15, Jason Resch  wrote:



On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou
 wrote:



On Tue, 9 Jul 2024 at 04:23, Jason Resch
 wrote:



On Sun, Jul 7, 2024 at 3:14 PM John Clark
 wrote:

On Sun, Jul 7, 2024 at 1:58 PM Jason Resch
 wrote:

/>>> // I think such foresight is a
necessary component of intelligence, not a
"byproduct"./


>>I agree, I can detect the existence of
foresight in others and so can natural
selection, and that's why we have it.  It aids
in getting our genes transferred into the next
generation. But I was talking about
consciousness not foresight, and regardless of
how important we personally think
consciousness is, from evolution's point of
view it's utterly useless, and yet we have it,
or at least I have it.


/> you don't seem to think zombies are logically
possible,/


Zombies are possible, it's philosophical zombies,
a.k.a. smart zombies, that are impossible because it's
a brute fact that consciousness is the way data
behaves when it is being processed intelligently, or
at least that's what I think. Unless you believe that
all iterated sequences of "why" or "how" questions go
on forever then you must believe that brute facts
exist; and I can't think of a better candidate for one
than consciousness.

/> so then epiphenomenalism is false/


According to the InternetEncyclopedia of Philosophy
"/Epiphenomenalism is a position in the philosophy of
mind according to which mental states or events are
caused by physical states or events in the brain but
do not themselves cause anything/".If that is the
definition then I believe in Epiphenomenalism.


If you believe mental states do not cause anything, then
you believe philosophical zombies are logically possible
(since we could remove consciousness without altering
behavior).

Mental states could be necessarily tied to physical states
without having any separate causal efficacy, and zombies would
not be logically possible. Software is necessarily tied to
hardware activity: if a computer runs a particular program, it
is not optional that the program is implemented. However, the
software does not itself have causal efficacy, causing current
to flow in wires and semiconductors and so on: there is always
a sufficient explanation for such activity in purely physical
terms.


I don't disagree that there is sufficient explanation in all the
particle movements all following physical laws.

But then consider the question, how do we decide what level is in
control? You make the case that we should consider the quantum
field level in control because everything is ultimately reducible
to it.

But I don't think that's the best metric for deciding whether it's
in control or not. Do the molecules in the brain tell neurons what
do, or do neurons tell molecules what to do (e.g. when they fire)?
Or is it some mutually conditioned relationship?

Do neurons fire on their own and tell brains what to do, or do
neurons only fire when other neurons of the whole brain stimulate
them appropriately so they have to fire? Or is it again, another
case of mutualism?

When two people are discussing ideas, are the ideas determining
how each brain thinks and responds, or are the brains determining
the ideas by virtue of generating the words through which they are
expressed?

Through in each of these cases, we can always drop a layer and
explain all the events at that layer, that is not (in my view)
enough of a reason to argue that the events at that layer are "in
charge." Control structures, such as whole brain regions, or
complex computer programs, can involve and be influenced by the
actions of billions of separate events and separate parts, and as
such, they transcend the behaviors of any single physical particle
or physical law.

Consider: whether or not a program halts might only be
determinable by some rules and proof in a mathematical system, and
in this case no physical law will reveal the answer to that
physical system's (the computer's)

Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker



On 7/9/2024 5:17 AM, John Clark wrote:



On Tue, Jul 9, 2024 at 7:54 AM Jason Resch  wrote:

>>Consciousness is the inevitable product of intelligence, it is
not the cause of intelligence.



/> //I didn't say it was the cause, I said it is a prerequisite./


My dictionary says the definition of "*prerequisite*" is "*a thing 
that is required as a prior condition for something else to happen or 
exist*". And it says the definition of "*cause*" is "*a person or 
thing that gives rise to an action, phenomenon, or condition*". So 
cause and prerequisite are synonyms.


*A more careful reading of the definitions would tell you that a 
prerequisite does not give rise to an action; but its absence 
precludes the action.


Brent*


/> You conveniently (for you but not for me) ignored and deleted
my explanation in your reply./


Somehow I missed that "detailed explanation" you refer to.

John K Clark







Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker



On 7/9/2024 4:48 AM, John Clark wrote:

On Mon, Jul 8, 2024 at 4:20 PM Jason Resch  wrote:

/> If consciousness is necessary for intelligence/ [...]

Consciousness is the inevitable product of intelligence, it is not the 
cause of intelligence.
I think that's wrong.  It is the cause of some instances of 
intelligence.  Imagining yourself in various scenarios and running them 
forward in imagination is very much the cause of one kind of 
intelligence, i.e. foresight.


And as I cannot emphasize enough, natural selection can't select for 
something it can't see and it can't see consciousness, but natural 
selection CAN see intelligent actions.
And intelligent actions (of a certain kind, often speech) follow from 
conscious thought and so evolution CAN see conscious thought.  Remember 
you're using "see" metaphorically. You "see" actions as intelligent by 
inference, often by modeling in consciousness what you would do in the 
other's situation.  So you can "see" conscious thoughts by extension of 
the same kind of inference.


And you know for a fact that natural selection has managed to produce 
at least one conscious being and probably many billions of them.
Don't you understand how those two facts are telling you something 
that is philosophically important?


> /If on the other hand, consciousness is just a useless byproduct,
then it could (logically if not nomologically) be eliminated
without affecting intelligence./


That would not be possible if it's a brute fact that consciousness is 
the way data feels when it is being processed.


That's false.  Lots of data is processed every day by machines that are 
not conscious and we "see" they are not conscious because they take no 
intelligent action based on the data being true.


Brent


John K Clark





Re: Are Philosophical Zombies possible?

2024-07-10 Thread Brent Meeker



On 7/9/2024 1:32 AM, Stathis Papaioannou wrote:



On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:



On Sun, Jul 7, 2024 at 3:14 PM John Clark 
wrote:

On Sun, Jul 7, 2024 at 1:58 PM Jason Resch
 wrote:

/>>> // I think such foresight is a necessary
component of intelligence, not a "byproduct"./


>>I agree, I can detect the existence of foresight in
others and so can natural selection, and that's why we
have it.  It aids in getting our genes transferred
into the next generation. But I was talking about
consciousness not foresight, and regardless of how
important we personally think consciousness is, from
evolution's point of view it's utterly useless, and
yet we have it, or at least I have it.


/> you don't seem to think zombies are logically possible,/


Zombies are possible, it's philosophical zombies, a.k.a. smart
zombies, that are impossible because it's a brute fact that
consciousness is the way data behaves when it is being
processed intelligently, or at least that's what I think.
Unless you believe that all iterated sequences of "why" or
"how" questions go on forever then you must believe that brute
facts exist; and I can't think of a better candidate for one
than consciousness.

/> so then epiphenomenalism is false/


According to the Internet Encyclopedia of Philosophy,
"/Epiphenomenalism is a position in the philosophy of mind
according to which mental states or events are caused by
physical states or events in the brain but do not themselves
cause anything/". If that is the definition then I believe in
Epiphenomenalism.


If you believe mental states do not cause anything, then you
believe philosophical zombies are logically possible (since we
could remove consciousness without altering behavior).

Mental states could be necessarily tied to physical states without 
having any separate causal efficacy, and zombies would not be 
logically possible. Software is necessarily tied to hardware activity: 
if a computer runs a particular program, it is not optional that the 
program is implemented. However, the software does not itself have 
causal efficacy, causing current to flow in wires and semiconductors 
and so on: there is always a sufficient explanation for such activity 
in purely physical terms.


That's why I view it as a choice in level of description.  This seems to 
parallel the 19th-century discussions of life.  The view that life is an 
organization of molecules capable of metabolism and reproduction 
eventually prevailed over the need for an animating spirit.  But there 
remained a lot to be discovered about that organization.  It is a very 
specific organization.  Probably not the only possible kind of life, but 
certainly distinct from non-life.


Brent



I view mental states as high-level states operating in their own
regime of causality (much like a Java computer program). The java
computer program can run on any platform, regardless of the
particular physical nature of it. It has in a sense isolated
itself from the causality of the electrons and semiconductors, and
operates in its own realm of the causality of if statements, and
for loops. Consider this program, for example:

[image attachment: twin-prime-program2.png — a program that halts upon finding a twin prime]
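The attached program image did not survive extraction. As a stand-in, here is a minimal Python sketch (my reconstruction, not the original attachment) of the kind of twin-prime program the discussion describes — one whose halting is fixed by the logical relation of primality to its input:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def first_twin_prime_above(n: int) -> tuple:
    """Search upward from n; the loop terminates because of the logical
    relation of primality to the input, not because of any particular
    property of the hardware running it."""
    p = n + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

print(first_twin_prime_above(100))  # → (101, 103)
```

Whether this runs on silicon, under an emulator, or on paper, the point at which it halts is determined at the level of arithmetic — the sense of "higher-level causality" argued for below.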

What causes the program to terminate? Is it the inputs, and the
logical relation of primality, or is it the electrons flowing
through the CPU? I would argue that the higher-level causality,
regarding the logical relations of the inputs to the program logic
is just as important. It determines the physics of things like
when the program terminates. At this level, the microcircuitry is
relevant only to its support of the higher level causal
structures, but the program doesn't need to be aware of nor
consider those low-level things. It operates the same regardless.

I view consciousness as like that high-level control structure. It
operates within a causal realm where ideas and thoughts have
causal influence and power, and can reach down to the lower level
to do things like trigger nerve impulses.







Re: Are Philosophical Zombies possible?

2024-07-10 Thread 'Cosmin Visan' via Everything List
lol. You are so obsessed with the AI as if it is some supernatural entity. 
When in fact it is just a random computer program. Even worse: a computer 
program with no use whatsoever. Nobody actually uses AI for anything.

On Wednesday 10 July 2024 at 19:50:04 UTC+3 John Clark wrote:

> On Wed, Jul 10, 2024 at 11:24 AM 'Cosmin Visan' via Everything List <
> everyth...@googlegroups.com> wrote:
>
> > *Why do you even use the word AI ? Why can't use just use the words 
>> "computer program" ? Aaa... hype. Makes you look more intelligent than you 
>> actually are! Look at me: AI! AI! AI! Ooo so smrt! 
>> =*
>
>
> You sir are an ass.  
>  See what's on my new list at  Extropolis 
> 
>
>
>  
>  
>



Re: Are Philosophical Zombies possible?

2024-07-10 Thread John Clark
On Wed, Jul 10, 2024 at 11:24 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> *Why do you even use the word AI ? Why can't use just use the words
> "computer program" ? Aaa... hype. Makes you look more intelligent than you
> actually are! Look at me: AI! AI! AI! Ooo so smrt!
> =*


You sir are an ass.
 See what's on my new list at  Extropolis




Re: Are Philosophical Zombies possible?

2024-07-10 Thread 'Cosmin Visan' via Everything List
Why do you even use the word AI ? Why can't you just use the words 
"computer program" ? Aaa... hype. Makes you look more intelligent than you 
actually are! Look at me: AI! AI! AI! Ooo so smrt! 
=



Re: Are Philosophical Zombies possible?

2024-07-10 Thread Jason Resch
On Tue, Jul 9, 2024, 7:22 PM Brent Meeker  wrote:

>
>
> On 7/8/2024 1:20 PM, Jason Resch wrote:
>
>
>
> On Mon, Jul 8, 2024, 4:01 PM John Clark  wrote:
>
>> On Mon, Jul 8, 2024 at 2:23 PM Jason Resch  wrote:
>>
>> *> If you believe mental states do not cause anything, then you believe
>>> philosophical zombies are logically possible (since we could remove
>>> consciousness without altering behavior).*
>>>
>>
>> Not if consciousness is the inevitable byproduct of intelligence, and I'm
>> almost certain that it is.
>>
>
> If consciousness is necessary for intelligence, then it's not a byproduct.
> If on the other hand, consciousness is just a useless byproduct, then it
> could (logically if not nomologically) be eliminated without affecting
>>> intelligence.
>
> You seem to want it to be both necessary but also be something that makes
> no difference to anything (which makes it unnecessary).
>
> I would be most curious to hear your thoughts  regarding the section of my
> article on "Conscious behaviors" -- that is, behaviors which (seem to)
> require consciousness in order to do them.
>
>
>> *> I view mental states as high-level states operating in their own
>>> regime of causality (much like a Java computer program).*
>>>
>>
>> I have no problem with that, actually it's very similar to my view.
>>
>
> That's good to hear.
>
>
>>
>>> *> The java computer program can run on any platform, regardless of the
>>> particular physical nature of it.*
>>>
>>
>> Right. You could even say that "computer program" is not a noun, it is an
>> adjective, it is the way a computer will behave when the machine's  logical
>> states are organized in a certain way.  And "I" is the way atoms behave
>> when they are organized in a Johnkclarkian way, and "you" is the way atoms
>> behave when they are organized in a Jasonreschian way.
>>
>
> I'm not opposed to that framing.
>
>>
>> *> I view consciousness as like that high-level control structure. It
>>> operates within a causal realm where ideas and thoughts have causal
>>> influence and power, and can reach down to the lower level to do things
>>> like trigger nerve impulses.*
>>>
>>
>> Consciousness is a high-level description of brain states that can be
>> extremely useful, but that doesn't mean that lower level and much more
>> finely grained description of brain states involving nerve impulses, or
>> even more finely grained descriptions involving electrons and quarks are
>> wrong, it's just that such level of detail is unnecessary and impractical
>> for some purposes.
>>
>
> I would even say, that at a certain level of abstraction, they become
> irrelevant. It is the result of what I call "a Turing firewall", software
> has no ability to know its underlying hardware implementation, it is an
> inviolable separation of layers of abstraction, which makes the lower
> levels invisible to the layers above.
>
> That's roughly true, but not exactly.  If you think of intelligence
> implemented on a computer it would make a difference if it had a true
> random number generator (hardware) or not.
>

There was a study done in the 1950s on probabilistic Turing machines (
https://www.degruyter.com/document/doi/10.1515/9781400882618-010/html?lang=en
) that found what they could compute is no different than what a
deterministic Turing machine can compute.

"The computing power of Turing machines
provided with a random number generator was
studied in the classic paper [Computability by
Probabilistic Machines]. It turned out that such
machines could compute only functions that are already computable by
ordinary Turing machines."
— Martin Davis in “The Myth of Hypercomputation” (2004)

To see why, consider that programs can similarly split themselves and run in
parallel with each of the possible values. To each instance of the split
program, the value it is provided will seem random. But importantly: what the
program can compute with this value is the same as what it would compute had
the value come from a "truly random" quantum measurement.
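This branching argument can be made concrete. The sketch below (illustrative only; the helper names are my own) deterministically enumerates every coin-flip sequence a probabilistic machine could see, showing that the set of reachable outcomes is the same either way:

```python
from itertools import product

def run_with_coins(coins):
    """Toy computation whose control flow depends on three 'random' bits."""
    total = 0
    for i, c in enumerate(coins):
        total += (i + 1) if c else 0
    return total

# A probabilistic machine follows one branch, chosen by real or pseudo-randomness.
# A deterministic machine can instead enumerate every possible 3-bit coin sequence,
# "splitting" into all branches the probabilistic machine might have taken.
all_branch_results = {run_with_coins(bits) for bits in product([0, 1], repeat=3)}
print(sorted(all_branch_results))  # → [0, 1, 2, 3, 4, 5, 6]
```

Each individual branch sees its bits as random, yet the overall function computed is reachable by an ordinary deterministic enumeration — the point of the Davis quote above.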

It would make a difference if it were a quantum computer or not.
>

For us observing the program run from the outside, it would make a
difference. But the program itself has no way of distinguishing whether it is
receiving a value that came from a real measurement of a quantum system, or
whether it was provided the result of a simulated quantum system.


And going the other way, what if it didn't have a multiply operation.
> We're so accustomed to the standard Turing-complete von Neumann computer we
> take it for granted.
>

A program will crash if it's run on a hardware that it's not compatible
with. This is why you can't take a .exe from windows and run it on a Mac.
But if you run a windows emulator on the Mac you can then run the .exe
within it.

The program then has no idea it is running on a Mac; it has every reason to
believe it is running on a real Windows computer, but it is fooled by the
emulation layer (this emulation layer is what I speak of when I refer to
the "Turing firewall").
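A toy illustration of this emulation point (the names and structure are my own, not from the thread): the same "guest" program, run directly or through an extra interpretation layer, returns identical results, and nothing in the program's own logic can reveal which substrate ran it.

```python
def program(x: int) -> int:
    # The "guest" program: pure logic, with no access to its substrate.
    return x * x + 1

def run_direct(f, x):
    # Running "natively".
    return f(x)

def run_emulated(f, x):
    # One extra layer of indirection, standing in for an emulator or VM.
    def interpreter(g, arg):
        return g(arg)
    return interpreter(f, x)

# Same input, same output: no observable difference from the inside.
print(run_direct(program, 7), run_emulated(program, 7))  # → 50 50
```

The extra layer changes what happens physically (more steps executed underneath) without changing anything the guest program can detect — a minimal model of the "inviolable separation of layers of abstraction" described above.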

Re: Are Philosophical Zombies possible?

2024-07-10 Thread Jason Resch
On Tue, Jul 9, 2024, 6:59 PM Brent Meeker  wrote:

>
>
> On 7/8/2024 11:12 AM, Jason Resch wrote:
>
>
>
> On Mon, Jul 8, 2024 at 10:29 AM John Clark  wrote:
>
>>
>> On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker 
>> wrote:
>>
>> *>I thought it was obvious that foresight requires consciousness. It
>>> requires the ability of think in terms of future scenarios*
>>>
>>
>> The keyword in the above is "think". Foresight means using logic to
>> predict, given current starting conditions, what the future will likely be,
>> and determining how a change in the initial conditions will likely
>> affect the future.  And to do any of that requires intelligence. Both Large
>> Language Models and picture to video AI programs have demonstrated that
>> they have foresight; if you ask them what will happen if you cut the
>> string holding down a helium balloon they will tell you it will float away,
>> but if you add that the instant the string is cut an Olympic high jumper will
>> make a grab for the dangling string they will tell you what will likely
>> happen then too. So yes, foresight does imply consciousness because
>> foresight demands intelligence and consciousness is the inevitable
>> byproduct of intelligence.
>>
>
> Consciousness is a prerequisite of intelligence. One can be conscious
> without being intelligent, but one cannot be intelligent without being
> conscious.
> Someone with locked-in syndrome can do nothing, and can exhibit no
> intelligent behavior. They have no measurable intelligence. Yet they are
> conscious. You need to have perceptions (of the environment, or the current
> situation) in order to act intelligently. It is in having perceptions that
> consciousness appears. So consciousness is not a byproduct of, but an
> integral and necessary requirement for intelligent action.
>
> And not necessarily a high-level language-based consciousness.  Paramecia
> act intelligently based on perception of chemical gradients.  So one would
> say they are conscious of said gradients.
>


Yes, I agree.

Jason


> Brent
>
>
> Jason
>
>
>>
>>
>>> *> in which you are an actor*
>>>
>>
>> Obviously any intelligence will have to take its own actions in account
>> to determine what the likely future will be. After a LLM gives you an
>> answer to a question, based on that answer I'll bet an AI  will be able to
>> make a pretty good guess what your next question to it will be.
>>
>> John K ClarkSee what's on my new list at  Extropolis
>> 
>>
>>
>>
>>
>>
>>
>>>


Re: Are Philosophical Zombies possible?

2024-07-10 Thread 'Cosmin Visan' via Everything List
@Brent. Playing with words doesn't make you smart. Quite the opposite. 
Maaan... you people are so boring. You have the same memes that you keep 
repeating over and over and over again. Zero presence of intelligent 
thought. Just memes.



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Brent Meeker



On 7/8/2024 1:34 PM, Jason Resch wrote:



On Mon, Jul 8, 2024, 4:04 PM John Clark  wrote:


On Mon, Jul 8, 2024 at 2:12 PM Jason Resch 
wrote:

/>Consciousness is a prerequisite of intelligence./


I think you've got that backwards, intelligence is a prerequisite
of consciousness. And the possibility of intelligent ACTIONS is a
prerequisite for Darwinian natural selection to have evolved it.


I disagree, but will explain below.

/> One can be conscious without being intelligent,/


Sure.


I define intelligence by something capable of intelligent action.

Intelligent action requires non-random choice: choice informed by 
information from the environment.


Having information about the environment (i.e. perceptions) is 
consciousness. You cannot have perceptions without there being some 
process or thing to perceive them.


Therefore perceptions (i.e. consciousness) are a requirement and 
precondition of being able to perform intelligent actions.


I agree.  And therefore evolution can "see" consciousness, just like it 
can see metabolism.


Brent



Jason

The Turing Test is not perfect, it has a lot of flaws, but it's
all we've got. If something passes the Turing Test then it's
intelligent and conscious, but if it fails the test then it may or
may not be intelligent and or conscious.

/You need to have perceptions (of the environment, or the
current situation) in order to act intelligently. /


For intelligence to have evolved, and we know for a fact that it
has, you not only need to be able to perceive the environment, you
also need to be able to manipulate it. That's why zebras didn't
evolve great intelligence, they have no hands, so a brilliant
zebra wouldn't have a great advantage over a dumb zebra, in fact
he'd probably be at a disadvantage because a big brain is a great
energy hog.
John K Clark    See what's on my new list at Extropolis





Re: Are Philosophical Zombies possible?

2024-07-09 Thread Brent Meeker



On 7/8/2024 1:20 PM, Jason Resch wrote:



On Mon, Jul 8, 2024, 4:01 PM John Clark  wrote:

On Mon, Jul 8, 2024 at 2:23 PM Jason Resch 
wrote:

/> If you believe mental states do not cause anything, then
you believe philosophical zombies are logically possible
(since we could remove consciousness without altering behavior)./


Not if consciousness is the inevitable byproduct of intelligence,
and I'm almost certain that it is.


If consciousness is necessary for intelligence, then it's not a 
byproduct. If on the other hand, consciousness is just a useless 
byproduct, then it could (logically if not nomologically) be 
eliminated without affecting intelligence.


You seem to want it to be both necessary but also be something that 
makes no difference to anything (which makes it unnecessary).


I would be most curious to hear your thoughts regarding the section of 
my article on "Conscious behaviors" -- that is, behaviors which (seem 
to) require consciousness in order to do them.



/> I view mental states as high-level states operating in
their own regime of causality (much like a Java computer
program)./


I have no problem with that, actually it's very similar to my view.


That's good to hear.

/> The java computer program can run on any platform,
regardless of the particular physical nature of it./


Right. You could even say that "computer program" is not a noun,
it is an adjective, it is the way a computer will behave when the
machine's  logical states are organized in a certain way.  And "I"
is the way atoms behave when they are organized in a Johnkclarkian
way, and "you" is the way atoms behave when they are organized in
a Jasonreschian way.


I'm not opposed to that framing.


/> I view consciousness as like that high-level control
structure. It operates within a causal realm where ideas and
thoughts have causal influence and power, and can reach down
to the lower level to do things like trigger nerve impulses./


Consciousness is a high-level description of brain states that can
be extremely useful, but that doesn't mean that lower level and
much more finely grained description of brain states involving
nerve impulses, or even more finely grained descriptions involving
electrons and quarks are wrong, it's just that such level of
detail is unnecessary and impractical for some purposes.


I would even say, that at a certain level of abstraction, they become 
irrelevant. It is the result of what I call "a Turing firewall", 
software has no ability to know its underlying hardware 
implementation, it is an inviolable separation of layers of 
abstraction, which makes the lower levels invisible to the layers above.
That's roughly true, but not exactly.  If you think of intelligence 
implemented on a computer it would make a difference if it had a true 
random number generator (hardware) or not.  It would make a difference 
if it were a quantum computer or not.  And going the other way, what if 
it didn't have a multiply operation.  We're so accustomed to the standard 
Turing-complete von Neumann computer we take it for granted.


Brent

So the neurons and molecular forces aren't in the driver's seat for 
what goes on in the brain. That is the domain of higher-level 
structures and forces. We cannot ignore the lower levels completely; 
they provide the substrate upon which the higher levels are built. But 
I think it is an abuse of reductionism that leads people to say 
consciousness is an epiphenomenon that doesn't do anything. No one 
would try to explain why, when a glider in the Game of Life hits a 
block and causes it to self-destruct, it is due to quantum mechanics 
in our universe, rather than a consequence of the very different rules 
of the Game of Life as they operate in the Game of Life universe.
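The glider example can be run directly. Below is a minimal Game of Life step function (a standard implementation sketch, not code from the thread) showing the glider's period-4 diagonal drift following from Life's own rules — the host machine's physics plays no explanatory role:

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# A glider (y increases downward); it translates by (+1, +1) every 4 steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # → True
```

The explanation of the glider's motion lives entirely at the level of Life's rules — the same pattern of higher-level causality claimed for mental states.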


Jason


John K Clark    See what's on my new list at Extropolis

qb2


Re: Are Philosophical Zombies possible?

2024-07-09 Thread Brent Meeker




On 7/8/2024 11:40 AM, 'Cosmin Visan' via Everything List wrote:
Philosophical zombies are not possible, for the trivial reason that 
body doesn't even exist. "Body" is just an idea in consciousness. 

So is consciousness.

Brent



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Brent Meeker



On 7/8/2024 11:23 AM, Jason Resch wrote:



On Sun, Jul 7, 2024 at 3:14 PM John Clark  wrote:

On Sun, Jul 7, 2024 at 1:58 PM Jason Resch 
wrote:

/>>> // I think such foresight is a necessary
component of intelligence, not a "byproduct"./


>>I agree, I can detect the existence of foresight in others
and so can natural selection, and that's why we have it. 
It aids in getting our genes transferred into the next
generation. But I was talking about consciousness not
foresight, and regardless of how important we personally
think consciousness is, from evolution's point of view
it's utterly useless, and yet we have it, or at least I
have it.


/> you don't seem to think zombies are logically possible,/


Zombies are possible, it's philosophical zombies, a.k.a. smart
zombies, that are impossible because it's a brute fact that
consciousness is the way data behaves when it is being processed
intelligently, or at least that's what I think. Unless you believe
that all iterated sequences of "why" or "how" questions go on
forever then you must believe that brute facts exist; and I can't
think of a better candidate for one than consciousness.

/> so then epiphenomenalism is false/


According to the Internet Encyclopedia of Philosophy,
"/Epiphenomenalism is a position in the philosophy of mind
according to which mental states or events are caused by physical
states or events in the brain but do not themselves cause
anything/". If that is the definition then I believe in
Epiphenomenalism.


If you believe mental states do not cause anything, then you believe 
philosophical zombies are logically possible (since we could remove 
consciousness without altering behavior).


I view mental states as high-level states operating in their own 
regime of causality (much like a Java computer program). The java 
computer program can run on any platform, regardless of the particular 
physical nature of it. It has in a sense isolated itself from the 
causality of the electrons and semiconductors, and operates in its own 
realm of the causality of if statements, and for loops. Consider this 
program, for example:


[image attachment: twin-prime-program2.png — a program that halts upon finding a twin prime]

What causes the program to terminate? Is it the inputs, and the 
logical relation of primality, or is it the electrons flowing through 
the CPU? I would argue that the higher-level causality, regarding the 
logical relations of the inputs to the program logic is just as 
important. It determines the physics of things like when the program 
terminates. At this level, the microcircuitry is relevant only to its 
support of the higher level causal structures, but the program doesn't 
need to be aware of nor consider those low-level things. It operates 
the same regardless.


I view consciousness as like that high-level control structure. It 
operates within a causal realm where ideas and thoughts have causal 
influence and power, and can reach down to the lower level to do 
things like trigger nerve impulses.



Here is a quote from Roger Sperry, who eloquently describes what I am 
speaking of:



"I am going to align myself in a counterstand, along with that
approximately 0.1 per cent mentalist minority, in support of a
hypothetical brain model in which consciousness and mental forces
generally are given their due representation as important features
in the chain of control. These appear as active operational forces
and dynamic properties that interact with and upon the
physiological machinery. Any model or description that leaves out
conscious forces, according to this view, is bound to be pretty
sadly incomplete and unsatisfactory. The conscious mind in this
scheme, far from being put aside and dispensed with as an
"inconsequential byproduct," "epiphenomenon," or "inner aspect,"
as is the customary treatment these days, gets located, instead,
front and center, directly in the midst of the causal interplay of
cerebral mechanisms.

Mental forces in this particular scheme are put in the driver's
seat, as it were. They give the orders and they push and haul
around the physiology and physicochemical processes as much as or
more than the latter control them. This is a scheme that puts mind
back in its old post, over matter, in a sense-not under, outside,
or beside it. It's a scheme that idealizes ideas and ideals over
physico-chemical interactions, nerve impulse traffic-or DNA. It's
a brain model in which conscious, mental, psychic forces are
recognized to be the crowning achievement of some five hundred
million years or more of evolution.

[...] The basic reasoning is simple: First, we contend that
conscious or mental phenomena are dynamic, emergent, pattern (or
configurational) properties of the 

Re: Are Philosophical Zombies possible?

2024-07-09 Thread Brent Meeker



On 7/8/2024 11:12 AM, Jason Resch wrote:



On Mon, Jul 8, 2024 at 10:29 AM John Clark  wrote:


On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker
 wrote:

/>I thought it was obvious that foresight requires
consciousness. It requires the ability of think in terms of
future scenarios/


The keyword in the above is "think". Foresight means using logic
to predict, given current starting conditions, what the future
will likely be, and determining how a change in the initial
conditions will likely affect the future.  And to do any of that
requires intelligence. Both Large Language Models and picture to
video AI programs have demonstrated that they have foresight; if
you ask them what will happen if you cut the string holding down a
helium balloon they will tell you it will float away, but if you
add that the instant the string is cut an Olympic high jumper will
make a grab for the dangling string they will tell you what will
likely happen then too. So yes, foresight does imply consciousness
because foresight demands intelligence and consciousness is the
inevitable byproduct of intelligence.


Consciousness is a prerequisite of intelligence. One can be conscious 
without being intelligent, but one cannot be intelligent without being 
conscious.
Someone with locked-in syndrome can do nothing, and can exhibit no 
intelligent behavior. They have no measurable intelligence. Yet they 
are conscious. You need to have perceptions (of the environment, or 
the current situation) in order to act intelligently. It is in having 
perceptions that consciousness appears. So consciousness is not a 
byproduct of, but an integral and necessary requirement for 
intelligent action.
And not necessarily a high-level language-based consciousness. Paramecia 
act intelligently based on perception of chemical gradients.  So one 
would say they are conscious of said gradients.


Brent


Jason

/> in which you are an actor/


Obviously any intelligence will have to take its own actions in
account to determine what the likely future will be. After a LLM
gives you an answer to a question, based on that answer I'll bet
an AI  will be able to make a pretty good guess what your next
question to it will be.

John K Clark    See what's on my new list at Extropolis






-- 
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/everything-list/CAJPayv1rXGetCmp5R8Zpakx5YVHdkNJMn-OrwL7Z3-E9Aka73g%40mail.gmail.com.





Re: Are Philosophical Zombies possible?

2024-07-09 Thread John Clark
On Tue, Jul 9, 2024 at 4:29 PM Brent Meeker  wrote:

* >So you wrote a whole paragraph but it's unclear whether you are agreeing
> with me that consciousness is NOT just some mysterious byproduct of
> intelligence,*
>

Consciousness is not mysterious, unless you think a brute fact is
mysterious, but there are only two ways an iterative sequence of "how" or
"why" questions can go, it can either terminate with a brute fact or it
goes on forever. I think an iterated sequence of questions going on forever
is far more mysterious than a brute fact. And I think it's a brute fact
that consciousness is the way data feels when it is being processed.


> *>but is an essential source of intelligent actions because it provides
> plans and evaluates planned actions and scenarios.*
>

You've got cause-and-effect mixed up. Consciousness is not a source of
intelligent action, consciousness is an inevitable consequence of
intelligence.

John K Clark    See what's on my new list at Extropolis



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Brent Meeker
So you wrote a whole paragraph but it's unclear whether you are agreeing 
with me that consciousness is NOT just some mysterious byproduct of 
intelligence, but is an essential source of intelligent actions because 
it provides plans and evaluates planned actions and scenarios.


Brent

On 7/8/2024 7:28 AM, John Clark wrote:


On Sun, Jul 7, 2024 at 9:28 PM Brent Meeker  wrote:

/>I thought it was obvious that foresight requires consciousness.
It requires the ability to think in terms of future scenarios/


The keyword in the above is "think". Foresight means using logic to 
predict, given current starting conditions, what the future will 
likely be, and determining how a change in the initial conditions will 
likely affect the future.  And to do any of that requires 
intelligence. Both Large Language Models and picture-to-video AI 
programs have demonstrated that they have foresight; if you ask them 
what will happen if you cut the string holding down a helium balloon, 
they will tell you it will float away, but if you add that the instant 
the string is cut an Olympic high jumper will make a grab for the 
dangling string, they will tell you what will likely happen then too. So yes, 
foresight does imply consciousness because foresight demands 
intelligence and consciousness is the inevitable byproduct of 
intelligence.


/> in which you are an actor/


Obviously any intelligence will have to take its own actions into 
account to determine what the likely future will be. After an LLM gives 
you an answer to a question, based on that answer I'll bet an AI will 
be able to make a pretty good guess what your next question to it will 
be.


John K Clark    See what's on my new list at Extropolis 











Re: Are Philosophical Zombies possible?

2024-07-09 Thread 'Cosmin Visan' via Everything List
Brain doesn't exist. "Brain" is just an idea in consciousness.

On Tuesday 9 July 2024 at 20:47:44 UTC+3 Stathis Papaioannou wrote:

>
>
> Stathis Papaioannou
>
>
> On Wed, 10 Jul 2024 at 02:12, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jul 9, 2024, 11:18 AM Stathis Papaioannou  
>> wrote:
>>
>>>
>>>
>>> Stathis Papaioannou
>>>
>>>
>>> On Wed, 10 Jul 2024 at 00:34, Jason Resch  wrote:
>>>


 On Tue, Jul 9, 2024, 10:16 AM Stathis Papaioannou  
 wrote:

>
>
> Stathis Papaioannou
>
>
> On Tue, 9 Jul 2024 at 22:15, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou  
>> wrote:
>>
>>>
>>>
>>> On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:
>>>


 On Sun, Jul 7, 2024 at 3:14 PM John Clark  
 wrote:

> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch  
> wrote:
>
> *>>> ** I think such foresight is a necessary component of 
 intelligence, not a "byproduct".*
>>>
>>>
>>> >>I agree, I can detect the existence of foresight in others 
>>> and so can natural selection, and that's why we have it.  It aids 
>>> in 
>>> getting our genes transferred into the next generation. But I was 
>>> talking 
>>> about consciousness not foresight, and regardless of how important 
>>> we 
>>> personally think consciousness is, from evolution's point of 
>>> view it's utterly useless, and yet we have it, or at least I have 
>>> it. 
>>>
>>
>> *> you don't seem to think zombies are logically possible,*
>>
>
> Zombies are possible, it's philosophical zombies, a.k.a. smart 
> zombies, that are impossible because it's a brute fact that 
> consciousness 
> is the way data behaves when it is being processed intelligently, 
> or at least that's what I think. Unless you believe that all 
> iterated sequences of "why" or "how" questions go on forever then 
> you must believe that brute facts exist; and I can't think of a 
> better 
> candidate for one than consciousness.
>
> *> so then epiphenomenalism is false*
>>
>
> According to the Internet Encyclopedia of Philosophy 
> "*Epiphenomenalism 
> is a position in the philosophy of mind according to which mental 
> states or 
> events are caused by physical states or events in the brain but do 
> not 
> themselves cause anything*". If that is the definition then I 
> believe in Epiphenomenalism.
>

 If you believe mental states do not cause anything, then you 
 believe philosophical zombies are logically possible (since we could 
 remove 
 consciousness without altering behavior).

>>>  
>>> Mental states could be necessarily tied to physical states without 
>>> having any separate causal efficacy, and zombies would not be logically 
>>> possible. Software is necessarily tied to hardware activity: if a 
>>> computer 
>>> runs a particular program, it is not optional that the program is 
>>> implemented. However, the software does not itself have causal 
>>> efficacy, 
>>> causing current to flow in wires and semiconductors and so on: there is 
>>> always a sufficient explanation for such activity in purely physical 
>>> terms.
>>>
>>
>> I don't disagree that there is sufficient explanation in all the 
>> particle movements all following physical laws.
>>
>> But then consider the question, how do we decide what level is in 
>> control? You make the case that we should consider the quantum field 
>> level 
>> in control because everything is ultimately reducible to it.
>>
>> But I don't think that's the best metric for deciding whether it's in 
>> control or not. Do the molecules in the brain tell neurons what to do, or 
>> do 
>> neurons tell molecules what to do (e.g. when they fire)? Or is it some 
>> mutually conditioned relationship?
>>
>> Do neurons fire on their own and tell brains what to do, or do 
>> neurons only fire when other neurons of the whole brain stimulate them 
>> appropriately so they have to fire? Or is it again, another case of 
>> mutualism?
>>
>> When two people are discussing ideas, are the ideas determining how 
>> each brain thinks and responds, or are the brains determining the ideas 
>> by 
>> virtue of generating the words through which they are expressed?
>>
>> Though in each of these cases, we can always drop a layer and 
>> explain all the events at that layer, that is not (in my view) enough of 
>> a 
>> reason to argue that the events at that layer are "in charge."

Re: Are Philosophical Zombies possible?

2024-07-09 Thread Stathis Papaioannou
Stathis Papaioannou


On Wed, 10 Jul 2024 at 02:12, Jason Resch  wrote:

>
>
> On Tue, Jul 9, 2024, 11:18 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> Stathis Papaioannou
>>
>>
>> On Wed, 10 Jul 2024 at 00:34, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, Jul 9, 2024, 10:16 AM Stathis Papaioannou 
>>> wrote:
>>>


 Stathis Papaioannou


 On Tue, 9 Jul 2024 at 22:15, Jason Resch  wrote:

>
>
> On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 9 Jul 2024 at 04:23, Jason Resch 
>> wrote:
>>
>>>
>>>
>>> On Sun, Jul 7, 2024 at 3:14 PM John Clark 
>>> wrote:
>>>
 On Sun, Jul 7, 2024 at 1:58 PM Jason Resch 
 wrote:

 *>>> ** I think such foresight is a necessary component of
>>> intelligence, not a "byproduct".*
>>
>>
>> >>I agree, I can detect the existence of foresight in others and
>> so can natural selection, and that's why we have it.  It aids in 
>> getting
>> our genes transferred into the next generation. But I was talking 
>> about
>> consciousness not foresight, and regardless of how important we 
>> personally
>> think consciousness is, from evolution's point of view it's
>> utterly useless, and yet we have it, or at least I have it.
>>
>
> *> you don't seem to think zombies are logically possible,*
>

 Zombies are possible, it's philosophical zombies, a.k.a. smart
 zombies, that are impossible because it's a brute fact that 
 consciousness
 is the way data behaves when it is being processed intelligently,
 or at least that's what I think. Unless you believe that all
 iterated sequences of "why" or "how" questions go on forever then
 you must believe that brute facts exist; and I can't think of a better
 candidate for one than consciousness.

 *> so then epiphenomenalism is false*
>

 According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
 is a position in the philosophy of mind according to which mental 
 states or
 events are caused by physical states or events in the brain but do not
 themselves cause anything*". If that is the definition then I
 believe in Epiphenomenalism.

>>>
>>> If you believe mental states do not cause anything, then you believe
>>> philosophical zombies are logically possible (since we could remove
>>> consciousness without altering behavior).
>>>
>>
>> Mental states could be necessarily tied to physical states without
>> having any separate causal efficacy, and zombies would not be logically
>> possible. Software is necessarily tied to hardware activity: if a 
>> computer
>> runs a particular program, it is not optional that the program is
>> implemented. However, the software does not itself have causal efficacy,
>> causing current to flow in wires and semiconductors and so on: there is
>> always a sufficient explanation for such activity in purely physical 
>> terms.
>>
>
> I don't disagree that there is sufficient explanation in all the
> particle movements all following physical laws.
>
> But then consider the question, how do we decide what level is in
> control? You make the case that we should consider the quantum field level
> in control because everything is ultimately reducible to it.
>
> But I don't think that's the best metric for deciding whether it's in
> control or not. Do the molecules in the brain tell neurons what to do, or do
> neurons tell molecules what to do (e.g. when they fire)? Or is it some
> mutually conditioned relationship?
>
> Do neurons fire on their own and tell brains what to do, or do neurons
> only fire when other neurons of the whole brain stimulate them
> appropriately so they have to fire? Or is it again, another case of
> mutualism?
>
> When two people are discussing ideas, are the ideas determining how
> each brain thinks and responds, or are the brains determining the ideas by
> virtue of generating the words through which they are expressed?
>
> Though in each of these cases, we can always drop a layer and explain
> all the events at that layer, that is not (in my view) enough of a reason
> to argue that the events at that layer are "in charge." Control 
> structures,
> such as whole brain regions, or complex computer programs, can involve and
> be influenced by the actions of billions of separate events and separate
> parts, and as such, they transcend the behaviors of any single physical
> particle or physical law.
>
> Consider: whether or not a program halts might only be determinable by
> some rules

Re: Are Philosophical Zombies possible?

2024-07-09 Thread John Clark
On Tue, Jul 9, 2024 at 12:16 PM Jason Resch  wrote:




> You can't make lemonade without lemons, and lemons can't make lemonade
>> without you.
>>
>
> And this highlights the distinction between a prerequisite and a cause.
>

I don't see how. Lemons are a cause of lemonade and so are you. Lemons are
a prerequisite for lemonade, and so are you.









> *> I define intelligence by something capable of intelligent action.*
>>>
>>
>> Intelligent action is what drove evolution to amplify intelligence, but
>> if Stephen Hawking's voice generator had broken down for one hour I would
>> still say I have  reason to believe that he remained intelligent during
>> that hour.
>>
>
> *> Sure, but that is just a delayed action. Would he still be intelligent
> if he never was able to speak again (even with the help of a machine)?*
>

Yes, although we would never know it. Maybe rocks are brilliant but shy and
don't like to show off, so they play dumb. But I doubt it.


> * > He wouldn't be according to evolution.*
>

*True, but natural selection has its opinion about what is intelligent and
I have mine.  *

>> But you can't have perceptions without intelligence, sight and sound
>> would just be meaningless gibberish.
>>
>
> > *How do you define intelligence?*
>

I don't have a definition for intelligence but I have something much
better, examples. After all, definitions are made of words and thus all
definitions are inherently circular, examples are the only thing that give
meaning to words. Intelligence is the thing that Einstein was famous for
having a lot of; from that even somebody who knew nothing about Einstein
except for having read a child's biography of the man would understand what
the word "intelligence" stands for.

>> in the last few years there has been enormous progress in figuring out
>> how intelligence works, but nobody has found anything new to say about
>> consciousness in centuries.
>>
>
> *> You don't think functionalism is progress?*
>

I don't think it says anything fundamental about consciousness that hadn't
been said a thousand times many centuries ago.

John K Clark    See what's on my new list at Extropolis




Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 10:50 AM John Clark  wrote:

> On Tue, Jul 9, 2024 at 8:31 AM Jason Resch  wrote:
>
> >> My dictionary says the definition of "*prerequisite*"  is  "*a thing
>>> that is required as a prior condition for something else to happen or 
>>> exist*". And
>>> it says the definition of "*cause*" is "*a person or thing that gives
>>> rise to an action, phenomenon, or condition*". So cause and
>>> prerequisite are synonyms.
>>>
>>
>> *> There's a subtle distinction. Muscles and bones are prerequisites for
>> limbs, but muscles and bones do not cause limbs.*
>>
>
> There are many things that caused limbs to come into existence, one of
> them was the existence of muscles, another was the existence of bones,
> and yet another was the help limbs gave to organisms in getting genes into
> the next generation.
>
> *> Lemons are a prerequisite for lemonade, but do not cause lemonade.*
>>
>
> You can't make lemonade without lemons, and lemons can't make lemonade
> without you.
>

And this highlights the distinction between a prerequisite and a cause.



>
>> *> I define intelligence by something capable of intelligent action.*
>>
>
> Intelligent action is what drove evolution to amplify intelligence, but if
> Stephen Hawking's voice generator had broken down for one hour I would
> still say I have  reason to believe that he remained intelligent during
> that hour.
>


Sure, but that is just a delayed action. Would he still be intelligent if
he never was able to speak again (even with the help of a machine)? He
wouldn't be according to evolution.


>
> *> Intelligent action requires non random choice:*
>>
>
> If it's non-random then by definition it is deterministic.
>

We aren't debating free will here. Not sure why you mention this.


> > *Having information about the environment (i.e. perceptions) is
>> consciousness.*
>>
>
> But you can't have perceptions without intelligence, sight and sound
> would just be meaningless gibberish.
>

How do you define intelligence?


> > *You cannot have perceptions without there being some process or thing
>> to perceive them.*
>>
>
> Yes, and that thing is intelligence.
>
> *> Therefore perceptions (i.e. consciousness) is a requirement and
>> precondition of being able to perform intelligent actions.*
>>
>
> The only perceptions we have firsthand experience with are our own, so
> investigating perceptions is not very useful in Philosophy or in trying to
> figure out how the world works, but intelligence is another matter
> entirely.
>

It is if we want to answer the question of why consciousness evolved.


That's why in the last few years there has been enormous progress in
> figuring out how intelligence works, but nobody has found anything new to
> say about consciousness in centuries.
>

You don't think functionalism is progress?

Jason



> John K Clark
>
>>
>



Re: Are Philosophical Zombies possible?

2024-07-09 Thread Jason Resch
On Tue, Jul 9, 2024, 11:18 AM Stathis Papaioannou 
wrote:

>
>
> Stathis Papaioannou
>
>
> On Wed, 10 Jul 2024 at 00:34, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jul 9, 2024, 10:16 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> Stathis Papaioannou
>>>
>>>
>>> On Tue, 9 Jul 2024 at 22:15, Jason Resch  wrote:
>>>


 On Tue, Jul 9, 2024, 4:33 AM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 9 Jul 2024 at 04:23, Jason Resch  wrote:
>
>>
>>
>> On Sun, Jul 7, 2024 at 3:14 PM John Clark 
>> wrote:
>>
>>> On Sun, Jul 7, 2024 at 1:58 PM Jason Resch 
>>> wrote:
>>>
>>> *>>> ** I think such foresight is a necessary component of
>> intelligence, not a "byproduct".*
>
>
> >>I agree, I can detect the existence of foresight in others and
> so can natural selection, and that's why we have it.  It aids in 
> getting
> our genes transferred into the next generation. But I was talking 
> about
> consciousness not foresight, and regardless of how important we 
> personally
> think consciousness is, from evolution's point of view it's
> utterly useless, and yet we have it, or at least I have it.
>

 *> you don't seem to think zombies are logically possible,*

>>>
>>> Zombies are possible, it's philosophical zombies, a.k.a. smart
>>> zombies, that are impossible because it's a brute fact that 
>>> consciousness
>>> is the way data behaves when it is being processed intelligently,
>>> or at least that's what I think. Unless you believe that all
>>> iterated sequences of "why" or "how" questions go on forever then
>>> you must believe that brute facts exist; and I can't think of a better
>>> candidate for one than consciousness.
>>>
>>> *> so then epiphenomenalism is false*

>>>
>>> According to the Internet Encyclopedia of Philosophy "*Epiphenomenalism
>>> is a position in the philosophy of mind according to which mental 
>>> states or
>>> events are caused by physical states or events in the brain but do not
>>> themselves cause anything*". If that is the definition then I
>>> believe in Epiphenomenalism.
>>>
>>
>> If you believe mental states do not cause anything, then you believe
>> philosophical zombies are logically possible (since we could remove
>> consciousness without altering behavior).
>>
>
> Mental states could be necessarily tied to physical states without
> having any separate causal efficacy, and zombies would not be logically
> possible. Software is necessarily tied to hardware activity: if a computer
> runs a particular program, it is not optional that the program is
> implemented. However, the software does not itself have causal efficacy,
> causing current to flow in wires and semiconductors and so on: there is
> always a sufficient explanation for such activity in purely physical 
> terms.
>

 I don't disagree that there is sufficient explanation in all the
 particle movements all following physical laws.

 But then consider the question, how do we decide what level is in
 control? You make the case that we should consider the quantum field level
 in control because everything is ultimately reducible to it.

 But I don't think that's the best metric for deciding whether it's in
 control or not. Do the molecules in the brain tell neurons what to do, or do
 neurons tell molecules what to do (e.g. when they fire)? Or is it some
 mutually conditioned relationship?

 Do neurons fire on their own and tell brains what to do, or do neurons
 only fire when other neurons of the whole brain stimulate them
 appropriately so they have to fire? Or is it again, another case of
 mutualism?

 When two people are discussing ideas, are the ideas determining how
 each brain thinks and responds, or are the brains determining the ideas by
 virtue of generating the words through which they are expressed?

 Though in each of these cases, we can always drop a layer and explain
 all the events at that layer, that is not (in my view) enough of a reason
 to argue that the events at that layer are "in charge." Control structures,
 such as whole brain regions, or complex computer programs, can involve and
 be influenced by the actions of billions of separate events and separate
 parts, and as such, they transcend the behaviors of any single physical
 particle or physical law.

 Consider: whether or not a program halts might only be determinable by
 some rules and proof in a mathematical system, and in this case no physical
 law will reveal the answer to that physical system's (the computer's)
 behavior. So if higher level laws are required in the explanation, does it
>>>
