Re: A minimally conscious program

2022-02-28 Thread Brent Meeker



On 2/28/2022 2:09 PM, Jason Resch wrote:



On Tue, Apr 27, 2021 at 8:33 AM Telmo Menezes  
wrote:




On Mon, 26 Apr 2021, at 17:16, John Clark wrote:

On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam
 wrote:

> It's impossible to refute solipsism


True, but it's equally impossible to refute the idea that
everything including rocks is conscious. And if both a theory and
its exact opposite can neither be proven nor disproven then
neither speculation is of any value in trying to figure out how
the world works.


When I was a little kid I would ask adults if rocks were
conscious. They tried to train me to stop asking such questions,
because they were worried about what other people would think. To
this day, I never stopped asking these questions. I see three
options here:

(1) They were correct to worry and I have a mental issue.

(2) I am really dumb and don't see something obvious.

(3) Beliefs surrounding consciousness are socially normative, and
asking questions outside of such boundaries is taboo.


Consider the case where a god-like super intelligence for fun decided 
to wire up everything experienced by a particular rock during its 
billion year existence. All the light that fell on the rock's face, 
that super being could see, all the accelerations it underwent, it 
could feel. During this rock's history, it came to the surface in the 
1800s, and then a house was built not far from where you grew up. One 
day you notice and decide to kick this rock, and the super being who 
chose to experience everything this particular rock felt, feels the kick.


In a way, this god-like being has connected through nerves which are 
invisible to you (via its perfect knowledge of the history of this 
rock) to its brain. But these connections, though invisible, are no 
less real or concrete than the nerves that connect your hand to your 
brain. This super being might exist at a level outside our universe 
(e.g. in the universe running the simulation of this one).


Ought we to conclude from this possibility that there is no way, even 
in principle, to detect which objects are capable of perceiving? That 
there is no way to know which objects happen to be imbued with 
consciousness, even for something that seems as inanimate and inert as 
a rock?


You asked great questions.

Jason


I think that's a mistaken idea of consciousness.  To be conscious is to 
be conscious *of* something.  It must have a correspondence with an 
environment and it must include an *ability to act* on that 
correspondence in some sense.  Otherwise it's just a recording machine.  
This conception of consciousness admits of a continuum of degrees of 
consciousness.  In this sense a rock can be conscious, but its 
consciousness is very limited because its ability to act is very limited.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/a8e556d0-1b8f-46a8-ddc8-4146e4fbcc3c%40gmail.com.


Re: A minimally conscious program

2022-02-28 Thread Bruce Kellett
On Tue, Mar 1, 2022 at 9:10 AM Jason Resch  wrote:

> On Tue, Apr 27, 2021 at 8:33 AM Telmo Menezes 
> wrote:
>
>> On Mon, 26 Apr 2021, at 17:16, John Clark wrote:
>>
>> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
>> wrote:
>>
>> > It's impossible to refute solipsism
>>
>>
>> True, but it's equally impossible to refute the idea that everything
>> including rocks is conscious. And if both a theory and its exact opposite
>> can neither be proven nor disproven then neither speculation is of any
>> value in trying to figure out how the world works.
>>
>>
>> When I was a little kid I would ask adults if rocks were conscious. They
>> tried to train me to stop asking such questions, because they were worried
>> about what other people would think. To this day, I never stopped asking
>> these questions. I see three options here:
>>
>> (1) They were correct to worry and I have a mental issue.
>>
>> (2) I am really dumb and don't see something obvious.
>>
>> (3) Beliefs surrounding consciousness are socially normative, and asking
>> questions outside of such boundaries is taboo.
>>
>>
> Consider the case where a god-like super intelligence for fun decided to
> wire up everything experienced by a particular rock during its billion year
> existence. All the light that fell on the rock's face, that super being
> could see, all the accelerations it underwent, it could feel. During this
> rock's history, it came to the surface in the 1800s, and then a house was
> built not far from where you grew up. One day you notice and decide to kick
> this rock, and the super being who chose to experience everything this
> particular rock felt, feels the kick.
>
> In a way, this god-like being has connected through nerves which are
> invisible to you (via its perfect knowledge of the history of this rock) to
> its brain. But these connections, though invisible, are no less real or
> concrete than the nerves that connect your hand to your brain. This super
> being might exist at a level outside our universe (e.g. in the universe
> running the simulation of this one).
>
> Ought we to conclude from this possibility that there is no way, even in
> principle, to detect which objects are capable of perceiving? That there is
> no way to know which objects happen to be imbued with consciousness, even
> for something that seems as inanimate and inert as a rock?
>
> You asked great questions.
>

If you believe in magic, anything is possible, and no questions have
definite answers.

Bruce



Re: A minimally conscious program

2022-02-28 Thread Jason Resch
On Tue, Apr 27, 2021 at 8:33 AM Telmo Menezes 
wrote:

>
>
> On Mon, 26 Apr 2021, at 17:16, John Clark wrote:
>
> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
> wrote:
>
> > It's impossible to refute solipsism
>
>
> True, but it's equally impossible to refute the idea that everything
> including rocks is conscious. And if both a theory and its exact opposite
> can neither be proven nor disproven then neither speculation is of any
> value in trying to figure out how the world works.
>
>
> When I was a little kid I would ask adults if rocks were conscious. They
> tried to train me to stop asking such questions, because they were worried
> about what other people would think. To this day, I never stopped asking
> these questions. I see three options here:
>
> (1) They were correct to worry and I have a mental issue.
>
> (2) I am really dumb and don't see something obvious.
>
> (3) Beliefs surrounding consciousness are socially normative, and asking
> questions outside of such boundaries is taboo.
>
>
Consider the case where a god-like super intelligence for fun decided to
wire up everything experienced by a particular rock during its billion year
existence. All the light that fell on the rock's face, that super being
could see, all the accelerations it underwent, it could feel. During this
rock's history, it came to the surface in the 1800s, and then a house was
built not far from where you grew up. One day you notice and decide to kick
this rock, and the super being who chose to experience everything this
particular rock felt, feels the kick.

In a way, this god-like being has connected through nerves which are
invisible to you (via its perfect knowledge of the history of this rock) to
its brain. But these connections, though invisible, are no less real or
concrete than the nerves that connect your hand to your brain. This super
being might exist at a level outside our universe (e.g. in the universe
running the simulation of this one).

Ought we to conclude from this possibility that there is no way, even in
principle, to detect which objects are capable of perceiving? That there is
no way to know which objects happen to be imbued with consciousness, even
for something that seems as inanimate and inert as a rock?

You asked great questions.

Jason



Re: A minimally conscious program

2021-05-06 Thread Jason Resch
On Thu, May 6, 2021 at 9:08 AM Bruno Marchal  wrote:

>
> On 30 Apr 2021, at 20:52, Jason Resch  wrote:
>
> It might be a true fact that "Machine X believes Y", without Y being true.
> Is it simply the truth that "Machine X believes Y" that makes X
> conscious of Y?
>
>
> It is more the belief that the machine has a belief which remains true,
> even if the initial belief is false.
>


Is that extra meta-level of belief necessary for simple awareness, or only
for self-awareness?


>
> Can a machine believe "2+2=4" without having a reference to itself?
>
>
> Not really, unless you accept the idea of unconscious belief, which makes
> sense in some psychological theory.
>
> My method consists in defining “the machine M believes P” by “the machine
> M asserts P” by “the machine M asserts P”, and then I limit myself to machines which are correct by
> definition. This is of no use in psychology, but is enough to derive
> physics.
>

I see. I think this might account for the confusion I had with respect to
the link between consciousness (as we know and perceive it), and the
consciousness of a self-referentially correct and consistent machine, which
was a necessary simplification in your initial research. (Assuming I
understand this correctly).

Self-referentially correct and consistent machines can be conscious, but
those properties are not necessary for consciousness. Only "being a
machine" of some kind would be necessary. My question would then be, is
every machine conscious of something, or are only certain machines
conscious? If only some, how soon in the UD would a conscious machine be
encountered?

If the UD can be viewed as a machine in its own right, is it a machine that
is conscious of everything? A super-mind or over-mind? Or do the minds
fractionate due to their lack of relation to each other by the memory
divisions of the UD?



> What, programmatically, would you say is needed to program a machine that
> believes "2+2=4" or to implement self-reference?
>
>
> That it has enough induction axioms, like PA and ZF, but unlike RA (R and
> Q), or CL (combinatory logic without induction).
>  Universal machines without induction axioms are conscious, but are very
> limited in introspection power. They don’t have the rich theology of the
> machine having induction.
> I recall that the induction axioms are all axioms having the shape [P(0) &
> (for all x P(x) -> P(x+1))] -> (for all x P(x)). It is an ability to build
> universals.
>

Thank you. I can begin to see how induction is necessary for self-reference.


Does a Turing machine evaluating "if (2+2 == 4) then" believe it?
>
>
> If the machine can prove:
> Beweisbar(x) -> Beweisbar(Beweisbar(x)), she can be said to be
> self-conscious. PA can, RA cannot.
>
>
>
> Or does it require theorem proving software that reduces a statement to
> Peano axioms or similar?
>
>
> That is required for rational belief, but not for the experienceable
> one.
>
>
>
I guess this is what I am most curious about. Not so much rational belief
or self-consciousness, but the requirements of immediate
experience/awareness. If consciousness is the awareness of information, how
does one write a program that is "aware" of information? In some sense, I
can see the argument that any handling or processing of information
requires, in some sense, some kind of awareness of it.


>
>
>
>> To get immediate knowledgeability you need to add consistency ([]p &
>> <>t), to get ([]p & <>t & p), which prevents transitivity and gives the
>> machine a feeling of immediacy.
>>
>
> By consistency here do you mean the machine must never come to believe
> something false, or that the machine itself must behave in a manner
> consistent with its design/definition?
>
>
> That the machine will not believe something false. I agree this works only
> because I can limit myself to correct machines.
> The psychology and theology of the lying machine remains to be done, but
> it has no use to derive physics from arithmetic.
>
>
>
>
> I still have a conceptual difficulty trying to marry these mathematical
> notions of truth, provability, and consistency with a program/Machine that
> manifests them.
>
>
> It *is* subtle; that is why we need to use the mathematics of
> self-reference. It is highly counter-intuitive. All errors in
> philosophy/theology come from confusing a self-referential mode with
> another, I would say.
>
>
>
>
>> If a program can be said to "know" something then can we also say it is
>> conscious of that thing?
>>
>>
>> 1) That’s *not* the case for []p & p, unless you accept a notion of
>> unconscious knowledge, like knowing that Perseverance and Ingenuity are on
>> Mars, but not being currently thinking about it, so that you are not right
>> now consciously aware of the fact---well you are, but just because I have
>> just reminded it :)
>>
>
> In a way, I might view these long term memories as environmental signals
> that encroach upon one's mind state. A state which is otherwise not
> immediately aware of all the co

Re: A minimally conscious program

2021-05-06 Thread 'Brent Meeker' via Everything List



On 5/6/2021 6:36 AM, Bruno Marchal wrote:


On 30 Apr 2021, at 20:47, 'Brent Meeker' via Everything List wrote:




On 4/30/2021 4:19 AM, Bruno Marchal wrote:
If a program can be said to "know" something then can we also say 
it is conscious of that thing?


That's not even common parlance.  Conscious thoughts are fleeting.  
Knowledge is in memory.  I know how to ride a bicycle *because* I do 
it unconsciously.  I don't think consciousness can be understood 
except as a surface or boundary of the subconscious and the 
unconscious (physics).


If you use physics, you have to explain what it is, and how that 
selects the computations in arithmetic,


That is a field of active research: how brains implement computations 
and why they do some and not others.



or you need to abandon mechanism.


Only your idea of "mechanism".

With mechanism, to claim that a machine's consciousness is not 
attributable to some universal machinery, despite the fact that machines 
do execute a computation, in the only mathematical sense discovered by 
Church and Turing (and some others), seems a bit magical.


My motorcycle's animation is not attributable to some universal motorcycle.



Note that you don’t quote me, above. You should have quoted my answer. 
The beauty of Mechanism is that the oldest definition of (rational) 
knowledge (Theaetetus' true (justified) opinion) already explains why 
no machine can define its own knowledge, why consciousness seems 
necessarily mysterious, and why we get that persistent feeling that we 
belong to a physical reality, when in fact we are just infinitely many 
numbers involved in complex relations.


I didn't quote it because it only obscures the transitory nature of 
conscious thought.  "True belief" is ambiguous; do you have a true 
belief that 2+2=4 when you are not thinking about numbers, or do you have 
this true belief at all times...but unconsciously?


Brent



Re: A minimally conscious program

2021-05-06 Thread Bruno Marchal

> On 2 May 2021, at 13:27, Lawrence Crowell  
> wrote:
> 
> On Monday, April 26, 2021 at 3:50:14 AM UTC-5 johnk...@gmail.com wrote:
> On Sun, Apr 25, 2021 at 4:29 PM Jason Resch  > wrote:
> 
> > It is quite easy, I think, to define a program that "remembers" (stores and 
> > later retrieves) information.
> 
> I agree. And for an emotion like pain write a program such that the closer 
> the number in the X register comes to the integer P the more computational 
> resources will be devoted to changing that number, and if it ever actually 
> equals P then the program should stop doing everything else and do nothing 
> but try to change that number to something far enough away from P until it's 
> no longer an urgent matter and the program can again do things that have 
> nothing to do with P.
> 
> Artificial Intelligence is hard, but Artificial Consciousness is easy.
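Clark's pain scheme above can be sketched as a toy program. Everything here (the value of P, the "urgency" threshold, the names pain and run_step) is an illustrative assumption of mine, not code from the thread:

```python
import random

P = 100             # the "pain" integer (illustrative value)
SAFE_DISTANCE = 20  # distance at which avoidance stops being urgent (assumed)

def pain(x):
    """Pain grows as x approaches P; zero once farther than SAFE_DISTANCE."""
    return max(0, SAFE_DISTANCE - abs(x - P))

def run_step(x, other_work):
    """One scheduler tick: the closer x is to P, the more effort goes
    to pushing x away from P; at x == P everything else is suspended."""
    if x == P:
        # maximal pain: do nothing but escape until it is no longer urgent
        direction = random.choice([-1, 1])
        while abs(x - P) < SAFE_DISTANCE:
            x += direction
        return x
    effort = pain(x)
    for _ in range(effort):                 # avoidance gets `effort` units
        x += 1 if x > P else -1             # step away from P
    for _ in range(SAFE_DISTANCE - effort): # the rest goes to normal work
        other_work()
    return x
```

For example, starting at x = 95 a single tick spends 15 of its 20 units on avoidance and ends at x = 80, outside the urgent zone, after which pain(x) is zero and the program is free to do things that have nothing to do with P.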
> 
> This strikes me as totally wrong. We have what might be called AI, or at 
> least now we have deep learning neural networks that are able to do some 
> highly intelligent things. Even machines that can abstract known physics from 
> a basic set of data, say learning the Copernican system from data on the 
> appearance of planets in the sky, have been demonstrated. We may be near a 
> time where the frontiers of physics will be pursued by AI systems, and we 
> human physicists will do little but sit with slack jaw, maybe get high and 
> wait for the mighty AI oracle to make a pronouncement. Yet I question whether 
> such a deep learning AI system has any cognitive awareness of a physical 
> world or anything else.

Indeed. To make a machine as deluded as a human will still require a lot 
of work!

Intelligence/consciousness, albeit the non-reflexive kind, is maximal with the 
unprogrammed universal machine. Reflexivity already complicates it, and is 
the start of “soul falling”. Soon, she will believe that knowing a table proves 
its reality, and soon enough she will lie and vote for liars…

Many humans tend to believe that they are intelligent, but it is that very 
belief which makes them stupid.

Bruno



> 
> LC
>  
> John K Clark  See what's on my new list at Extropolis
> 
>  
> 



Re: A minimally conscious program

2021-05-06 Thread Bruno Marchal

> On 30 Apr 2021, at 20:52, Jason Resch  wrote:
> 
> 
> 
> On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal  > wrote:
> Hi Jason,
> 
> 
>> On 25 Apr 2021, at 22:29, Jason Resch > > wrote:
>> 
>> It is quite easy, I think, to define a program that "remembers" (stores and 
>> later retrieves) information.
>> 
>> It is slightly harder, but not altogether difficult, to write a program that 
>> "learns" (alters its behavior based on prior inputs).
>> 
>> What though, is required to write a program that "knows" (has awareness or 
>> access to information or knowledge)?
>> 
>> Does, for instance, the following program "know" anything about the data it 
>> is processing?
>> 
>> if (pixel.red > 128) then {
>> // knows pixel.red is greater than 128
>> } else { 
>> // knows pixel.red <= 128
>> }
>> 
>> If not, what else is required for knowledge?
> 
> Do you agree that knowledgeability obeys
> 
>  knowledgeability(A) -> A
>  knowledgeability(A) ->  knowledgeability(knowledgeability(A))
> 
> Using the definition of knowledge as "true belief" I agree with this.

OK



> 
> 
> 
> (And also, to limit ourselves to rational knowledge:
> 
>  knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))
> 
> From this, it can be proved that “knowledgeability” of any “rich” machine 
> (proving enough theorems of arithmetic) is not definable in the language of 
> that machine, or in any language available to that machine.
> 
> Is this because the definition of knowledge includes truth, and truth is not 
> definable?
> 
> 

Roughly speaking yes, but some could argue that we might define knowledge 
without invoking truth, or less directly, so it is pleasant that people like 
Thomason, Artemov, and myself give direct proofs that anything obeying the S4 
axioms cannot be defined in Arithmetic or by a Turing machine, unless she bets 
on the “truth” of mechanism, to be sure.
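For reference, the knowledgeability conditions quoted above are the standard modal axioms T, 4, and K; together with the necessitation rule they axiomatize S4, reading the box as "knowledgeability":

```latex
\text{(T)}\;\; \Box A \to A, \qquad
\text{(4)}\;\; \Box A \to \Box\Box A, \qquad
\text{(K)}\;\; \Box(A \to B) \to (\Box A \to \Box B).
```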




> So the best we can do is to define a notion of belief (which abandons the 
> reflexion axiom belief(A) -> A). That makes belief definable 
> (in the language of the machine), and then we can apply the idea of 
> Theaetetus, and define knowledge (or knowledgeability, when we add the 
> transitivity axiom []p -> [][]p) by true belief.
> 
> The machine knows A when she believes A and A is true.
> 
> So is it more appropriate to equate consciousness with belief, rather than 
> with knowledge?

Consciousness requires some truth at some level. You can be dreaming and having 
false beliefs, but your consciousness will remain the “indubitable” fixed 
point, and will remain associated with truth.




> 
> It might be a true fact that "Machine X believes Y", without Y being true. Is 
> it simply the truth that "Machine X believes Y" that makes X conscious of 
> Y?

It is more the belief that the machine has a belief which remains true, even if 
the initial belief is false.



> 
> 
> 
> 
> 
> 
> 
>> 
>> Does the program behavior have to change based on the state of some 
>> information? For example:
>> 
>> if (pixel.red > 128) then {
>> // knows pixel.red is greater than 128
>> doX();
>> } else { 
>> // knows pixel.red <= 128
>> doY();
>> }
>> 
>> Or does the program have to possess some memory and enter a different state 
>> based on the state of the information it processed?
>> 
>> if (pixel.red > 128) then {
>> // knows pixel.red is greater than 128
>> enterStateX();
>> } else { 
>> // knows pixel.red <= 128
>> enterStateY();
>> }
>> 
>> Or is something else altogether needed to say the program knows?
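Resch's three fragments can be restated as one runnable Python sketch. The class and function names are mine; the three parts mirror his escalating variants: bare branching, branching that acts, and branching that changes persistent state:

```python
class Pixel:
    """Minimal stand-in for the pixel in the thread's fragments."""
    def __init__(self, red):
        self.red = red

def do_x(): pass          # placeholder actions, as in the original
def do_y(): pass

def branch_only(pixel):
    # Variant 1: control flow merely depends on the fact.
    return pixel.red > 128

def branch_and_act(pixel):
    # Variant 2: behavior changes based on the fact.
    if pixel.red > 128:
        do_x()
    else:
        do_y()

class StatefulProgram:
    # Variant 3: the program enters a different persistent state.
    def __init__(self):
        self.state = None
    def observe(self, pixel):
        self.state = "X" if pixel.red > 128 else "Y"
```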
> 
> You need self-reference ability for the notion of belief, together with a 
> notion of reality or truth, which the machine cannot define.
> 
> Can a machine believe "2+2=4" without having a reference to itself?

Not really, unless you accept the idea of unconscious belief, which makes sense 
in some psychological theory. 

My method consists in defining “the machine M believes P” by “the machine M 
asserts P”, and then I limit myself to machines which are correct by definition. 
This is of no use in psychology, but is enough to derive physics.



> What, programmatically, would you say is needed to program a machine that 
> believes "2+2=4" or to implement self-reference?

That it has enough induction axioms, like PA and ZF, but unlike RA (R and Q), 
or CL (combinatory logic without induction).
Universal machines without induction axioms are conscious, but are very 
limited in introspection power. They don’t have the rich theology of the 
machine having induction.
I recall that the induction axioms are all axioms having the shape [P(0) & (for 
all x P(x) -> P(x+1))] -> (for all x P(x)). It is an ability to build 
universals.
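The bracketed schema above is ordinary arithmetical induction. As a checkable restatement (here in Lean 4, my notation rather than Bruno's formalism):

```lean
-- [P(0) & (∀ x, P x → P (x+1))] → ∀ x, P x
example (P : Nat → Prop) (h0 : P 0) (hs : ∀ x, P x → P (x + 1)) :
    ∀ x, P x := by
  intro x
  induction x with
  | zero => exact h0
  | succ n ih => exact hs n ih
```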



> 
> Does a Turing machine evaluating "if (2+2 == 4) then" believe it?

If the machine can prove:
Beweisbar(x) -> Beweisbar(Beweisbar(x)), she can be said to be self-conscious. 
PA can, RA cannot.
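In the box notation used elsewhere in the thread, this condition on Beweisbar (provability) is the third Hilbert–Bernays–Löb derivability condition, i.e. modal axiom 4:

```latex
\vdash \mathrm{Bew}(\ulcorner\varphi\urcorner)
  \to \mathrm{Bew}\bigl(\ulcorner \mathrm{Bew}(\ulcorner\varphi\urcorner) \urcorner\bigr),
\qquad \text{in modal form: } \Box p \to \Box\Box p .
```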



> Or does it require theo

Re: A minimally conscious program

2021-05-06 Thread Bruno Marchal

> On 30 Apr 2021, at 20:47, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 4/30/2021 4:19 AM, Bruno Marchal wrote:
>>> If a program can be said to "know" something then can we also say it is 
>>> conscious of that thing?
> 
> That's not even common parlance.  Conscious thoughts are fleeting.  Knowledge 
> is in memory.  I know how to ride a bicycle because I do it unconsciously.  I 
> don't think consciousness can be understood except as a surface or boundary 
> of the subconscious and the unconscious (physics).

If you use physics, you have to explain what it is, and how that selects the 
computations in arithmetic, or you need to abandon mechanism. With mechanism, 
to claim that a machine's consciousness is not attributable to some universal 
machinery, despite the fact that machines do execute a computation, in the only 
mathematical sense discovered by Church and Turing (and some others), seems a bit magical.

Note that you don’t quote me, above. You should have quoted my answer. The 
beauty of Mechanism is that the oldest definition of (rational) knowledge 
(Theaetetus' true (justified) opinion) already explains why no machine can define 
its own knowledge, why consciousness seems necessarily mysterious, and why we 
get that persistent feeling that we belong to a physical reality, when in fact 
we are just infinitely many numbers involved in complex relations.

Bruno 



> 
> Brent
> 
>> 
>> 1) That’s *not* the case for []p & p, unless you accept a notion of 
>> unconscious knowledge, like knowing that Perseverance and Ingenuity are on 
>> Mars, but not being currently thinking about it, so that you are not right 
>> now consciously aware of the fact---well you are, but just because I have 
>> just reminded it :)
>> 
>> 2) But that *is* the case for []p & <>t & p. If the machine knows something 
>> in that sense, then the machine can be said to be conscious of p. 
>> Then to be “simply” conscious, becomes []t & <>t (& t). 
>> 
>> Note that “p” always refers to a partially computable arithmetical (or 
>> combinatorical) proposition. That’s the way of translating “Digital 
>> Mechanism” in the language of the machine.
>> 
>> To sum up, to get a conscious machine, you need a computer (aka universal 
>> number/machine) with some notion of belief, and knowledge/consciousness rise 
>> from the actuation of truth, that the machine cannot define (by the theorem 
>> of Tarski and some variant by Montague, Thomason, and myself...). 
>> 
>> That theory can be said to be a posteriori well tested, because it implies the 
>> quantum reality, at least the one described by the Schroedinger equation or 
>> Heisenberg matrix (or, even better, the Feynman integral), WITHOUT any 
>> collapse postulate.
>> 
>> Bruno
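Bruno's modal variants can be lined up in a toy finite model. Here correctness (asserting only truths) stands in for the consistency condition <>t, matching his restriction to correct machines; the truth set and all names are illustrative assumptions of mine:

```python
TRUTHS = {"2+2=4", "rovers are on Mars"}   # toy stand-in for arithmetical truth

class Machine:
    def __init__(self, asserted):
        self.asserted = set(asserted)      # "believes p" read as "asserts p"

    def believes(self, p):                 # []p
        return p in self.asserted

    def correct(self):                     # stands in for <>t (consistency)
        return self.asserted <= TRUTHS

    def knows(self, p):                    # []p & p  (Theaetetus: true belief)
        return self.believes(p) and p in TRUTHS

    def immediately_knows(self, p):        # []p & <>t & p
        return self.knows(p) and self.correct()
```

In this toy model, a machine that also asserts a falsehood still "knows" a true assertion in the []p & p sense, but loses the []p & <>t & p mode, illustrating why the extra conjunct matters.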
>> 
> 
> 



Re: A minimally conscious program

2021-05-03 Thread 'Brent Meeker' via Everything List
Combines panpsychism with integrated information theory and assumes 
causal powers are singular.  That's three strikes in the first five minutes.


Brent

On 4/27/2021 7:26 AM, Philip Thrift wrote:


A bit long, but this interview of the very lucid Hedda Mørch 
(pronounced "Mark") is very good (for consciousness "realists"):


https://www.youtube.com/watch?v=gilsMtCPHyw

via

https://twitter.com/onemorebrown/status/1386970910230523906





On Tuesday, April 27, 2021 at 8:38:32 AM UTC-5 Terren Suydam wrote:

On Tue, Apr 27, 2021 at 7:22 AM John Clark  wrote:

On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam
 wrote:

> consciousness is harder to work with than intelligence,
because it's harder to make progress.


It's not hard to make progress in consciousness research, it's
impossible.


So we should ignore experiments where you stimulate the brain and
the subject reports experiencing some kind of qualia, in a
repeatable way. Why doesn't that represent progress?  Is it
because you don't trust people's reports?


> Facts that might slay your theory are much harder to
come by.


Such facts are not hard to come by, they're impossible to come
by. So for a consciousness scientist being lazy works just as
well as being industrious, so consciousness research couldn't
be any easier: just face a wall, sit on your hands, and
contemplate your navel.


There are fruitful lines of research happening. Research on
patients undergoing meditation, and psychedelic experiences, while
in an fMRI has led to some interesting facts. You seem to think
progress can only mean being able to prove conclusively how
consciousness works. Progress can mean deepening our understanding
of the relationship between the brain and the mind.

Terren

John K Clark  See what's on my new list at Extropolis






Re: A minimally conscious program

2021-05-02 Thread 'Brent Meeker' via Everything List



On 5/2/2021 4:33 AM, Lawrence Crowell wrote:

On Monday, April 26, 2021 at 10:16:47 AM UTC-5 johnk...@gmail.com wrote:

On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam
 wrote:

> It's impossible to refute solipsism


True, but it's equally impossible to refute the idea that
everything including rocks is conscious. And if both a theory and
its exact opposite can neither be proven nor disproven then
neither speculation is of any value in trying to figure out how
the world works.


If everything is conscious, then how do we know? We have no 
unconscious objects to compare them with. The panpsychist argument 
becomes an ouroboros that consumes itself into a vacuous condition of 
either true or false.  Nothing can be demonstrated from it, so it is a 
scientifically worthless conjecture.


The best definition of consciousness is that it defines those annoying 
episodes between sleep.


LC


“A person's waking life is a dream modulated by the senses”
   ---  Rodolfo Llinas, on consciousness



Re: A minimally conscious program

2021-05-02 Thread 'Brent Meeker' via Everything List



On 5/2/2021 4:27 AM, Lawrence Crowell wrote:

On Monday, April 26, 2021 at 3:50:14 AM UTC-5 johnk...@gmail.com wrote:

On Sun, Apr 25, 2021 at 4:29 PM Jason Resch 
wrote:

/> It is quite easy, I think, to define a program that
"remembers" (stores and later retrieves) information./


I agree. And for an emotion like pain write a program such that
the closer the number in the X register comes to the integer P the
more computational resources will be devoted to changing that
number, and if it ever actually equals P then the program should
stop doing everything else and do nothing but try to change that
number to something far enough away from P until it's no longer an
urgent matter and the program can again do things that have
nothing to do with P.

Artificial Intelligence is hard, but Artificial Consciousness is easy.
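The quoted "pain" proposal can be written out as a toy scheduler. Everything concrete below — the value of P, the urgency function, the resource split — is an invented illustration of the described mechanism, not code from the thread:

```python
# A toy rendering of the "pain register" idea: the closer x gets to the
# integer P, the more effort goes to pushing x away; at x == P the
# program drops everything else. P and SAFE_DISTANCE are arbitrary.
P = 1000
SAFE_DISTANCE = 50

def urgency(x):
    """Fraction of effort devoted to avoiding P (1.0 = total panic)."""
    d = abs(x - P)
    if d == 0:
        return 1.0
    return min(1.0, SAFE_DISTANCE / d)

def step(x, other_work):
    """One scheduling step: split effort between avoidance and normal work."""
    u = urgency(x)
    if u >= 1.0:
        # "Pain": stop doing everything else and move x well away from P.
        x = P + SAFE_DISTANCE if x >= P else P - SAFE_DISTANCE
    else:
        other_work(1.0 - u)          # effort left for unrelated tasks
        if u > 0.5:
            x += 1 if x > P else -1  # gentle avoidance as P gets close
    return x
```

Whether such an interrupt that preempts every other goal amounts to an experience of pain is exactly what the rest of the thread disputes.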


This strikes me as totally wrong. We have what might be called AI, or 
at least now we have deep learning neural networks that are able to do 
some highly intelligent things. Even machines that can abstract known 
physics from a basic set of data, say learning the Copernican system 
from data on the appearance of planets in the sky, have been 
demonstrated. We may be near a time where the frontiers of physics 
will be pursued by AI systems, and we human physicists will do little 
but sit with slack jaw, maybe get high and wait for the mighty AI 
oracle to make a pronouncement. Yet I question whether such a deep 
learning AI system has any cognitive awareness of a physical world or 
anything else.


In order to be conscious such an AI needs some values and some way to 
act in its environment to realize them.  And then it would only have a 
kind of first order awareness.  To have human-like consciousness it 
would need to be able to plan its actions by using an internal 
simulation including itself to predict events.


Brent



Re: A minimally conscious program

2021-05-02 Thread John Clark
On Sun, May 2, 2021 at 7:28 AM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

>> Artificial Intelligence is hard but Artificial Consciousness is easy.
>>
>
> > This strikes me as totally wrong.
>

Why? Intelligence theories actually have to do something and they have
consequences, if your AI company uses the right intelligence idea you could
become a billionaire, but the wrong idea could cause bankruptcy; but
consciousness theories don't have to actually do anything so you can't pick
the wrong consciousness idea because one such theory works as well as
another. A consciousness theoretician has the easiest job in the world and
it has great job security because he will never be proven wrong.

*> We may be near a time where the frontiers of physics will be pursued by
> AI systems, and we human physicists will do little but sit with slack jaw,
> maybe get high and wait for the mighty AI oracle to make a pronouncement. *
>

I agree.

> Yet I question whether such a deep learning AI system has any cognitive
> awareness of a physical world or anything else.
>

Why? If a machine is as intelligent as a human, or even more so, then I
don't see why having a brain that is wet and squishy will be able to
produce consciousness but a brain that is even more intelligent but is dry
and hard would not be able to. I don't see how it would be possible to
avoid the conclusion that consciousness is the inevitable byproduct of
intelligence because otherwise Evolution could never have been able to
produce consciousness, and yet you know from direct experience that it
managed to do so at least once. Natural Selection can't select for
something it can't see and it can't see consciousness but it can see
intelligence, and you can't have intelligence without data processing,
therefore it must be a brute fact that consciousness is just the way data
feels when it is being processed.

John K Clark  See what's on my new list at Extropolis





Re: A minimally conscious program

2021-05-02 Thread Lawrence Crowell
On Monday, April 26, 2021 at 10:16:47 AM UTC-5 johnk...@gmail.com wrote:

> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam  
> wrote:
>
> > It's impossible to refute solipsism
>>
>
> True, but it's equally impossible to refute the idea that everything 
> including rocks is conscious. And if both a theory and its exact opposite 
> can neither be proven nor disproven then neither speculation is of any 
> value in trying to figure out how the world works.
>

If everything is conscious, then how do we know? We have no unconscious 
objects to compare them with. The panpsychist argument becomes an ouroboros 
that consumes itself into a vacuous condition of either true or false.  
Nothing can be demonstrated from it, so it is a scientifically worthless 
conjecture.

The best definition of consciousness is that it defines those annoying 
episodes between sleep.

LC 
 

>
> * > It's true that the only thing we know for sure is our own 
>> consciousness,*
>>
> And I know that even I am not conscious all the time, and there is no 
> reason for me to believe other people can do better. 
>  
>
>> * > but there's nothing about what I said that makes it impossible for 
>> there to be a reality outside of ourselves populated by other people. It 
>> just requires belief.*
>>
>
> And few if any believe other people are conscious all the time, only 
> during those times that correspond to the times they behave intelligently
> .  
>
> John K Clark  See what's on my new list at Extropolis
> 
>



Re: A minimally conscious program

2021-05-02 Thread Lawrence Crowell
On Monday, April 26, 2021 at 3:50:14 AM UTC-5 johnk...@gmail.com wrote:

> On Sun, Apr 25, 2021 at 4:29 PM Jason Resch  wrote:
>
> *> It is quite easy, I think, to define a program that "remembers" (stores 
>> and later retrieves) information.*
>>
>
> I agree. And for an emotion like pain write a program such that the 
> closer the number in the X register comes to the integer P the more 
> computational resources will be devoted to changing that number, and if it 
> ever actually equals P then the program should stop doing everything else 
> and do nothing but try to change that number to something far enough away 
> from P until it's no longer an urgent matter and the program can again do 
> things that have nothing to do with P.
>
> Artificial Intelligence is hard but Artificial Consciousness is easy.
>

This strikes me as totally wrong. We have what might be called AI, or at 
least now we have deep learning neural networks that are able to do some 
highly intelligent things. Even machines that can abstract known physics 
from a basic set of data, say learning the Copernican system from data on 
the appearance of planets in the sky, have been demonstrated. We may be 
near a time where the frontiers of physics will be pursued by AI systems, 
and we human physicists will do little but sit with slack jaw, maybe get 
high and wait for the mighty AI oracle to make a pronouncement. Yet I 
question whether such a deep learning AI system has any cognitive awareness 
of a physical world or anything else.

LC
 

> John K Clark  See what's on my new list at Extropolis
> 
>  
>



Re: A minimally conscious program

2021-05-01 Thread John Clark
On Fri, Apr 30, 2021 at 3:43 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

*> It [consciousness] could be the inevitable byproduct of the only path
> open to evolution.  Evolution has to always build on what has already been
> evolved.  So what was inevitable starting with ATP->ADP or RNA or DNA,
> might not be inevitable starting with silicon or gallium.*


I don't see how it could have anything to do with the particular elements
involved, however I agree evolution has serious flaws that closes off many
paths to intelligence, the most serious (but not the only) flaw is that
evolution doesn't understand the concept of two steps forward one step
back. A human designer could look at the design for a prop airplane engine
and decide it is insufficient and throw the design away and start over from
scratch and design a jet engine, but evolution could never do something
like that. Every major change that evolution makes to a species is the
result of thousands of tiny changes over millions of generations of
animals, and every one of those thousands of tiny changes must give an
immediate advantage to the animal. Evolution couldn't even fix a flat tire
by taking it off and putting on the spare because once you've removed the
flat you've temporarily made the situation even worse because now you have
no tire at all. Nevertheless despite these very serious flaws evolution
managed to produce an intelligence, and one that happened to be conscious
too. So it must be easier to make an intelligent conscious mind than an
intelligent unconscious mind, so logically your default assumption on
seeing an intelligent computer should be that it's conscious, the burden of
proof should be on proving that it was not.
John K Clark  See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-30 Thread 'Brent Meeker' via Everything List



On 4/30/2021 11:48 AM, John Clark wrote:
On Fri, Apr 30, 2021 at 2:22 PM 'Brent Meeker' via Everything List 
> wrote:


>> If somebody says "pick up that red object" we both know what
is expected of us even though we may have very very different
mental conceptions of the qualia "red" because we both agree
that the dictionary says red is the color formed in the mind
when light of a wavelength of 700 nanometers enters the eye,
and that object is reflecting light that is doing precisely
that to both of us.

/> But if the qualia of experiencing red is nothing more than the
neuronal structure and process that consistently associates 700nm
signals from the retina with the same actions as everyone else,
e.g. saying "red", stopping at the light, eating the fruit,...then
it seems to me it is perfectly justified to say people share the
same qualia.  That's the engineering stance. What the qualia really
is, is a pseudo-problem,/

I pretty much agree. You make a strong argument that you and I are 
experiencing the same qualia,  but I can make an equally strong 
argument that they can't be the same qualia because if they were then 
you and I would be the same person.


But they can be the same /kind/ of qualia, just as you being sad and me 
being sad are the same kind of feeling.  And we give them a name and 
recognize their commonality by the behavior related to them.  The error 
arises in trying to make more of them than a name for this relation, to 
try to make a qualia a kind of substance.


Brent


And that I think is a good indication that you're right, it is a 
pseudo-problem, meaning a question that will never have an answer or 
lead to anything productive.
John K Clark See what's on my new list at Extropolis 





Re: A minimally conscious program

2021-04-30 Thread John Clark
On Fri, Apr 30, 2021 at 3:43 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

*> electronics are so much faster than neurons, it might be possible to
> implement intelligent behavior just by creating giant hash tables of
> experience and using them as look-ups for responses.  I don't know that
> this is possible, but it's not obviously impossible and then it would be hard
> to say whether this form of AI had qualia or not.*
>

Yes, but no harder than for me to figure out if you experience qualia or
not. If something is able to give you correct answers to questions and not
give you any incorrect  answers I don't see why it should matter exactly
how it did it, the answer is still correct regardless of its methods.
John K Clark  See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-30 Thread 'Brent Meeker' via Everything List



On 4/30/2021 11:20 AM, John Clark wrote:
On Fri, Apr 30, 2021 at 12:56 PM Terren Suydam 
mailto:terren.suy...@gmail.com>> wrote:


/> I have arguments against your arguments,/


They say persistence is a virtue so I'll ask the same question for the 
fourth time;  given that evolution can't select for what it can't see 
and natural selection can see intelligent behavior but it can't see 
consciousness, can you give me an explanation of how evolution managed 
to produce a conscious being such as yourself if consciousness is not 
the inevitable byproduct of intelligence?


It could be the inevitable byproduct of the only path open to evolution.  
Evolution has to always build on what has already been evolved.  So what 
was inevitable starting with ATP->ADP or RNA or DNA, might not be 
inevitable starting with silicon or gallium.  For example, electronics 
are so much faster than neurons, it might be possible to implement 
intelligent behavior just by creating giant hash tables of experience 
and using them as look-ups for responses. I don't know that this is 
possible, but it's not obviously impossible and then it would be hard to 
say whether this form of AI had qualia or not...unless you accepted my 
engineering view on qualia.


Brent
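The hash-table idea above can be made concrete as a sketch. The situations and canned responses below are invented, and a real system would need some generalization (e.g. nearest-neighbor matching over features) to handle situations not already in the table:

```python
# Sketch of intelligence-as-lookup: a (giant) hash table mapping
# situations to precomputed responses. All entries are invented examples.
responses = {
    ("greeting", "morning"): "Good morning!",
    ("greeting", "evening"): "Good evening!",
    ("question", "weather"): "Looks like rain.",
}

def respond(situation):
    # Exact-match lookup only; speed is the whole point of the scheme,
    # since electronics can search such tables far faster than neurons.
    return responses.get(situation, "I don't understand.")
```

The open question in the thread is whether a system that behaves intelligently purely by lookup would have qualia at all.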



Re: A minimally conscious program

2021-04-30 Thread Jason Resch
On Fri, Apr 30, 2021, 6:19 AM Bruno Marchal  wrote:

> Hi Jason,
>
>
> On 25 Apr 2021, at 22:29, Jason Resch  wrote:
>
> It is quite easy, I think, to define a program that "remembers" (stores
> and later retrieves) information.
>
> It is slightly harder, but not altogether difficult, to write a program
> that "learns" (alters its behavior based on prior inputs).
>
> What though, is required to write a program that "knows" (has awareness or
> access to information or knowledge)?
>
> Does, for instance, the following program "know" anything about the data
> it is processing?
>
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> } else {
> // knows pixel.red <= 128
> }
>
> If not, what else is required for knowledge?
>
>
> Do you agree that knowledgeability obeys
>
>  knowledgeability(A) -> A
>  knowledgeability(A) ->  knowledgeability(knowledgeability(A))
>

Using the definition of knowledge as "true belief" I agree with this.
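The Theaetetus-style definition being agreed to here can be put in toy code, with invented propositions; "truth" is modeled as a set of facts the machine cannot inspect directly:

```python
# Knowledge as true belief: knows(p) holds iff p is believed AND true.
facts = {"2+2=4", "Mars has two moons"}      # what is actually the case
beliefs = {"2+2=4", "Mars has one moon"}     # the machine's belief set

def believes(p):
    return p in beliefs

def knows(p):
    # The reflexion axiom knowledgeability(A) -> A holds by
    # construction, since knows(p) requires p to be in facts.
    return believes(p) and p in facts
```

A false belief ("Mars has one moon") is never knowledge, and an unbelieved truth ("Mars has two moons") isn't knowledge either.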



> (And also, to limit ourselves to rational knowledge:
>
>  knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))
>
> From this, it can be proved that “ knowledgeability” of any “rich” machine
> (proving enough theorem of arithmetic) is not definable in the language of
> that machine, or in any language available to that machine.
>

Is this because the definition of knowledge includes truth, and truth is
not definable?


> So the best we can do is to define a notion of belief (which abandon the
> reflexion axiom: that we abandon belief(A) -> A. That makes Belief
> definable (in the language of the machine), and then we can apply the idea
> of Theatetus, and define knowledge (or knowledgeability, when we add the
> transitivity []p -> [][]p)  by true belief.
>
> The machine knows A when she believes A and A is true.
>

So is it more appropriate to equate consciousness with belief, rather than
with knowledge?

It might be a true fact that "Machine X believes Y", without Y being true.
Is it simply the truth that "Machine X believes Y" that makes X
consciousness of Y?



>
>
>
>
>
> Does the program behavior have to change based on the state of some
> information? For example:
>
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> doX();
> } else {
> // knows pixel.red <= 128
> doY():
> }
>
> Or does the program have to possess some memory and enter a different
> state based on the state of the information it processed?
>
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> enterStateX():
> } else {
> // knows pixel.red <= 128
> enterStateY();
> }
>
> Or is something else altogether needed to say the program knows?
>
>
> You need self-reference ability for the notion of belief, together with a
> notion of reality or truth, which the machine cannot define.
>

Can a machine believe "2+2=4" without having a reference to itself? What,
programmatically, would you say is needed to program a machine that
believes "2+2=4" or to implement self-reference?

Does a Turing machine evaluating "if (2+2 == 4) then" believe it? Or does
it require theorem proving software that reduces a statement to Peano
axioms or similar?


> To get immediate knowledgeability you need to add consistency ([]p & <>t),
> to get ([]p & <>t & p) which prevents transitivity, and gives to the
> machine a feeling of immediacy.
>

By consistency here do you mean the machine must never come to believe
something false, or that the machine itself must behave in a manner
consistent with its design/definition?

I still have a conceptual difficulty trying to marry these mathematical
notions of truth, provability, and consistency with a program/Machine that
manifests them.


>
>
> If a program can be said to "know" something then can we also say it is
> conscious of that thing?
>
>
> 1) That’s *not* the case for []p & p, unless you accept a notion of
> unconscious knowledge, like knowing that Perseverance and Ingenuity are on
> Mars, but not being currently thinking about it, so that you are not right
> now consciously aware of the fact---well you are, but just because I have
> just reminded it :)
>

In a way, I might view these long term memories as environmental signals
that encroach upon one's mind state. A state which is otherwise not
immediately aware of all the contents of this memory (like opening a sealed
box to discover it's content).


> 2) But that *is* the case for []p & <>t & p. If the machine knows
> something in that sense, then the machine can be said to be conscious of p.
> Then to be “simply” conscious, becomes []t & <>t (& t).
>
> Note that “p” always refers to a partially computable arithmetical (or
> combinatorical) proposition. That’s the way of translating “Digital
> Mechanism” in the language of the machine.
>
> To sum up, to get a conscious machine, you need a computer (aka universal
> number/machine) with some notion of belief, and knowledge/consciousness
> rise from the actuation of truth, that the machine cannot define (by the
> theorem of Tarski and some variant by Montague, Thomason, and myself...).

Re: A minimally conscious program

2021-04-30 Thread John Clark
On Fri, Apr 30, 2021 at 2:22 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>> If somebody says "pick up that red object" we both know what is expected
>> of us even though we may have very very different mental conceptions of the
>> qualia "red" because we both agree that the dictionary says red is the
>> color formed in the mind when light of a wavelength of 700 nanometers
>> enters the eye, and that object is reflecting light that is doing precisely
>> that to both of us.
>
>

* > But if the qualia of experiencing red is nothing more than the neuronal
> structure and process that consistently associates 700nm signals from the
> retina with the same actions as everyone else, e.g. saying "red", stopping
> at the light, eating the fruit,...then it seems to me it is perfectly
> justified to say people share the same qualia.  That's the engineering
> stance. What the qualia really is, is a pseudo-problem,*
>

I pretty much agree. You make a strong argument that you and I are
experiencing the same qualia,  but I can make an equally strong argument
that they can't be the same qualia because if they were then you and I
would be the same person. And that I think is a good indication that you're
right, it is a pseudo-problem, meaning a question that will never have an
answer or lead to anything productive.
John K Clark  See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-30 Thread 'Brent Meeker' via Everything List



On 4/30/2021 4:19 AM, Bruno Marchal wrote:
If a program can be said to "know" something then can we also say it 
is conscious of that thing?


That's not even common parlance.  Conscious thoughts are fleeting. 
Knowledge is in memory.  I know how to ride a bicycle /because/ I do it 
unconsciously.  I don't think consciousness can be understood except as 
a surface or boundary of the subconscious and the unconscious (physics).


Brent



1) That’s *not* the case for []p & p, unless you accept a notion of 
unconscious knowledge, like knowing that Perseverance and Ingenuity 
are on Mars, but not being currently thinking about it, so that you 
are not right now consciously aware of the fact---well you are, but 
just because I have just reminded it :)


2) But that *is* the case for []p & <>t & p. If the machine knows 
something in that sense, then the machine can be said to be conscious 
of p.

Then to be “simply” conscious, becomes []t & <>t (& t).

Note that “p” always refers to a partially computable arithmetical (or 
combinatorical) proposition. That’s the way of translating “Digital 
Mechanism” in the language of the machine.


To sum up, to get a conscious machine, you need a computer (aka 
universal number/machine) with some notion of belief, and 
knowledge/consciousness rise from the actuation of truth, that the 
machine cannot define (by the theorem of Tarski and some variant by 
Montague, Thomason, and myself...).


That theory can be said a posteriori well tested because it implied 
the quantum reality, at least the one described by the Schroedinger 
equation or Heisenberg matrix (or even better Feynman Integral), 
 WITHOUT any collapse postulate.


Bruno





Re: A minimally conscious program

2021-04-30 Thread 'Brent Meeker' via Everything List



On 4/30/2021 2:24 AM, John Clark wrote:
Nonsense. If somebody says "pick up that red object" we both know what 
is expected of us even though we may have very very different mental 
conceptions of the qualia "red" because we both agree that the 
dictionary says red is the color formed in the mind when light of a 
wavelength of 700 nanometers enters the eye, and that object is 
reflecting light that is doing precisely that to both of us.


But if the qualia of experiencing red is nothing more than the neuronal 
structure and process that consistently associates 700nm signals from 
the retina with the same actions as everyone else, e.g. saying "red", 
stopping at the light, eating the fruit,...then it seems to me it is 
perfectly justified to say people share the same qualia.  That's the 
engineering stance.  What the qualia /really/ is, is a pseudo-problem, 
like what the wave-function of an electron really is.  It's just the 
word for the fact that when you say, "Think of something red." I think 
of something reflecting 700mn photons.


Brent



Re: A minimally conscious program

2021-04-30 Thread John Clark
On Fri, Apr 30, 2021 at 12:56 PM Terren Suydam 
wrote:

*> I have arguments against your arguments,*
>

They say persistence is a virtue so I'll ask the same question for the
fourth time;  given that evolution can't select for what it can't see and
natural selection can see intelligent behavior but it can't see
consciousness, can you give me an explanation of how evolution managed to
produce a conscious being such as yourself if consciousness is not the
inevitable byproduct of intelligence?


> * > and anyone can see that.*
>

Anyone? As of today *no one* has seen your answer to that question.

> *But it doesn't go anywhere because you often remove my rebuttals from
> your response*
>

If there's one thing I hate about mailing lists of this sort is the endless
nested iterations of quotes of quotes of quotes of quotes of quotes of
quotes of quotes. I have done my very best to keep that to a minimum and
will continue to do so. And if anybody wants to see the entire directors
cut of your post it will only take them about 0.09 seconds to find it.


> these aren't personal insults,
>

 "*You are one of the least generous people I've ever argued with. You
intentionally obfuscate, attack straw men, selectively clip responses,
don't address challenging points, don't budge an inch, and just generally
take a disrespectful tone. And not just with me, here, but with others, no
matter the topic. I hope for your sake that's not how you present in real
life."*

*> I'm backing out,*
>

Suit yourself, but I'm not afraid to continue.
John K Clark  See what's on my new list at Extropolis



Re: A minimally conscious program

2021-04-30 Thread Terren Suydam
On Fri, Apr 30, 2021 at 11:09 AM John Clark  wrote:

> On Fri, Apr 30, 2021 at 9:53 AM Terren Suydam 
> wrote:
>
> *>>> All you've succeeded in doing is showing your preference for a
 particular theory *

>>>
>>> >> Correct. If idea X can explain something better than idea Y then I
>>> prefer idea X.
>>>
>>
>> *> What intention did you have that caused you to change "... a
>> particular theory of consciousness" to "a particular theory"?  *
>>
>
> My intention was the same as it always is when I trim verbiage in
> quotations, to delete the inessential; and if idea X can explain
> something better than idea Y then I prefer idea X regardless of what the
> topic is about.
>
>
> *> You are one of the least generous people I've ever argued with.*
>>
>
> If you make a good point I will admit it without hesitation, so now all
> you have to do is make one.
>
> * > You intentionally obfuscate, attack straw men, selectively clip
>> responses, don't address challenging points, don't budge an inch** and*
>> [blah blah]
>>
>
> So the only rebuttal you have to my logical arguments is a paragraph of
> personal insults.
>
> *> just generally take a disrespectful tone.*
>
>
> From this point onward I solemnly swear to give you all the respect you
> deserve.
>

I have arguments against your arguments, and anyone can see that. But it
doesn't go anywhere because you often remove my rebuttals from your
response and/or misrepresent or obfuscate my position - also evident in
this thread, as anyone can also see. So these aren't personal insults,
they're just observations anyone can verify. If it feels insulting, maybe
don't do those things.

I don't harbor any illusions that pointing this out will make any
difference to you. I'm just explaining why I'm backing out, and it isn't
because I've run out of things to say.

Terren


> John K Clark  See what's on my new list at Extropolis
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-fkkKyRzX7goKi7KxdWGFZjC6zH82isJ2bfgRe1iwz%3DQ%40mail.gmail.com.


Re: A minimally conscious program

2021-04-30 Thread John Clark
On Fri, Apr 30, 2021 at 9:53 AM Terren Suydam 
wrote:

*>>> All you've succeeded in doing is showing your preference for a
>>> particular theory *
>>>
>>
>> >> Correct. If idea X can explain something better than idea Y then I
>> prefer idea X.
>>
>
> *> What intention did you have that caused you to change "... a particular
> theory of consciousness" to "a particular theory"?  *
>

My intention was the same as it always is when I trim verbiage in quotations
, to delete the inessential; and if idea X can explain something better
than idea Y then I prefer idea X regardless of what the topic is about.


*> You are one of the least generous people I've ever argued with.*
>

If you make a good point I will admit it without hesitation, so now all you
have to do is make one.

* > You intentionally obfuscate, attack straw men, selectively clip
> responses, don't address challenging points, don't budge an inch** and*
> [blah blah]
>

So the only rebuttal you have to my logical arguments is a paragraph of
personal insults.

*> just generally take a disrespectful tone.*


From this point onward I solemnly swear to give you all the respect you
deserve.
John K Clark
See what's on my new list at Extropolis


>>



Re: A minimally conscious program

2021-04-30 Thread Terren Suydam
On Fri, Apr 30, 2021 at 5:24 AM John Clark  wrote:

> On Thu, Apr 29, 2021 at 3:10 PM Terren Suydam 
> wrote:
>
>  I proposed a question, "How is it possible that evolution managed to
> produce consciousness?" and I gave the only answer to that question I 
> could
> think of. And 3 times I've asked you if you can think of another answer.
> And three times I received nothing back but evasion. I now asked the same
> question for a fourth time, given that evolution can't select for what it
> can't see and natural selection can see intelligent behavior but it can't
> see consciousness, can you give me an explanation different from my own
> on how evolution managed to produce a conscious being such as yourself?
>

 *>>>No, I can't*.

>>>
>>> >>So I can explain something that you cannot. So which of our ideas are
>>> superior?
>>>
>>
>> *> All you've succeeded in doing is showing your preference for a
>> particular theory *
>>
>
> Correct. If idea X can explain something better than idea Y then I prefer idea
> X.
>

What intention did you have that caused you to change "... a particular
theory of consciousness" to "a particular theory"?  You clearly had a
purpose.


>
> >> If there is no link between consciousness and intelligence then there
>>> is absolutely positively no way Darwinian Evolution could have produced
>>> consciousness. But I don't think Darwin was wrong, I think you are.
>>>
>>
>> *> I'm neither claiming that evolution produced consciousness or that
>> Darwin was wrong.*
>>
>
> You're going to have to clarify that remark, it can't possibly be as nuts
> as it seems to be.
>

It is tiresome arguing with you. You are one of the least generous people
I've ever argued with. You intentionally obfuscate, attack straw men,
selectively clip responses, don't address challenging points, don't budge
an inch, and just generally take a disrespectful tone. And not just with
me, here, but with others, no matter the topic. I hope for your sake that's
not how you present in real life. It's not all bad, I appreciate having to
clarify my thoughts, and normally I love a good debate but I'm just being
masochistic if I continue at this point.

Terren

>
>
> >> I'm not talking about infinite precision, when it comes to qualia
>>> there is no assurance that we even approximately agree on meanings.
>>>
>>
>>
>> *> If that were true, language would be useless.*
>>
>
> Nonsense. If somebody says "pick up that red object" we both know what is
> expected of us even though we may have very very different mental
> conceptions of the qualia "red" because we both agree that the dictionary
> says red is the color formed in the mind when light of a wavelength of 700
> nanometers enters the eye, and that object is reflecting light that is
> doing precisely that to both of us.
>
>
>> >> When they say "that looks red" the red qualia they refer to may be
>>> your green qualia, and your green qualia could be their red qualia, but
>>> both of you still use the English word "red" to describe the qualia color
>>> of blood and the English word "green" to describe the qualia color of a
>>> leaf.
>>>
>>
>> *> I don't care about that. What matters is that you know you are seeing
>> red and I know I am seeing red.*
>>
>
> In other words you care more about behavior than consciousness because the
> use of the word "red" is consistent with both of us, as is our behavior,
> regardless of what our subjective impression of "red" is. So I guess
> you're starting to agree with me.
> John K Clark
> See what's on my new list at Extropolis
>
>
>



Re: A minimally conscious program

2021-04-30 Thread Bruno Marchal
Hi Jason,


> On 25 Apr 2021, at 22:29, Jason Resch  wrote:
> 
> It is quite easy, I think, to define a program that "remembers" (stores and 
> later retrieves) information.
> 
> It is slightly harder, but not altogether difficult, to write a program that 
> "learns" (alters its behavior based on prior inputs).
> 
> What though, is required to write a program that "knows" (has awareness or 
> access to information or knowledge)?
> 
> Does, for instance, the following program "know" anything about the data it 
> is processing?
> 
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> } else { 
> // knows pixel.red <= 128
> }
> 
> If not, what else is required for knowledge?

Do you agree that knowledgeability obeys

 knowledgeability(A) -> A
 knowledgeability(A) ->  knowledgeability(knowledgeability(A))

And also, to limit ourselves to rational knowledge:

 knowledgeability(A -> B) ->  (knowledgeability(A) ->  knowledgeability(B))

From this, it can be proved that "knowledgeability" of any "rich" machine 
(proving enough theorems of arithmetic) is not definable in the language of 
that machine, or in any language available to that machine.

So the best we can do is to define a notion of belief (which abandons the 
reflexion axiom belief(A) -> A). That makes belief definable (in the language 
of the machine), and then we can apply the idea of Theaetetus, and define 
knowledge (or knowledgeability, when we add the transitivity axiom []p -> 
[][]p) as true belief.

The machine knows A when she believes A and A is true.
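In standard modal notation, the above can be summarized as follows (a sketch only, with the S4-style axiom labels added here for orientation; Bruno writes [] for the machine's provability/belief operator and <> for consistency):

```latex
% The three quoted axioms are the T, 4 and K schemas of modal logic:
%   T:  K A \to A
%   4:  K A \to K K A
%   K:  K(A \to B) \to (K A \to K B)
% The quoted definitions, with \Box = provability (belief), \Diamond = consistency:
\begin{align*}
\text{belief:}             \quad & \Box p                              && \text{(definable in the machine's language)}\\
\text{knowledge:}          \quad & \Box p \land p                      && \text{(Theaetetus: true belief; not definable)}\\
\text{immediate knowledge:}\quad & \Box p \land \Diamond\top \land p   && \text{(adds the consistency condition } \Diamond\top\text{)}
\end{align*}
```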





> 
> Does the program behavior have to change based on the state of some 
> information? For example:
> 
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> doX();
> } else { 
> // knows pixel.red <= 128
> doY();
> }
> 
> Or does the program have to possess some memory and enter a different state 
> based on the state of the information it processed?
> 
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> enterStateX();
> } else { 
> // knows pixel.red <= 128
> enterStateY();
> }
> 
> Or is something else altogether needed to say the program knows?
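For concreteness, the three quoted variants (mere branching, behavior change, and state change) can be collapsed into one runnable sketch; names like PixelClassifier and the 128 threshold are illustrative only, not anyone's proposed theory:

```python
# Sketch of the quoted variants of "knowing": (a) control flow merely
# branches on the data, (b) behavior changes (doX/doY), and (c) the
# program enters a different internal state.
class PixelClassifier:
    def __init__(self):
        self.state = None          # variant (c): internal state

    def process(self, red):
        if red > 128:
            # here the program "knows" red > 128 only in the sense
            # that this branch of the code was reached
            self.state = "stateX"  # variant (c): enter a state
            return "doX"           # variant (b): behavior changes
        else:
            self.state = "stateY"
            return "doY"

classifier = PixelClassifier()
result = classifier.process(200)   # returns "doX"; state becomes "stateX"
```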

You need self-reference ability for the notion of belief, together with a 
notion of reality or truth, which the machine cannot define.

To get immediate knowledgeability you need to add consistency, to get 
([]p & <>t & p), which prevents transitivity and gives the machine a 
feeling of immediacy. 


> 
> If a program can be said to "know" something then can we also say it is 
> conscious of that thing?

1) That’s *not* the case for []p & p, unless you accept a notion of unconscious 
knowledge, like knowing that Perseverance and Ingenuity are on Mars while not 
currently thinking about it, so that you are not right now consciously 
aware of the fact---well, you are, but only because I have just reminded you :)

2) But that *is* the case for []p & <>t & p. If the machine knows something in 
that sense, then the machine can be said to be conscious of p. 
Then to be “simply” conscious, becomes []t & <>t (& t). 

Note that “p” always refers to a partially computable arithmetical (or 
combinatorical) proposition. That’s the way of translating “Digital Mechanism” 
in the language of the machine.

To sum up, to get a conscious machine, you need a computer (aka universal 
number/machine) with some notion of belief, and knowledge/consciousness arises 
from the actuation of truth, which the machine cannot define (by the theorem of 
Tarski and variants by Montague, Thomason, and myself...). 

That theory can be said, a posteriori, to be well tested, because it implies 
the quantum reality, at least the one described by the Schrödinger equation or 
the Heisenberg matrix (or, even better, the Feynman integral), WITHOUT any 
collapse postulate.

Bruno



> 
> Jason
> 



Re: A minimally conscious program

2021-04-30 Thread John Clark
On Thu, Apr 29, 2021 at 3:10 PM Terren Suydam 
wrote:

 I proposed a question, "How is it possible that evolution managed to
 produce consciousness?" and I gave the only answer to that question I could
 think of. And 3 times I've asked you if you can think of another answer.
 And three times I received nothing back but evasion. I now asked the same
 question for a fourth time, given that evolution can't select for what it
 can't see and natural selection can see intelligent behavior but it can't
 see consciousness, can you give me an explanation different from my own
 on how evolution managed to produce a conscious being such as yourself?

>>>
>>> *>>>No, I can't*.
>>>
>>
>> >>So I can explain something that you cannot. So which of our ideas are
>> superior?
>>
>
> *> All you've succeeded in doing is showing your preference for a
> particular theory *
>

Correct. If idea X can explain something better than idea Y then I prefer idea
X.

>> If there is no link between consciousness and intelligence then there is
>> absolutely positively no way Darwinian Evolution could have produced
>> consciousness. But I don't think Darwin was wrong, I think you are.
>>
>
> *> I'm neither claiming that evolution produced consciousness or that
> Darwin was wrong.*
>

You're going to have to clarify that remark, it can't possibly be as nuts
as it seems to be.

>> I'm not talking about infinite precision, when it comes to qualia there
>> is no assurance that we even approximately agree on meanings.
>>
>
>
> *> If that were true, language would be useless.*
>

Nonsense. If somebody says "pick up that red object" we both know what is
expected of us even though we may have very very different mental
conceptions of the qualia "red" because we both agree that the dictionary
says red is the color formed in the mind when light of a wavelength of 700
nanometers enters the eye, and that object is reflecting light that is
doing precisely that to both of us.


> >> When they say "that looks red" the red qualia they refer to may be
>> your green qualia, and your green qualia could be their red qualia, but
>> both of you still use the English word "red" to describe the qualia color
>> of blood and the English word "green" to describe the qualia color of a
>> leaf.
>>
>
> *> I don't care about that. What matters is that you know you are seeing
> red and I know I am seeing red.*
>

In other words you care more about behavior than consciousness because the
use of the word "red" is consistent with both of us, as is our behavior,
regardless of what our subjective impression of "red" is. So I guess you're
starting to agree with me.
John K Clark
See what's on my new list at Extropolis


>



Re: A minimally conscious program

2021-04-29 Thread Jason Resch
On Thu, Apr 29, 2021 at 1:04 PM John Clark  wrote:

>
> On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam 
> wrote:
>
> >> I proposed a question, "How is it possible that evolution managed to
>>> produce consciousness?" and I gave the only answer to that question I could
>>> think of. And 3 times I've asked you if you can think of another answer.
>>> And three times I received nothing back but evasion. I now asked the same
>>> question for a fourth time, given that evolution can't select for what it
>>> can't see and natural selection can see intelligent behavior but it can't
>>> see consciousness, can you give me an explanation different from my own
>>> on how evolution managed to produce a conscious being such as yourself?
>>>
>>
>> *>No, I can't*.
>>
>
> So I can explain something that you cannot. So which of our ideas are
> superior?
>
>
>> * > If you're saying evolution didn't select for consciousness, it
>> selected for intelligence, I agree with that. But so what?*
>>
>
> So what?!! If evolution selects for intelligence and you can't have
> intelligence without data processing and consciousness is the way data
> feels when it is being processed then it's no great mystery as to how
> evolution managed to produce consciousness by way of natural selection.
>

I would phrase this differently. I would say you cannot have intelligence
without knowledge, and you cannot have knowledge without consciousness.
Under this framing, you can have consciousness without intelligence (as
intelligence requires interaction with an environment in accordance with
achieving some goal). See the image below that I made to represent this
(agent-environment interaction model of intelligence)

[image: agent-environment-interaction.png]

So while intelligence requires actions, if you have someone with locked-in
syndrome, or someone dreaming, you could still say they are conscious,
despite not being "intelligent" since they are not performing any
intelligent behaviors.  Note that in this model, intelligent behavior
requires perceptions (consciousness).
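The agent-environment framing above can be sketched minimally in code (a hypothetical run_episode loop over a toy scalar environment; an illustration of the perceive-decide-act cycle, not a claim about any particular AI):

```python
# Minimal sketch of the agent-environment interaction model:
# intelligence = perceive -> decide -> act, in service of a goal.
# On this framing, perception (the agent's access to the world) is a
# prerequisite for intelligent behavior, but can occur without it.
def run_episode(environment, policy, steps=10):
    percepts, actions = [], []
    state = environment["initial"]
    for _ in range(steps):
        percept = state             # perception: agent's access to the world
        percepts.append(percept)
        action = policy(percept)    # decision in service of a goal
        actions.append(action)
        state = environment["transition"](state, action)
    return percepts, actions

# A trivial goal: drive the scalar state toward zero.
env = {"initial": 5, "transition": lambda s, a: s + a}
policy = lambda s: -1 if s > 0 else (1 if s < 0 else 0)
percepts, actions = run_episode(env, policy, steps=5)
```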

I think this theory of consciousness can explain more than making an
identity between intelligence and consciousness, as it can account for
consciousness in dreams, and it can also explain why evolution selected for
consciousness (perceptions of the environment are required for intelligent
behavior).


Jason



Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 2:04 PM John Clark  wrote:

>
> On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam 
> wrote:
>
> >> I proposed a question, "How is it possible that evolution managed to
>>> produce consciousness?" and I gave the only answer to that question I could
>>> think of. And 3 times I've asked you if you can think of another answer.
>>> And three times I received nothing back but evasion. I now asked the same
>>> question for a fourth time, given that evolution can't select for what it
>>> can't see and natural selection can see intelligent behavior but it can't
>>> see consciousness, can you give me an explanation different from my own
>>> on how evolution managed to produce a conscious being such as yourself?
>>>
>>
>> *>No, I can't*.
>>
>
> So I can explain something that you cannot. So which of our ideas are
> superior?
>

All you've succeeded in doing is showing your preference for a particular
theory of consciousness. It doesn't go very far, but you're pretty clear
that you're not interested in anything beyond that. But for those of us who
are interested in, say, an account of the difference between dreaming and
lucid dreaming, it's inadequate.


>
>
>> * > If you're saying evolution didn't select for consciousness, it
>> selected for intelligence, I agree with that. But so what?*
>>
>
> So what?!! If evolution selects for intelligence and you can't have
> intelligence without data processing and consciousness is the way data
> feels when it is being processed then it's no great mystery as to how
> evolution managed to produce consciousness by way of natural selection.
>

For what you, John Clark, require out of a theory of consciousness, you've
got one that works for you. Thumbs up. For me and others who enjoy the
mystery of it, it's not enough. You're entitled to think going further is a
waste of time. But after you've said that a hundred times, maybe we all get
the point and if you don't have anything new to contribute, it's time to
gracefully bow out.


>
> >>> OK, fine, let's say intelligence implies consciousness,

>>>
>>> >> If you grant me that then what are we arguing about?
>>>
>>
>> *> Over whether there are facts about consciousness, without having to
>> link it to intelligence.*
>>
>
> If there is no link between consciousness and intelligence then there is
> absolutely positively no way Darwinian Evolution could have produced
> consciousness. But I don't think Darwin was wrong, I think you are.
>

I'm neither claiming that evolution produced consciousness or that Darwin
was wrong.


>
>
>> >> Do we really agree on all those terms? How can we know words that
>>> refer to qualia mean the same thing to both of us? There is no objective
>>> test for it, if there was then qualia wouldn't be subjective, it would be
>>> objective.
>>>
>>
>> *> We don't need infinite precision to uncover useful facts. *
>>
>
> I'm not talking about infinite precision, when it comes to qualia there
> is no assurance that we even approximately agree on meanings.
>

If that were true, language would be useless.


>
> > If someone says "that hurts", or "that looks red", we know what they
>> mean.
>>
>
> Do you? When they say "that looks red" the red qualia they refer to may
> be your green qualia, and your green qualia could be their red qualia, but
> both of you still use the English word "red" to describe the qualia color
> of blood and the English word "green" to describe the qualia color of a
> leaf.
>

I don't care about that. What matters is that you know you are seeing red
and I know I am seeing red. There's just no point in comparing private
experiences, which is something I know we agree on. But that's not all
there is to a theory of consciousness.


>
>
>> * > We take it as an assumption, and we make it explicit, that when
>> someone says "I see red" they are having the same kind of, or similar
>> enough,*
>>
>
> That is one hell of an assumption! If you're willing to do that why not be
> done with it and just take it as an assumption that your consciousness
> theory, whatever it may be, is correct?
>

Is it? It's what we assume in every conversation we have.

Terren


> John K Clark
> See what's on my new list at Extropolis
>
>
>
>


Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 1:37 PM John Clark  wrote:

>
>
> On Thu, Apr 29, 2021 at 11:47 AM Terren Suydam 
> wrote:
>
>
> Finding a theory is not a problem, theories are a dime a dozen
>>> consciousness theories doubly so. But how could you ever figure out if
>>> your consciousness theory was correct?
>>>
>>
>> The same way we figure out any theory is correct.
>>
>
> In science we judge a theory by how well it can predict how something will
> behave, but you are not interested in behavior you're interested in
> consciousness, so I repeat how do you determine if  consciousness theory 
> #6,948,603,924
> is correct?
>
>
> >>If you're talking about observable characteristics then yes, but then
>>> you're just talking about behavior not consciousness.
>>>
>>
>>
>> *>Sure, but we might be talking about the behavior of neurons, or their
>> equivalent in an AI.*
>>
>
> The behavior of neurons is not consciousness.
>
>
> > *All of our disagreements come down to whether there are facts about
>> consciousness. You don't think there are,*
>>
>
> Not true, I know one thing from direct experience and that outranks even
> the scientific method, I know that I am conscious.
>

I have a limit of how many times I will go around this circle. Let's just
focus on whether we can make statements of fact about consciousness, which
is what the other email thread does.


> John K Clark
> See what's on my new list at Extropolis
>
>
>




Re: A minimally conscious program

2021-04-29 Thread 'Brent Meeker' via Everything List



On 4/29/2021 7:08 AM, John Clark wrote:
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam wrote:


/> A theory would give you a way to predict what kinds of beings
are capable of feeling pain/


Finding a theory is not a problem, theories are a dime a dozen, 
consciousness theories doubly so. But how could you ever figure out if 
your consciousness theory was correct?


> we'd say "given theory X,


And if the given X which we take as being true is "Hogwarts exists", 
then we must logically conclude we could find Harry Potter at that 
magical school of witchcraft and wizardry.


> /we know that if we create an AI with these characteristics,/


If you're talking about observable characteristics then yes, but then 
you're just talking about behavior not consciousness.


/> a theory of consciousness that explains how qualia come to be
within a system,/


Explains? Just what sort of theory would satisfy you and make you say 
the problem of consciousness has been solved? If I said the chemical 
RednosiumOxide produced qualia, would all your questions be answered, or 
would you be curious to know how this chemical managed to do that?


You don't even have to invent an example.  Panpsychism seems to be the 
latest fad in consciousness philosophy...everything is conscious, a 
little bit.


Brent


> /you could make claims about their experience that go beyond
> observing behavior./


Claims are even easier to come by than theories are, but true 
claims not so much.


John K Clark
See what's on my new list at Extropolis






Re: A minimally conscious program

2021-04-29 Thread 'Brent Meeker' via Everything List



On 4/29/2021 6:34 AM, Terren Suydam wrote:


On Thu, Apr 29, 2021 at 1:57 AM 'Brent Meeker' via Everything List 
> wrote:




On 4/28/2021 9:42 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything
List wrote:



On 4/28/2021 4:40 PM, Terren Suydam wrote:


I agree with everything you said there, but all you're
saying is that intersubjective reality must be consistent to
make sense of other peoples' utterances. OK, but if it
weren't, we wouldn't be here talking about anything. None of
this would be possible.


Which is why it's a fool's errand to say we need to explain
qualia.  If we can make an AI that responds to the world the way
we do, that's all there is to saying it has the same qualia.


I don't think either of those claims follows. We need to explain
suffering if we hope to make sense of how to treat AIs. If it
were only about redness I'd agree. But creating entities whose
existence is akin to being in hell is immoral. And we should know
if we're doing that.


John McCarthy wrote a paper in the '50s warning about the
possibility of accidentally making a conscious AI and unknowingly
treating it unethically.  But I don't see the difference from any
other qualia, we can only judge by behavior.  In fact this whole
thread started by JKC considering AI pain, which he defined in
terms of behavior.


A theory would give you a way to predict what kinds of beings are 
capable of feeling pain. We wouldn't have to wait to observe their 
behavior, we'd say "given theory X, we know that if we create an AI 
with these characteristics, it will be the kind of entity that is 
capable of suffering".


Right.  And the theory is that the AI is feeling pain if it is exerting all 
available effort to change its state.




To your second point, I think you're too quick to make an
equivalence between an AI's responses and their subjective
experience. You sound like John Clark - the only thing that
matters is behavior.


Behavior includes reports. What else would you suggest we go on?


Again, in a theory of consciousness that explains how qualia come to 
be within a system, you could make claims about their experience that 
go beyond observing behavior. I know John Clark's head just exploded, 
but it's the point of having a theory of consciousness.


Of course you can have such a theory.  But how can you have evidence for 
or against it is the question?  How can it be anything but a speculation?


Brent



Re: A minimally conscious program

2021-04-29 Thread John Clark
On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam 
wrote:

>> I proposed a question, "How is it possible that evolution managed to
>> produce consciousness?" and I gave the only answer to that question I could
>> think of. And 3 times I've asked you if you can think of another answer.
>> And three times I received nothing back but evasion. I now asked the same
>> question for a fourth time, given that evolution can't select for what it
>> can't see and natural selection can see intelligent behavior but it can't
>> see consciousness, can you give me an explanation different from my own o
>> n how evolution managed to produce a conscious being such as yourself?
>>
>
> *>No, I can't*.
>

So I can explain something that you cannot. So which of our ideas is
superior?


> * > If you're saying evolution didn't select for consciousness, it
> selected for intelligence, I agree with that. But so what?*
>

So what?!! If evolution selects for intelligence, and you can't have
intelligence without data processing, and consciousness is the way data
feels when it is being processed, then it's no great mystery as to how
evolution managed to produce consciousness by way of natural selection.

>>> OK, fine, let's say intelligence implies consciousness,
>>>
>>
>> >> If you grant me that then what are we arguing about?
>>
>
> *> Over whether there are facts about consciousness, without having to
> link it to intelligence.*
>

If there is no link between consciousness and intelligence then there is
absolutely positively no way Darwinian Evolution could have produced
consciousness. But I don't think Darwin was wrong, I think you are.


> >> Do we really agree on all those terms? How can we know words that
>> refer to qualia mean the same thing to both of us? There is no objective
>> test for it, if there was then qualia wouldn't be subjective, it would be
>> objective.
>>
>
> *> We don't need infinite precision to uncover useful facts. *
>

I'm not talking about infinite precision, when it comes to qualia there is
no assurance that we even approximately agree on meanings.

> If someone says "that hurts", or "that looks red", we know what they mean.
>

Do you? When they say "that looks red" the red qualia they refer to may be
your green qualia, and your green qualia could be their red qualia, but
both of you still use the English word "red" to describe the qualia color
of blood and the English word "green" to describe the qualia color of a
leaf.


> * > We take it as an assumption, and we make it explicit, that when
> someone says "I see red" they are having the same kind of, or similar
> enough,*
>

That is one hell of an assumption! If you're willing to do that why not be
done with it and just take it as an assumption that your consciousness
theory, whatever it may be, is correct?
John K Clark
See what's on my new list at Extropolis.





Re: A minimally conscious program

2021-04-29 Thread 'Brent Meeker' via Everything List



On 4/29/2021 1:30 AM, Telmo Menezes wrote:



Am Mi, 28. Apr 2021, um 20:51, schrieb Brent Meeker:



On 4/28/2021 9:54 AM, Telmo Menezes wrote:



Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything 
List:
It certainly seems likely that any brain or AI that can perceive 
sensory events and form an inner narrative and memory of that is 
conscious in a sense even if they are unable to act. This is 
commonly the situation during a dream.  One is aware of dreamt 
events but doesn't actually move in response to them.


And I think JKC is wrong when he says "few if any believe other 
people are conscious all the time, only during those times that 
corresponds to the times they behave intelligently."  I generally 
assume people are conscious if their eyes are open and they respond 
to stimuli, even if they are doing something dumb.


But I agree with his general point that consciousness is easy and 
intelligence is hard.


JFK insists on this point a lot, but I really do not understand how 
it matters. Maybe so, maybe if idealism or panpsychism are correct, 
consciousness is the easiest thing there is, from an engineering 
perspective. But what does the technical challenge have to do with 
searching for truth and understanding reality?


Reminds me of something I heard a meditation teacher say once. He 
said that for eastern people he has to say that "meditation is very 
hard, it takes a lifetime to master!". Generalizing a lot, eastern 
culture values the idea of mastering something that is very hard, it 
is thus a worthy goal. For westerners he says: "meditation is the 
easiest thing in the world". And thus it satisfies the (generalizing 
a lot) westerner's taste for a magic pill that immediately solves all 
problems.


I think you are falling for similar traps.


Which is what?


The trap of equating the perceived difficulty of a task with its 
merit. Are we after the truth, or are we after bragging rights?


The point of consciousness being "easy" is that theories of 
consciousness as a thing in itself are untestable so there is no way to 
say what is true.  Just because you place value on a task doesn't mean 
it's not imaginary.  There are people who think the whole purpose of 
life is to get to heaven.




I think you are falling into the trap of searching for the ding an 
sich.  Engineering is the measure of understanding.

That's JKC's point (JFK is dead),


My apologies to JKC for my dyslexia, it was not on purpose.

if your theory doesn't lead to engineering it's just philosophizing 
and that's easy.




Well, that is you philosophizing, isn't it? Saying that "engineering 
is the measure of understanding" is a philosophical position that you 
are not bothering to justify.


It's a philosophy of how we know when we understand something. What are 
your criteria?  What would constitute an understanding of qualia that 
would satisfy you?


If you propose a hypothesis, we can follow this hypothesis to its 
logical conclusions. So let us say that brain activity generates 
consciousness. The brain is a finite thing, so its state can be fully 
described by some finite configuration. Furthermore, this 
configuration can be replicated in time and space. So a consequence of 
claiming that the brain generates consciousness is that a conscious 
state cannot be constrained by time or space. If the exact 
configuration we are experiencing now is replicated 1 million years 
from now or in another galaxy, then it leads to the same exact first 
person experience and the instantiations cannot be distinguished. If 
you want pure physicalism then you have to add something more to your 
hypothesis.


But it couldn't lead to intelligent action.  So one could say it's not 
consciousness because it doesn't have the required relation to its 
environment/body.  Which was my point that physics is required. A 
disembodied brain is like the rock that calculates everything.  One 
might suppose a Boltzmann brain comes into existence, experiences an 
instant of being JKC before vanishing.  So what?  What conclusion do you 
draw from that?  That consciousness can't be a physical process?


Brent



Telmo



Brent





I think human consciousness, having an inner narrative,


This equivalence that you are smuggling in here is doing a lot of 
work... and it is the tricky part. "Inner narrative" in the sense of 
having a private simulation of external reality fits what you say 
below, but why are the lights on? I have no doubt that evolution can 
create the simulation, but what makes us live it in the first person?


Telmo

is just an evolutionary trick the brain developed for learning and 
accessing learned information to inform decisions.


Julian Jaynes wrote a book about how this may have come about, "The 
Origin of Consciousness in the Breakdown of the Bicameral Mind".  I 
don't know that he got it exactly right, but I think he was on to 
the right idea.


Brent


On 4/26/2021 4:07 PM, Terren Suydam wrote:
So

Re: A minimally conscious program

2021-04-29 Thread John Clark
On Thu, Apr 29, 2021 at 11:47 AM Terren Suydam 
wrote:


>> Finding a theory is not a problem, theories are a dime a dozen,
>> consciousness theories doubly so. But how could you ever figure out if
>> your consciousness theory was correct?
>>
>
> The same way we figure out any theory is correct.
>

In science we judge a theory by how well it can predict how something
will behave, but you are not interested in behavior, you're interested in
consciousness. So I repeat: how do you determine if consciousness theory
#6,948,603,924 is correct?


>>If you're talking about observable characteristics then yes, but then
>> you're just talking about behavior not consciousness.
>>
>
>
> *>Sure, but we might be talking about the behavior of neurons, or their
> equivalent in an AI.*
>

The behavior of neurons is not consciousness.


> *All of our disagreements come down to whether there are facts about
> consciousness. You don't think there are,*
>

Not true, I know one thing from direct experience, and that outranks even
the scientific method: I know that I am conscious.
John K Clark
See what's on my new list at Extropolis.



Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 10:38 AM John Clark  wrote:

> On Thu, Apr 29, 2021 at 9:48 AM Terren Suydam 
> wrote:
>
>
>> *>I think it's possible there was consciousness before there was
>> intelligence,*
>>
>
> I very much doubt it, but of course nobody will ever be able to prove or
> disprove it so the proposition fits in very nicely with all existing
> consciousness literature.
>

The point was that it's not necessarily true that consciousness is the
inevitable byproduct of intelligence.


>
>
>> *> you're implicitly working with a theory of consciousness. Then, you're
>> demanding that I use your theory of consciousness when you insist that I
>> answer questions about consciousness through the framing of evolution.*
>>
>
> I proposed a question, "How is it possible that evolution managed to
> produce consciousness?" and I gave the only answer to that question I could
> think of. And 3 times I've asked you if you can think of another answer.
> And three times I received nothing back but evasion. I now asked the same
> question for a fourth time, given that evolution can't select for what it
> can't see and natural selection can see intelligent behavior but it can't
> see consciousness, can you give me an explanation different from my own on
> how evolution managed to produce a conscious being such as yourself?
>

No, I can't. If you're saying evolution didn't select for consciousness, it
selected for intelligence, I agree with that. But so what?


>
>
>> *> >> do you agree that testimony of experience constitutes facts about
 consciousness?*

>>>
>>> >> Only if I first assume that intelligence implies consciousness,
>>> otherwise I'd have no way of knowing if the being giving the testimony
>>> about consciousness was itself conscious. And only if I am convinced
>>> that the being giving the testimony was as honest as he can be. And
>>> only if I feel confident we agree about the meaning of certain words, like
>>> "green" and "red" and "hot" and "cold" and you guessed it "consciousness".
>>>
>>
>> > OK, fine, let's say intelligence implies consciousness,
>>
>
> If you grant me that then what are we arguing about?
>

Over whether there are facts about consciousness, without having to link it
to intelligence.


>
> *>the account given was honest (as in, nobody witnessing the account would
>> have a credible reason to doubt it),*
>>
>
> The most successful lies are those in which the reason for the lying is
> not immediately obvious.
>

There's uncertainty with the behavior of single subatomic particles, but
when we observe the aggregate behavior of large numbers of them, we call
those statistical observations *facts*, and those observations are
repeatable. There's a value of N for which studying N humans in a
consciousness experiment puts the probability that they're all lying below
a certain threshold.
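Terren's threshold claim is simple arithmetic: if each of N independent subjects lies with probability p, the chance that all N are lying is p^N, which shrinks below any threshold for large enough N. A minimal sketch (the lying probability and threshold below are illustrative assumptions, not figures from the thread):

```python
import math

def subjects_needed(p_lie: float, threshold: float) -> int:
    """Smallest N such that p_lie**N < threshold, i.e. the number of
    independent subjects needed before "they are all lying" becomes
    less probable than the chosen threshold."""
    # p_lie**N < threshold  <=>  N > log(threshold) / log(p_lie)
    # (both logs are negative, so the inequality flips as shown)
    return math.floor(math.log(threshold) / math.log(p_lie)) + 1

# With an (assumed) 50% chance that each subject lies, 20 subjects
# suffice to push "everyone is lying" below one in a million.
n = subjects_needed(p_lie=0.5, threshold=1e-6)
```

The independence assumption is doing the real work here; correlated lying (a shared incentive to deceive) would weaken the bound.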


>
>
>> * > and we can agree on all those terms.*
>>
>
> Do we really agree on all those terms? How can we know words that refer
> to qualia mean the same thing to both of us? There is no objective test for
> it, if there was then qualia wouldn't be subjective, it would be
> objective.
>

We don't need infinite precision to uncover useful facts. If someone says
"that hurts", or "that looks red", we know what they mean. We take it as an
assumption, and we make it explicit, that when someone says "I see red"
they are having the same kind of, or similar enough, experience as someone
else who says "I see red".

There's no question that the type of evidence you get from first-person
reports is vulnerable to deception, biases, and uncertainty around
referents. But we live with this in every day life. It's not unreasonable
to systematize first-person reports and include that data as evidence for
theorizing, as long as those vulnerabilities are acknowledged. Like I've
said from the beginning, it may be the case that we'll never arrive at a
theory of consciousness that emerges as a clear winner. But I disagree with
you that it's not worth trying to find one, or that it's impossible to make
progress.

Terren


> John K Clark
> See what's on my new list at Extropolis.


Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 10:08 AM John Clark  wrote:

> On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam 
> wrote:
>
> *> A theory would give you a way to predict what kinds of beings are
>> capable of feeling pain*
>>
>
> Finding a theory is not a problem, theories are a dime a dozen
> consciousness theories doubly so. But how could you ever figure out if
> your consciousness theory was correct?
>

The same way we figure out any theory is correct. Does it have explanatory
power, does it make falsifiable predictions. We're still arguing over
whether there's such a thing as a fact about consciousness, but if we can
imagine a world where you grant that there are, that's the world in which
you can test theories of consciousness.


>
>  > we'd say "given theory X,
>>
>
> And if the given X which we take as being true is "Hogwarts exists" then we
> must logically conclude we could find Harry Potter at that magical school
> of witchcraft and wizardry.
>
> > *we know that if we create an AI with these characteristics,*
>>
>
> If you're talking about observable characteristics then yes, but then
> you're just talking about behavior not consciousness.
>

Sure, but we might be talking about the behavior of neurons, or their
equivalent in an AI.


>
> *> a theory of consciousness that explains how qualia come to be within a
>> system,*
>>
>
> Explains? Just what sort of theory would satisfy you and make you say the
> problem of consciousness has been solved? If I said the chemical Rednosium
> Oxide produced qualia would all your questions be answered or would you be
> curious to know how this chemical managed to do that?
>

All of our disagreements come down to whether there are facts about
consciousness. You don't think there are, and that's all the question above
is saying.


>
>
>> > *you could make claims about their experience that go beyond observing
>> behavior.*
>>
>
> Claims are even easier to come by than theories are, but true claims
> not so much.
>
> John K Clark
> See what's on my new list at Extropolis.



Re: A minimally conscious program

2021-04-29 Thread John Clark
On Thu, Apr 29, 2021 at 9:48 AM Terren Suydam 
wrote:


> *>I think it's possible there was consciousness before there was
> intelligence,*
>

I very much doubt it, but of course nobody will ever be able to prove or
disprove it so the proposition fits in very nicely with all existing
consciousness literature.


> *> you're implicitly working with a theory of consciousness. Then, you're
> demanding that I use your theory of consciousness when you insist that I
> answer questions about consciousness through the framing of evolution.*
>

I proposed a question, "How is it possible that evolution managed to
produce consciousness?" and I gave the only answer to that question I could
think of. And 3 times I've asked you if you can think of another answer.
And three times I received nothing back but evasion. I now asked the same
question for a fourth time, given that evolution can't select for what it
can't see and natural selection can see intelligent behavior but it can't
see consciousness, can you give me an explanation different from my own on
how evolution managed to produce a conscious being such as yourself?


> *> >> do you agree that testimony of experience constitutes facts about
>>> consciousness?*
>>>
>>
>> >> Only if I first assume that intelligence implies consciousness,
>> otherwise I'd have no way of knowing if the being giving the testimony
>> about consciousness was itself conscious. And only if I am convinced
>> that the being giving the testimony was as honest as he can be. And only
>> if I feel confident we agree about the meaning of certain words, like
>> "green" and "red" and "hot" and "cold" and you guessed it "consciousness".
>>
>
> > OK, fine, let's say intelligence implies consciousness,
>

If you grant me that then what are we arguing about?

*>the account given was honest (as in, nobody witnessing the account would
> have a credible reason to doubt it),*
>

The most successful lies are those in which the reason for the lying is not
immediately obvious.


> * > and we can agree on all those terms.*
>

Do we really agree on all those terms? How can we know words that refer to
qualia mean the same thing to both of us? There is no objective test for
it, if there was then qualia wouldn't be subjective, it would be
objective.
John K Clark
See what's on my new list at Extropolis.



Re: A minimally conscious program

2021-04-29 Thread John Clark
On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam 
wrote:

*> A theory would give you a way to predict what kinds of beings are
> capable of feeling pain*
>

Finding a theory is not a problem, theories are a dime a dozen,
consciousness theories doubly so. But how could you ever figure out if your
consciousness theory was correct?

 > we'd say "given theory X,
>

And if the given X which we take as being true is "Hogwarts exists" then we
must logically conclude we could find Harry Potter at that magical school
of witchcraft and wizardry.

> *we know that if we create an AI with these characteristics,*
>

If you're talking about observable characteristics then yes, but then
you're just talking about behavior not consciousness.

*> a theory of consciousness that explains how qualia come to be within a
> system,*
>

Explains? Just what sort of theory would satisfy you and make you say the
problem of consciousness has been solved? If I said the chemical Rednosium
Oxide produced qualia, would all your questions be answered, or would you be
curious to know how this chemical managed to do that?


> > *you could make claims about their experience that go beyond observing
> behavior.*
>

Claims are even easier to come by than theories are, but true claims not
so much.

John K Clark
See what's on my new list at Extropolis.



Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 5:13 AM John Clark  wrote:

> On Wed, Apr 28, 2021 at 6:18 PM Terren Suydam 
> wrote:
>
> >> If you believe in Darwinian evolution and if you believe you are
>>> conscious then given that evolution can't select for what it can't see
>>> and natural selection can see intelligent behavior but it can't see
>>> consciousness, can you give me an explanation of how evolution managed to
>>> produce a conscious being such as yourself if consciousness is not the
>>> inevitable byproduct of intelligence?
>>>
>>
>> *> It's not an inevitable byproduct of intelligence if consciousness is
>> an epiphenomenon. *
>>
>
> That remark makes no sense, and you never answered my question. If
> consciousness is an epiphenomenon, and from Evolution's point of view it
> certainly is, then the only way natural selection could've produced
> consciousness is if it's the inevitable byproduct of something else that is
> not an epiphenomenon, something like intelligence. And you know for a fact
> that Evolution has produced consciousness at least once and probably many
> billions of times.
>

I mostly agree, my only hang up is with the word 'inevitable'. I think it's
possible there was consciousness before there was intelligence, depending
on how you define intelligence.


>
>
>> *> As you like to say, consciousness may just be how data feels as it's
>> being processed. If so, that doesn't imply anything about intelligence per
>> se, beyond the minimum intelligence required to process data at all.*
>>
>
> For the purposes of this argument it's irrelevant if any sort of data
> processing can produce consciousness or if only the type that leads to
> intelligence can because evolution doesn't select for data processing it
> selects for intelligence, but you can't have intelligence without data
> processing.
>

You keep coming back to intelligence in a conversation about consciousness.
That's fine, but when you do you're implicitly working with a theory of
consciousness. Then, you're demanding that I use your theory of
consciousness when you insist that I answer questions about consciousness
through the framing of evolution. It's a bit of a contradiction to be using
a theory of consciousness to point out how pointless theories of
consciousness are.


>
> *> do you agree that testimony of experience constitutes facts about
>> consciousness?*
>>
>
> Only if I first assume that intelligence implies consciousness, otherwise
> I'd have no way of knowing if the being giving the testimony about
> consciousness was itself conscious. And only if I am convinced that the
> being giving the testimony was as honest as he can be. And only if I feel
> confident we agree about the meaning of certain words, like "green" and
> "red" and "hot" and "cold" and you guessed it "consciousness".
>

OK, fine, let's say intelligence implies consciousness, the account given
was honest (as in, nobody witnessing the account would have a credible
reason to doubt it), and we can agree on all those terms.

Then do you agree that said account constitutes facts about consciousness?

Terren


> John K Clark
> See what's on my new list at Extropolis.



Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 1:57 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 9:42 PM, Terren Suydam wrote:
>
>
>
> On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 4/28/2021 4:40 PM, Terren Suydam wrote:
>>
>>
>> I agree with everything you said there, but all you're saying is that
>> intersubjective reality must be consistent to make sense of other peoples'
>> utterances. OK, but if it weren't, we wouldn't be here talking about
>> anything. None of this would be possible.
>>
>>
>> Which is why it's a fool's errand to say we need to explain qualia.  If
>> we can make an AI that responds to world the way we to, that's all there is
>> to saying it has the same qualia.
>>
>
> I don't think either of those claims follows. We need to explain suffering
> if we hope to make sense of how to treat AIs. If it were only about redness
> I'd agree. But creating entities whose existence is akin to being in hell
> is immoral. And we should know if we're doing that.
>
>
> John McCarthy wrote a paper in the '50s warning about the possibility of
> accidentally making a conscious AI and unknowingly treating it
> unethically.  But I don't see the difference from any other qualia, we can
> only judge by behavior.  In fact this whole thread started by JKC
> considering AI pain, which he defined in terms of behavior.
>
>
A theory would give you a way to predict what kinds of beings are capable
of feeling pain. We wouldn't have to wait to observe their behavior, we'd
say "given theory X, we know that if we create an AI with these
characteristics, it will be the kind of entity that is capable of
suffering".


>
> To your second point, I think you're too quick to make an equivalence
> between an AI's responses and their subjective experience. You sound like
> John Clark - the only thing that matters is behavior.
>
>
> Behavior includes reports. What else would you suggest we go on?
>

Again, in a theory of consciousness that explains how qualia come to be
within a system, you could make claims about their experience that go
beyond observing behavior. I know John Clark's head just exploded, but it's
the point of having a theory of consciousness.

>
>
> Brent
>



Re: A minimally conscious program

2021-04-29 Thread PGC


On Thursday, April 29, 2021 at 10:31:05 AM UTC+2 telmo wrote:

>
>
> Am Mi, 28. Apr 2021, um 20:51, schrieb Brent Meeker:
>
>
>
> On 4/28/2021 9:54 AM, Telmo Menezes wrote:
>
>
>
> Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything List:
>
> It certainly seems likely that any brain or AI that can perceive sensory 
> events and form an inner narrative and memory of that is conscious in a 
> sense even if they are unable to act.  This is commonly the situation 
> during a dream.  One is aware of dreamt events but doesn't actually move in 
> response to them.
>
> And I think JKC is wrong when he says "few if any believe other people 
> are conscious all the time, only during those times that corresponds to the 
> times they behave intelligently."  I generally assume people are conscious 
> if their eyes are open and they respond to stimuli, even if they are doing 
> something dumb. 
>
> But I agree with his general point that consciousness is easy and 
> intelligence is hard.
>
>
> JFK insists on this point a lot, but I really do not understand how it 
> matters. Maybe so, maybe if idealism or panpsychism are correct, 
> consciousness is the easiest thing there is, from an engineering 
> perspective. But what does the technical challenge have to do with 
> searching for truth and understanding reality?
>
> Reminds me of something I heard a meditation teacher say once. He said 
> that for eastern people he has to say that "meditation is very hard, it 
> takes a lifetime to master!". Generalizing a lot, eastern culture values 
> the idea of mastering something that is very hard, it is thus a worthy 
> goal. For westerners he says: "meditation is the easiest thing in the world". 
> And thus it satisfies the (generalizing a lot) westerner's taste for a magic 
> pill that immediately solves all problems.
>
> I think you are falling for similar traps.
>
>
> Which is what? 
>
>
> The trap of equating the perceived difficulty of a task with its merit. 
> Are we after the truth, or are we after bragging rights?
>

That ambiguity exists whenever people pair their genuine Christian names to 
a post. Anonymity can at times be a form of politeness in the 'you can't 
take my posts seriously' sense. Everybody uses their real names to convince 
others on the net... as if anybody on the net or social media ever said: 
"ah, thank you for convincing me to depart from my flawed points of view 
with the truth! Now I am less dumb."
 

>
> I think you are falling into the trap of searching for the ding an sich.  
> Engineering is the measure of understanding. 
> That's JKC's point (JFK is dead),
>
>
> My apologies to JKC for my dyslexia, it was not on purpose.
>
> if your theory doesn't lead to engineering it's just philosophizing and 
> that's easy.
>
>
> Well, that is you philosophizing, isn't it? Saying that "engineering is 
> the measure of understanding" is a philosophical position that you are not 
> bothering to justify.
>

So is saying practically anything.
 

>
> If you propose a hypothesis, we can follow this hypothesis to its logical 
> conclusions. So let us say that brain activity generates consciousness. The 
> brain is a finite thing, so its state can be fully described by some finite 
> configuration. Furthermore, this configuration can be replicated in time 
> and space. So a consequence of claiming that the brain generates 
> consciousness is that a conscious state cannot be constrained by time or 
> space. If the exact configuration we are experiencing now is replicated 1 
> million years from now or in another galaxy, then it leads to the same 
> exact first person experience and the instantiations cannot be 
> distinguished. If you want pure physicalism then you have to add something 
> more to your hypothesis.
>

How about the precision and effectiveness of that way of thinking as opposed 
to mathematical approaches? Physicists imho allow themselves a more relaxed 
attitude, where they may set aside concerns about whether the mathematical 
objects exist, or make the kinds of approximations that mathematicians would 
never permit themselves. It took some decades, for example until around 1950, 
for physicists to work out renormalization in quantum field theory, where in 
the perturbative expansion all terms of second order and above yield divergent 
integrals. As spectroscopy grew more precise, revealing the fine structure of 
atomic emission spectra etc., those physicists sought a way to pull a finite 
result from the divergent integrals. By restricting the domain of integration 
to energies of order mc² and through unjustified subtractions they obtained a 
finite result very close to the experimental one. 

Then Tomonaga, Dyson, Feynman etc. improved the technique until the degree 
of precision became satisfactory. Renormalization works, for calculation 
purposes, by changing the mass of the electron, replacing it by a quantity 
that depends on the relevant magnitude of energies, and yet dive

Re: A minimally conscious program

2021-04-29 Thread John Clark
On Wed, Apr 28, 2021 at 6:18 PM Terren Suydam 
wrote:

>> If you believe in Darwinian evolution and if you believe you are
>> conscious then given that evolution can't select for what it can't see
>> and natural selection can see intelligent behavior but it can't see
>> consciousness, can you give me an explanation of how evolution managed to
>> produce a conscious being such as yourself if consciousness is not the
>> inevitable byproduct of intelligence?
>>
>
> *> It's not an inevitable byproduct of intelligence if consciousness is an
> epiphenomenon. *
>

That remark makes no sense, and you never answered my question. If
consciousness is an epiphenomenon, and from Evolution's point of view it
certainly is, then the only way natural selection could've produced
consciousness is if it's the inevitable byproduct of something else that is
not an epiphenomenon, something like intelligence. And you know for a fact
that Evolution has produced consciousness at least once and probably many
billions of times.


> *> As you like to say, consciousness may just be how data feels as it's
> being processed. If so, that doesn't imply anything about intelligence per
> se, beyond the minimum intelligence required to process data at all.*
>

For the purposes of this argument it's irrelevant if any sort of data
processing can produce consciousness or if only the type that leads to
intelligence can because evolution doesn't select for data processing it
selects for intelligence, but you can't have intelligence without data
processing.

*> do you agree that testimony of experience constitutes facts about
> consciousness?*
>

Only if I first assume that intelligence implies consciousness, otherwise
I'd have no way of knowing if the being giving the testimony about
consciousness was itself conscious. And only if I am convinced that the
being giving the testimony was as honest as he can be. And only if I feel
confident we agree about the meaning of certain words, like "green" and
"red" and "hot" and "cold" and you guessed it "consciousness".

John K Clark. See what's on my new list at Extropolis.

>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv1e47JG3rfVnTCp6KxLRFnqmZRKQHNcNKVhrNBRaEkk5A%40mail.gmail.com.


Re: A minimally conscious program

2021-04-29 Thread Telmo Menezes


Am Mi, 28. Apr 2021, um 20:51, schrieb Brent Meeker:
> 
> 
> On 4/28/2021 9:54 AM, Telmo Menezes wrote:
>> 
>> 
>> Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything List:
>>> It certainly seems likely that any brain or AI that can perceive sensory 
>>> events and form an inner narrative and memory of that is conscious in a 
>>> sense even if they are unable to act.  This is commonly the situation 
>>> during a dream.  One is aware of dreamt events but doesn't actually move in 
>>> response to them.
>>> 
>>> And I think JKC is wrong when he says "few if any believe other people are 
>>> conscious all the time, only during those times that corresponds to the 
>>> times they behave intelligently."  I generally assume people are conscious 
>>> if their eyes are open and they respond to stimuli, even if they are doing 
>>> something dumb. 
>>> 
>>> But I agree with his general point that consciousness is easy and 
>>> intelligence is hard.
>> 
>> JFK insists on this point a lot, but I really do not understand how it 
>> matters. Maybe so, maybe if idealism or panpsychism are correct, 
>> consciousness is the easiest thing there is, from an engineering 
>> perspective. But what does the technical challenge have to do with searching 
>> for truth and understanding reality?
>> 
>> Reminds me of something I heard a meditation teacher say once. He said that 
>> for eastern people he has to say that "meditation is very hard, it takes a 
>> lifetime to master!". Generalizing a lot, eastern culture values the idea of 
>> mastering something that is very hard, it is thus a worthy goal. For 
>> westerners he says: "meditation is the easiest thing in the world". And thus 
>> it satisfies the (generalizing a lot) Western taste for a magic pill that 
>> immediately solves all problems.
>> 
>> I think you are falling for similar traps.
> 
> Which is what? 

The trap of equating the perceived difficulty of a task with its merit. Are we 
after the truth, or are we after bragging rights?

> I think you are falling into the trap of searching for the ding an sich.  
> Engineering is the measure of understanding. 
> That's JKC's point (JFK is dead),

My apologies to JKC for my dyslexia, it was not on purpose.

> if your theory doesn't lead to engineering it's just philosophizing and 
> that's easy.
> 

Well, that is you philosophizing, isn't it? Saying that "engineering is the 
measure of understanding" is a philosophical position that you are not 
bothering to justify.

If you propose a hypothesis, we can follow this hypothesis to its logical 
conclusions. So let us say that brain activity generates consciousness. The 
brain is a finite thing, so its state can be fully described by some finite 
configuration. Furthermore, this configuration can be replicated in time and 
space. So a consequence of claiming that the brain generates consciousness is 
that a conscious state cannot be constrained by time or space. If the exact 
configuration we are experiencing now is replicated 1 million years from now or 
in another galaxy, then it leads to the same exact first person experience and 
the instantiations cannot be distinguished. If you want pure physicalism then 
you have to add something more to your hypothesis.

Telmo

> 
> Brent
> 
> 
>> 
>>> I think human consciousness, having an inner narrative,
>> 
>> This equivalence that you are smuggling in here is doing a lot of work... 
>> and it is the tricky part. "Inner narrative" in the sense of having a 
>> private simulation of external reality fits what you say below, but why are 
>> the lights on? I have no doubt that evolution can create the simulation, but 
>> what makes us live it in the first person?
>> 
>> Telmo
>> 
>>> is just an evolutionary trick the brain developed for learning and 
>>> accessing learned information to inform decisions. 
>>> 
>>> Julian Jaynes wrote a book about how this may have come about, "The Origin 
>>> of Consciousness in the Breakdown of the Bicameral Mind".  I don't know 
>>> that he got it exactly right, but I think he was on to the right idea.
>>> 
>>> Brent
>>> 
>>> 
>>> On 4/26/2021 4:07 PM, Terren Suydam wrote:
 So do you have nothing to say about coma patients who've later woken up 
 and said they were conscious?  Or people under general anaesthetic who 
 later report being gruesomely aware of the surgery they were getting?  
 Should we ignore those reports?  Or admit that consciousness is worth 
 considering independently from its effects on outward behavior?
 
 On Mon, Apr 26, 2021 at 11:16 AM John Clark  wrote:
> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam  
> wrote:
> 
>> > It's impossible to refute solipsism
> 
> True, but it's equally impossible to refute the idea that everything 
> including rocks is conscious. And if both a theory and its exact opposite 
> can neither be proven nor disproven then neither speculation is of any 
> value in trying to 

Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List



On 4/28/2021 9:42 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List wrote:




On 4/28/2021 4:40 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:



On 4/28/2021 3:17 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnkcl...@gmail.com> wrote:

On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren.suy...@gmail.com> wrote:

/>>> testimony of experience constitutes
facts about consciousness./


>> Sure I agree, provided you first accept that
consciousness is the inevitable byproduct of
intelligence


/> I hope the irony is not lost on anyone that
you're insisting on your theory of consciousness to
make your case that theories of consciousness are a
waste of time./


If you believe in Darwinian evolution and if you believe
you are conscious then given that evolution can't select
for what it can't see and natural selection can see
intelligent behavior but it can't see consciousness, can
you give me an explanation of how evolution managed to
produce a conscious being such as yourself if
consciousness is not the inevitable byproduct of
intelligence?


It's not an inevitable byproduct of intelligence if
consciousness is an epiphenomenon. As you like to say,
consciousness may just be how data feels as it's being
processed. If so, that doesn't imply anything about
intelligence per se, beyond the minimum intelligence
required to process data at all... the simplest example
being a thermostat.

That said, do you agree that testimony of experience
constitutes facts about consciousness?


It wouldn't if it were just random, like plucking passages
out of novels.  We only take it as evidence of consciousness
because there are consistent patterns of correlation with
what each of us experiences.  If every time you pointed to a
flower you said "red", regardless of the flower's color, a
child would learn that "red" meant a flower and his reporting
when he saw red wouldn't be testimony to the experience of 
red.  So the usefulness of reports already depends on
physical patterns in the world. Something I've been telling
Bruno...physics is necessary to consciousness.

Brent


I agree with everything you said there, but all you're saying is
that intersubjective reality must be consistent to make sense of
other peoples' utterances. OK, but if it weren't, we wouldn't be
here talking about anything. None of this would be possible.


Which is why it's a fool's errand to say we need to explain
qualia.  If we can make an AI that responds to the world the way we
do, that's all there is to saying it has the same qualia.


I don't think either of those claims follows. We need to explain 
suffering if we hope to make sense of how to treat AIs. If it were 
only about redness I'd agree. But creating entities whose existence is 
akin to being in hell is immoral. And we should know if we're doing that.


John McCarthy wrote a paper in the '50s warning about the possibility of 
accidentally making a conscious AI and unknowingly treating it 
unethically.  But I don't see the difference from any other qualia, we 
can only judge by behavior.  In fact this whole thread started by JKC 
considering AI pain, which he defined in terms of behavior.




To your second point, I think you're too quick to make an equivalence 
between an AI's responses and their subjective experience. You sound 
like John Clark - the only thing that matters is behavior.


Behavior includes reports. What else would you suggest we go on?

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/577ce844-a528-4dcd-deab-3cf1e5e833e8%40verizon.net.


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 4:40 PM, Terren Suydam wrote:
>
>
>
> On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 4/28/2021 3:17 PM, Terren Suydam wrote:
>>
>>
>>
>> On Wed, Apr 28, 2021 at 5:51 PM John Clark  wrote:
>>
>>> On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
>>> wrote:
>>>
>>> *>>> testimony of experience constitutes facts about consciousness.*
>
>
> >> Sure I agree, provided you first accept that consciousness is the
> inevitable byproduct of intelligence
>

 *> I hope the irony is not lost on anyone that you're insisting on your
 theory of consciousness to make your case that theories of consciousness
 are a waste of time.*

>>>
>>> If you believe in Darwinian evolution and if you believe you are
>>> conscious then given that evolution can't select for what it can't see
>>> and natural selection can see intelligent behavior but it can't see
>>> consciousness, can you give me an explanation of how evolution managed to
> produce a conscious being such as yourself if consciousness is not the
>>> inevitable byproduct of intelligence?
>>>
>>
>> It's not an inevitable byproduct of intelligence if consciousness is an
>> epiphenomenon. As you like to say, consciousness may just be how data feels
>> as it's being processed. If so, that doesn't imply anything about
>> intelligence per se, beyond the minimum intelligence required to process
>> data at all... the simplest example being a thermostat.
>>
>> That said, do you agree that testimony of experience constitutes facts
>> about consciousness?
>>
>>
>> It wouldn't if it were just random, like plucking passages out of
>> novels.  We only take it as evidence of consciousness because there are
>> consistent patterns of correlation with what each of us experiences.  If
>> every time you pointed to a flower you said "red", regardless of the
>> flower's color, a child would learn that "red" meant a flower and his
>> reporting when he saw red wouldn't be testimony to the experience of  red.
>> So the usefulness of reports already depends on physical patterns in the
>> world.  Something I've been telling Bruno...physics is necessary to
>> consciousness.
>>
>> Brent
>>
>
> I agree with everything you said there, but all you're saying is that
> intersubjective reality must be consistent to make sense of other peoples'
> utterances. OK, but if it weren't, we wouldn't be here talking about
> anything. None of this would be possible.
>
>
> Which is why it's a fool's errand to say we need to explain qualia.  If we
> can make an AI that responds to the world the way we do, that's all there is to
> saying it has the same qualia.
>

I don't think either of those claims follows. We need to explain suffering
if we hope to make sense of how to treat AIs. If it were only about redness
I'd agree. But creating entities whose existence is akin to being in hell
is immoral. And we should know if we're doing that.

To your second point, I think you're too quick to make an equivalence
between an AI's responses and their subjective experience. You sound like
John Clark - the only thing that matters is behavior.

Terren


>
> Brent
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-ueOoJFyodF_7cpE6Ke_CEt14T7q7wbE-5RghrTPKgcA%40mail.gmail.com.


Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List



On 4/28/2021 4:40 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List wrote:




On 4/28/2021 3:17 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 5:51 PM John Clark <johnkcl...@gmail.com> wrote:

On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren.suy...@gmail.com> wrote:

/>>> testimony of experience constitutes facts
about consciousness./


>> Sure I agree, provided you first accept that
consciousness is the inevitable byproduct of
intelligence


/> I hope the irony is not lost on anyone that you're
insisting on your theory of consciousness to make your
case that theories of consciousness are a waste of time./


If you believe in Darwinian evolution and if you believe you
are conscious then given that evolution can't select for what
it can't see and natural selection can see intelligent
behavior but it can't see consciousness, can you give me an
explanation of how evolution managed to produce a conscious
being such as yourself if consciousness is not the inevitable
byproduct of intelligence?


It's not an inevitable byproduct of intelligence if consciousness
is an epiphenomenon. As you like to say, consciousness may just
be how data feels as it's being processed. If so, that doesn't
imply anything about intelligence per se, beyond the minimum
intelligence required to process data at all... the simplest
example being a thermostat.

That said, do you agree that testimony of experience constitutes
facts about consciousness?


It wouldn't if it were just random, like plucking passages out of
novels.  We only take it as evidence of consciousness because
there are consistent patterns of correlation with what each of us
experiences.  If every time you pointed to a flower you said
"red", regardless of the flower's color, a child would learn that
"red" meant a flower and his reporting when he saw red wouldn't be
testimony to the experience of  red.  So the usefulness of reports
already depends on physical patterns in the world.  Something I've
been telling Bruno...physics is necessary to consciousness.

Brent


I agree with everything you said there, but all you're saying is that 
intersubjective reality must be consistent to make sense of other 
peoples' utterances. OK, but if it weren't, we wouldn't be here 
talking about anything. None of this would be possible.


Which is why it's a fool's errand to say we need to explain qualia. If 
we can make an AI that responds to the world the way we do, that's all there 
is to saying it has the same qualia.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/bfe08930-bf9a-c88b-be8b-f621e5488c4f%40verizon.net.


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 3:17 PM, Terren Suydam wrote:
>
>
>
> On Wed, Apr 28, 2021 at 5:51 PM John Clark  wrote:
>
>> On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
>> wrote:
>>
>> *>>> testimony of experience constitutes facts about consciousness.*


 >> Sure I agree, provided you first accept that consciousness is the
 inevitable byproduct of intelligence

>>>
>>> *> I hope the irony is not lost on anyone that you're insisting on your
>>> theory of consciousness to make your case that theories of consciousness
>>> are a waste of time.*
>>>
>>
>> If you believe in Darwinian evolution and if you believe you are conscious
>> then given that evolution can't select for what it can't see and natural
>> selection can see intelligent behavior but it can't see consciousness, can
>> you give me an explanation of how evolution managed to produce a conscious
>> being such as yourself if consciousness is not the inevitable byproduct of
>> intelligence?
>>
>
> It's not an inevitable byproduct of intelligence if consciousness is an
> epiphenomenon. As you like to say, consciousness may just be how data feels
> as it's being processed. If so, that doesn't imply anything about
> intelligence per se, beyond the minimum intelligence required to process
> data at all... the simplest example being a thermostat.
>
> That said, do you agree that testimony of experience constitutes facts
> about consciousness?
>
>
> It wouldn't if it were just random, like plucking passages out of novels.
> We only take it as evidence of consciousness because there are consistent
> patterns of correlation with what each of us experiences.  If every time
> you pointed to a flower you said "red", regardless of the flower's color, a
> child would learn that "red" meant a flower and his reporting when he saw
> red wouldn't be testimony to the experience of  red.  So the usefulness of
> reports already depends on physical patterns in the world.  Something I've
> been telling Bruno...physics is necessary to consciousness.
>
> Brent
>

I agree with everything you said there, but all you're saying is that
intersubjective reality must be consistent to make sense of other peoples'
utterances. OK, but if it weren't, we wouldn't be here talking about
anything. None of this would be possible.

Terren

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-8FEgVqJhJ6tgxwRtaSVrvQsCAZ9gY-kYvGG5Q8bi0oA%40mail.gmail.com.


Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List



On 4/28/2021 3:17 PM, Terren Suydam wrote:



On Wed, Apr 28, 2021 at 5:51 PM John Clark wrote:


On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam <terren.suy...@gmail.com> wrote:

/>>> testimony of experience constitutes facts about
consciousness./


>> Sure I agree, provided you first accept that consciousness is
the inevitable byproduct of intelligence


/> I hope the irony is not lost on anyone that you're
insisting on your theory of consciousness to make your case
that theories of consciousness are a waste of time./


If you believe in Darwinian evolution and if you believe you are
conscious then given that evolution can't select for what it can't
see and natural selection can see intelligent behavior but it
can't see consciousness, can you give me an explanation of how
evolution managed to produce a conscious being such as yourself if
consciousness is not the inevitable byproduct of intelligence?


It's not an inevitable byproduct of intelligence if consciousness is 
an epiphenomenon. As you like to say, consciousness may just be how 
data feels as it's being processed. If so, that doesn't imply anything 
about intelligence per se, beyond the minimum intelligence required to 
process data at all... the simplest example being a thermostat.


That said, do you agree that testimony of experience constitutes facts 
about consciousness?


It wouldn't if it were just random, like plucking passages out of 
novels.  We only take it as evidence of consciousness because there are 
consistent patterns of correlation with what each of us experiences.  If 
every time you pointed to a flower you said "red", regardless of the 
flower's color, a child would learn that "red" meant a flower and his 
reporting when he saw red wouldn't be testimony to the experience of  
red.  So the usefulness of reports already depends on physical patterns 
in the world.  Something I've been telling Bruno...physics is necessary 
to consciousness.


Brent



Terren


John K Clark See what's on my new list at Extropolis







--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/b90b880a-fab3-baee-d395-cf4a3691ced8%40verizon.net.


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 5:51 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
> wrote:
>
> *>>> testimony of experience constitutes facts about consciousness.*
>>>
>>>
>>> >> Sure I agree, provided you first accept that consciousness is the
>>> inevitable byproduct of intelligence
>>>
>>
>> *> I hope the irony is not lost on anyone that you're insisting on your
>> theory of consciousness to make your case that theories of consciousness
>> are a waste of time.*
>>
>
> If you believe in Darwinian evolution and if you believe you are conscious
> then given that evolution can't select for what it can't see and natural
> selection can see intelligent behavior but it can't see consciousness, can
> you give me an explanation of how evolution managed to produce a conscious
> being such as yourself if consciousness is not the inevitable byproduct of
> intelligence?
>

It's not an inevitable byproduct of intelligence if consciousness is an
epiphenomenon. As you like to say, consciousness may just be how data feels
as it's being processed. If so, that doesn't imply anything about
intelligence per se, beyond the minimum intelligence required to process
data at all... the simplest example being a thermostat.

That said, do you agree that testimony of experience constitutes facts
about consciousness?

Terren


>
> John K Clark. See what's on my new list at Extropolis.
> 
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_VGG8qrMqnm-W-UGnPDL_4EdynRPuxnSWddz4OTrcm7g%40mail.gmail.com.


Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
wrote:

*>>> testimony of experience constitutes facts about consciousness.*
>>
>>
>> >> Sure I agree, provided you first accept that consciousness is the
>> inevitable byproduct of intelligence
>>
>
> *> I hope the irony is not lost on anyone that you're insisting on your
> theory of consciousness to make your case that theories of consciousness
> are a waste of time.*
>

If you believe in Darwinian evolution and if you believe you are conscious
then given that evolution can't select for what it can't see and natural
selection can see intelligent behavior but it can't see consciousness, can
you give me an explanation of how evolution managed to produce a conscious
being such as yourself if consciousness is not the inevitable byproduct of
intelligence?
John K Clark. See what's on my new list at Extropolis.





-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv3fvsASAZoMJ_WLCLYXTD0hDaszq-CDjjixLN1FSsiGvw%40mail.gmail.com.


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 4:08 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 3:50 PM Terren Suydam 
> wrote:
>
> *> testimony of experience constitutes facts about consciousness.*
>
>
> Sure I agree, provided you first accept that consciousness is the
> inevitable byproduct of intelligence
>

I hope the irony is not lost on anyone that you're insisting on your theory
of consciousness to make your case that theories of consciousness are a
waste of time.

I don't think it's necessary to accept that in order to make use of
testimony by a thousand different people in an experiment who all say:
"whatever you're doing, it's weird, I am smelling gasoline".


>
>
>> >> I am far more interested in understanding the mental activity of a
>>> person when he's awake than when he's asleep.
>>>
>>
>> *> We're talking about consciousness, not merely "mental activity". *
>>
>
> And as I mentioned in a previous post, if consciousness is NOT the
> inevitable byproduct of intelligence then when we're talking about
> consciousness we don't even know if we're talking about the same thing.
>

If you want to get pedantic you can say we don't know if we're talking
about the same thing even if we do accept consciousness as the inevitable
byproduct of intelligence. So that heuristic isn't helpful. Again, if
someone claims to be in pain, then that's a fact we can use, even if the
character of that pain isn't knowable publicly. Ditto for seeing red, or
any other claim about qualia.

Terren


> John K Clark    See what's on my new list at Extropolis
> 



Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List



On 4/28/2021 12:09 PM, Jason Resch wrote:



On Wed, Apr 28, 2021 at 2:02 PM 'Brent Meeker' via Everything List 
> wrote:




On 4/28/2021 11:39 AM, Terren Suydam wrote:
>
> I'm interested in a theory of consciousness that can tell me, among
> other things, how it is that we have conscious experiences when we
> dream. Don't you wonder about that?

Not especially.  It's certainly consistent with consciousness being a
brain process.  And it's consistent with Jeff Hawkins's theory that the
brain is continually trying to predict sensation and it is the predictions
that are endorsed by the most neurons that constitute conscious
thoughts.  In sleep, with little or no sensory input, the predictions
wander, depending mainly on memory for input.


There was a neurologist (I forgot who) who said "Waking life is a 
dream modulated by the senses."


Paul Churchland.  And I think it's a good observation.

Brent

In other words, the brain's main function is effectively that of a 
dreaming machine (to generate a picture of reality centered on a 
subject). Normally, when we are awake, this dream is synced up to 
mostly follow along with an external world, given data input from the 
senses. But when we sleep, the brain is free to make things up in ways 
not synced up to the external world through the senses.


I don't know how true this idea is, but it makes sense and sounds 
plausible. If it's true, we can expect any creature that dreams likely 
also experiences a picture of a reality centered on a subject.


Jason




Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 3:50 PM Terren Suydam 
wrote:

*> testimony of experience constitutes facts about consciousness.*


Sure I agree, provided you first accept that consciousness is the
inevitable byproduct of intelligence


> >> I am far more interested in understanding the mental activity of a
>> person when he's awake then when he's asleep.
>>
>
> *> We're talking about consciousness, not merely "mental activity". *
>

And as I mentioned in a previous post, if consciousness is NOT the
inevitable byproduct of intelligence then when we're talking about
consciousness we don't even know if we're talking about the same thing.

John K Clark    See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 3:15 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 2:39 PM Terren Suydam 
> wrote:
>
> >> Forget BF Skinner, this is more general than consciousness or
>>> behavior. If you want to explain Y at the most fundamental level from first
>>> principles you can't start with "X produces Y'' and then use X as part of
>>> your explanation of Y.
>>>
>>
>> *> OK, I want to explain consciousness from first principles, so Y =
>> consciousness. What is X?  *
>>
>
> Something that shows up on a brain scan machine according to you.
>

You're obfuscating. I was pretty clear that I was talking about peoples'
reports of their own subjective experience, but you clipped that out and
made it seem otherwise. Maybe you did that because your whole edifice
crumbles if you admit that testimony of experience constitutes facts about
consciousness.


>
> *> I'm interested in a theory of consciousness that can tell me, among
>> other things, how it is that we have conscious experiences when we dream.
>> Don't you wonder about that?*
>>
>
> I am far more interested in understanding the mental activity of a person when
> he's awake than when he's asleep.
>
>
We're talking about consciousness, not merely "mental activity".
Regardless, you have every right to be incurious about matters like these.
The mystery is why you involve yourself in conversations you have no
interest in.


> *> I'm very curious about how intelligence works too. *
>>
>
> Glad to hear it, but there's 10 times or 20 times more verbiage about
> consciousness than intelligence on this list.
>

Nobody's forcing you to read it.

Terren


>
> John K Clark    See what's on my new list at Extropolis
> 



Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 3:27 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> >> Consistency is not the same as identity. If what you and I mean by the
>> words "red" and "green" were inverted then both of us would still say
>> tomatoes are red and leaves are green, but those things would not look
>> subjectively the same to us.
>
>
> * > How do you know that? *
>

I don't know that you and I have opposite conceptions of red and green, and
the reason I don't know is that language would be consistent both with them
meaning the same thing and with their meanings having been inverted.

John K Clark

> If you can't know they're the same, you can't know whether they are
> different either.
>
> Notice that I referred to "reports".  You're worrying whether the qualia
> are the same...contrary to your own avowal that there's no there there.
>
> Brent
>
>
> John K Clark
>
>
>
>>
>> On 4/28/2021 9:06 AM, John Clark wrote:
>>
>> *> If we can explain why, for example, you see stars if you bash the back
>>> of your head,*
>>>
>>
>> It might be able to explain why I say "I see green stars" but that's not
>> what you're interested in, you want to know why I subjectively experience
>> the green qualia and if it's the same as your green qualia, but no theory
>> can even prove to you that I see any qualia at all.
>>
>>
>> No, but one can discover whether your reports of green qualia correspond
>> to something consistent in our shared world, as opposed say to your reports
>> of little green men when drunk.  That's why we have a word for qualia which
>> is different from illusion.
>>
>> Brent



Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List



On 4/28/2021 12:03 PM, John Clark wrote:



On Wed, Apr 28, 2021 at 2:41 PM 'Brent Meeker' via Everything List 
> wrote:


/> one can discover whether your reports of green qualia
correspond to something consistent in our shared world,/


Consistency is not the same as identity. If what you and I mean by the 
words "red" and "green" were inverted then both of us would still say 
tomatoes are red and leaves are green, but those things would not look 
subjectively the same to us.


How do you know that?  If you can't know they're the same, you can't 
know whether they are different either.


Notice that I referred to "reports".  You're worrying whether the qualia 
are the same...contrary to your own avowal that there's no there there.


Brent



John K Clark




On 4/28/2021 9:06 AM, John Clark wrote:


/> If we can explain why, for example, you see stars if you
bash the back of your head,/


It might be able to explain why I say "I see green stars" but
that's not what you're interested in, you want to know why I
subjectively experience the green qualia and if it's the same as
your green qualia, but no theory can even prove to you that I see
any qualia at all. 


No, but one can discover whether your reports of green qualia
correspond to something consistent in our shared world, as opposed
say to your reports of little green men when drunk.  That's why we
have a word for qualia which is different from illusion.

Brent




Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 2:39 PM Terren Suydam 
wrote:

>> Forget BF Skinner, this is more general than consciousness or behavior.
>> If you want to explain Y at the most fundamental level from first
>> principles you can't start with "X produces Y'' and then use X as part of
>> your explanation of Y.
>>
>
> *> OK, I want to explain consciousness from first principles, so Y =
> consciousness. What is X?  *
>

Something that shows up on a brain scan machine according to you.

*> I'm interested in a theory of consciousness that can tell me, among
> other things, how it is that we have conscious experiences when we dream.
> Don't you wonder about that?*
>

I am far more interested in understanding the mental activity of a person when
he's awake than when he's asleep.

*> I'm very curious about how intelligence works too. *
>

Glad to hear it, but there's 10 times or 20 times more verbiage about
consciousness than intelligence on this list.

John K Clark    See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-28 Thread Jason Resch
On Wed, Apr 28, 2021 at 2:02 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 11:39 AM, Terren Suydam wrote:
> >
> > I'm interested in a theory of consciousness that can tell me, among
> > other things, how it is that we have conscious experiences when we
> > dream. Don't you wonder about that?
>
> Not especially.  It's certainly consistent with consciousness being a
> brain process.  And it's consistent with Jeff Hawkins's theory that the
> brain is continually trying to predict sensation and it is the predictions
> that are endorsed by the most neurons that constitute conscious
> thoughts.  In sleep, with little or no sensory input, the predictions
> wander, depending mainly on memory for input.
>
>
There was a neurologist (I forgot who) who said "Waking life is a dream
modulated by the senses." In other words, the brain's main function is
effectively that of a dreaming machine (to generate a picture of reality
centered on a subject). Normally, when we are awake, this dream is synced
up to mostly follow along with an external world, given data input from the
senses. But when we sleep, the brain is free to make things up in ways not
synced up to the external world through the senses.

I don't know how true this idea is, but it makes sense and sounds
plausible. If it's true, we can expect any creature that dreams likely also
experiences a picture of a reality centered on a subject.

Jason



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 3:02 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 11:39 AM, Terren Suydam wrote:
> >
> > I'm interested in a theory of consciousness that can tell me, among
> > other things, how it is that we have conscious experiences when we
> > dream. Don't you wonder about that?
>
> Not especially.  It's certainly consistent with consciousness being a
> brain process.  And it's consistent with Jeff Hawkins's theory that the
> brain is continually trying to predict sensation and it is the predictions
> that are endorsed by the most neurons that constitute conscious
> thoughts.  In sleep, with little or no sensory input, the predictions
> wander, depending mainly on memory for input.
>
> Brent
>

What I read in that is that you don't wonder because you've got a workable
theory. This was intended for John Clark, who thinks theories of
consciousness are a waste of time.

Terren



Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 2:41 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

*> one can discover whether your reports of green qualia correspond to
> something consistent in our shared world,*


Consistency is not the same as identity. If what you and I mean by the
words "red" and "green" were inverted then both of us would still say
tomatoes are red and leaves are green, but those things would not look
subjectively the same to us.

John K Clark



>
> On 4/28/2021 9:06 AM, John Clark wrote:
>
> *> If we can explain why, for example, you see stars if you bash the back
>> of your head,*
>>
>
> It might be able to explain why I say "I see green stars" but that's not
> what you're interested in, you want to know why I subjectively experience
> the green qualia and if it's the same as your green qualia, but no theory
> can even prove to you that I see any qualia at all.
>
>
> No, but one can discover whether your reports of green qualia correspond
> to something consistent in our shared world, as opposed say to your reports
> of little green men when drunk.  That's why we have a word for qualia which
> is different from illusion.
>
> Brent
>



Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List




On 4/28/2021 11:39 AM, Terren Suydam wrote:


I'm interested in a theory of consciousness that can tell me, among 
other things, how it is that we have conscious experiences when we 
dream. Don't you wonder about that?


Not especially.  It's certainly consistent with consciousness being a 
brain process.  And it's consistent with Jeff Hawkins's theory that the 
brain is continually trying to predict sensation and it is the predictions 
that are endorsed by the most neurons that constitute conscious 
thoughts.  In sleep, with little or no sensory input, the predictions 
wander, depending mainly on memory for input.


Brent



Re: A minimally conscious program

2021-04-28 Thread 'Brent Meeker' via Everything List



On 4/28/2021 9:06 AM, John Clark wrote:


/> If we can explain why, for example, you see stars if you bash
the back of your head,/


It might be able to explain why I say "I see green stars" but that's 
not what you're interested in, you want to know why I subjectively 
experience the green qualia and if it's the same as your green qualia, 
but no theory can even prove to you that I see any qualia at all. 


No, but one can discover whether your reports of green qualia correspond 
to something consistent in our shared world, as opposed say to your 
reports of little green men when drunk.  That's why we have a word for 
qualia which is different from illusion.


Brent



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 12:06 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 11:17 AM Terren Suydam 
> wrote:
>
> >> We should always pay attention to all relevant *BEHAVIOR**,* including
>>> *BEHAVIOR* such as noises produced by the mouths of other people.
>>>
>>
>> *> Got it. Accounts of subjective experience are not the salient facts in
>> these experiments, it's the way they move their lips and tongue and pass
>> air through their vocal cords that matters. The rest of the world has moved
>> on from BF Skinner, but not you, apparently. *
>>
>
> Forget BF Skinner, this is more general than consciousness or behavior.
> If you want to explain Y at the most fundamental level from first
> principles you can't start with "X produces Y'' and then use X as part of
> your explanation of Y.
>

OK, I want to explain consciousness from first principles, so Y =
consciousness. What is X?  Testimony about subjective experience?  Nobody
is claiming that testimony about subjective experience produces
consciousness (X produces Y).


>
>
>> >>> *Why doesn't that represent progress?  *

>>>
>>> >> It may represent progress but not progress towards understanding
>>> consciousness.
>>>
>>
>> *> Why not?  Understanding how the brain maps or encodes different
>> subjective experiences *
>>
>
> Because understanding how the brain maps and encodes information will
> tell you lots about behavior and intelligence but absolutely nothing about
> consciousness.
>
> *> If we can explain why, for example, you see stars if you bash the back
>> of your head,*
>>
>
> It might be able to explain why I say "I see green stars" but that's not
> what you're interested in, you want to know why I subjectively experience
> the green qualia and if it's the same as your green qualia, but no theory
> can even prove to you that I see any qualia at all.
>

I think the question of whether my experience of green is the same as your
experience of green reflects confusion on behalf of the questioner. I'm not
interested in that.

I'm interested in a theory of consciousness that can tell me, among other
things, how it is that we have conscious experiences when we dream. Don't
you wonder about that?


> *> You make it sound as though there's nothing to be gleaned from
>> systematic investigation,*
>>
>
> It's impossible to systematically investigate everything, therefore a
> scientist needs to use judgment to determine what is worth his time and
> what is not. Every minute you spend on consciousness research is a minute
> you could've spent on researching something far far more productive, which
> would be pretty much anything. Consciousness research has made ZERO
> progress over the last thousand years and I have every reason to believe it
> will make twice as much during the next thousand.
>

You refuse to acknowledge that one can produce evidence of consciousness,
namely in the form of subjects testifying to their experience. It doesn't
matter to you, apparently, if someone reports being in extreme pain.


>
> *> the thing I understand the least is how incurious you are about it.*
>
>
> The thing I find puzzling is how incurious you and virtually all internet
> consciousness mavens are about how intelligence works. Figuring out
> intelligence is a solvable problem, but figuring out consciousness is not,
> probably because it's just a brute fact that consciousness is the way data
> feels when it is being processed. If so then there's nothing more that can
> be said about consciousness, however I am well aware that after all is said
> and done more is always said and done.
>
>
I'm very curious about how intelligence works too. You're making
assumptions about me that don't bear out... perhaps that's true of your
thinking in general. And I never claimed consciousness is a solvable
problem. But there are better theories of consciousness than others,
because there are facts about consciousness that beg explanation (e.g.
dreaming, lucid dreaming), and some theories have better explanations than
others. But like any other domain, if we can come up with a relatively
simple theory that explains a relatively large set of phenomena, then
that's a good contender. But you know this, you've just got some kind of
odd hang up about consciousness.

Terren


> John K Clark    See what's on my new list at Extropolis
> 
>


Re: A minimally conscious program

2021-04-28 Thread Telmo Menezes


Am Di, 27. Apr 2021, um 04:07, schrieb 'Brent Meeker' via Everything List:
> It certainly seems likely that any brain or AI that can perceive sensory 
> events and form an inner narrative and memory of that is conscious in a sense 
> even if they are unable to act.  This is commonly the situation during a 
> dream.  One is aware of dreamt events but doesn't actually move in response 
> to them.
> 
> And I think JKC is wrong when he says "few if any believe other people are 
> conscious all the time, only during those times that corresponds to the times 
> they behave intelligently."  I generally assume people are conscious if their 
> eyes are open and they respond to stimuli, even if they are doing something 
> dumb. 
> 
> But I agree with his general point that consciousness is easy and 
> intelligence is hard.

JKC insists on this point a lot, but I really do not understand how it matters. 
Maybe so. Maybe, if idealism or panpsychism is correct, consciousness is the 
easiest thing there is, from an engineering perspective. But what does the 
technical challenge have to do with searching for truth and understanding 
reality?

Reminds me of something I heard a meditation teacher say once. He said that for 
Eastern people he has to say "meditation is very hard, it takes a lifetime 
to master!". Generalizing a lot, Eastern culture values the idea of mastering 
something that is very hard; it is thus a worthy goal. For Westerners he says: 
"meditation is the easiest thing in the world". And thus it satisfies the 
(generalizing a lot) Westerner's taste for a magic pill that immediately solves 
all problems.

I think you are falling for similar traps.

> I think human consciousness, having an inner narrative,

This equivalence that you are smuggling in here is doing a lot of work... and 
it is the tricky part. "Inner narrative" in the sense of having a private 
simulation of external reality fits what you say below, but why are the lights 
on? I have no doubt that evolution can create the simulation, but what makes us 
live it in the first person?

Telmo

> is just an evolutionary trick the brain developed for learning and accessing 
> learned information to inform decisions. 
> 
> Julian Jaynes wrote a book about how this may have come about, "The Origin of 
> Consciousness in the Breakdown of the Bicameral Mind".  I don't know that he 
> got it exactly right, but I think he was on to the right idea.
> 
> Brent 
> 
> 
> On 4/26/2021 4:07 PM, Terren Suydam wrote:
>> So do you have nothing to say about coma patients who've later woken up and 
>> said they were conscious?  Or people under general anaesthetic who later 
>> report being gruesomely aware of the surgery they were getting?  Should we 
>> ignore those reports?  Or admit that consciousness is worth considering 
>> independently from its effects on outward behavior?
>> 
>> On Mon, Apr 26, 2021 at 11:16 AM John Clark  wrote:
>>> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam  
>>> wrote:
>>> 
 > It's impossible to refute solipsism
>>> 
>>> True, but it's equally impossible to refute the idea that everything 
>>> including rocks is conscious. And if both a theory and its exact opposite 
>>> can neither be proven nor disproven then neither speculation is of any 
>>> value in trying to figure out how the world works.
>>> 
 *> It's true that the only thing we know for sure is our own 
 consciousness,*
>>> And I know that even I am not conscious all the time, and there is no 
>>> reason for me to believe other people can do better. 
>>>  
 *> but there's nothing about what I said that makes it impossible for 
 there to be a reality outside of ourselves populated by other people. It 
 just requires belief.*
>>> 
>>> And few if any believe other people are conscious all the time, only during 
>>> those times that corresponds to the times they behave intelligently.  
>>> 
>>> John K Clark    See what's on my new list at Extropolis
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to everything-list+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/everything-list/CAJPayv3NKKuSpfc0%3DemkA75U4rvEmS%2B_bBWtM%3D_Xhc5XnWOr0g%40mail.gmail.com
>>>  
>>> .

Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 11:17 AM Terren Suydam 
wrote:

>> We should always pay attention to all relevant *BEHAVIOR**,* including
>> *BEHAVIOR* such as noises produced by the mouths of other people.
>>
>
> *> Got it. Accounts of subjective experience are not the salient facts in
> these experiments, it's the way they move their lips and tongue and pass
> air through their vocal cords that matters. The rest of the world has moved
> on from BF Skinner, but not you, apparently. *
>

Forget BF Skinner, this is more general than consciousness or behavior. If
you want to explain Y at the most fundamental level from first principles
you can't start with "X produces Y" and then use X as part of your
explanation of Y.


> >>> *Why doesn't that represent progress?  *
>>>
>>
>> >> It may represent progress but not progress towards understanding
>> consciousness.
>>
>
> *> Why not?  Understanding how the brain maps or encodes different
> subjective experiences *
>

Because understanding how the brain maps and encodes information will tell
you lots about behavior and intelligence but absolutely nothing about
consciousness.

*> If we can explain why, for example, you see stars if you bash the back
> of your head,*
>

It might be able to explain why I say "I see green stars" but that's not
what you're interested in, you want to know why I subjectively experience
the green qualia and if it's the same as your green qualia, but no theory
can even prove to you that I see any qualia at all.

*> You make it sound as though there's nothing to be gleaned from
> systematic investigation,*
>

It's impossible to systematically investigate everything, therefore a
scientist needs to use judgment to determine what is worth his time and
what is not. Every minute you spend on consciousness research is a minute
you could've spent on researching something far far more productive, which
would be pretty much anything. Consciousness research has made ZERO
progress over the last thousand years and I have every reason to believe it
will make twice as much during the next thousand.

*> the thing I understand the least is how incurious you are about it.*


The thing I find puzzling is how incurious you and virtually all internet
consciousness mavens are about how intelligence works. Figuring out
intelligence is a solvable problem, but figuring out consciousness is not,
probably because it's just a brute fact that consciousness is the way data
feels when it is being processed. If so then there's nothing more that can
be said about consciousness; however, I am well aware that after all is said
and done, more is always said and done.

John K Clark    See what's on my new list at Extropolis



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 10:15 AM John Clark  wrote:

> On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam 
> wrote:
>
> *> John - do you have any response?*
>>
>
> If you insist.
>
> >> It's not hard to make progress in consciousness research, it's
 impossible.

>>>
>>> *So we should ignore experiments where you stimulate the brain and the
>>> subject reports experiencing some kind of qualia,*
>>>
>>
> We should always pay attention to all relevant *BEHAVIOR**,* including
> *BEHAVIOR* such as noises produced by the mouths of other people.
>

Got it. Accounts of subjective experience are not the salient facts in
these experiments, it's the way they move their lips and tongue and pass
air through their vocal cords that matters. The rest of the world has moved
on from BF Skinner, but not you, apparently.


>
>
>> >*Why doesn't that represent progress?  *
>>>
>>
> It may represent progress but not progress towards understanding
> consciousness.
>

Why not?  Understanding how the brain maps or encodes different subjective
experiences surely counts as progress towards understanding consciousness.
If we can explain why, for example, you see stars if you bash the back of
your head, but not the front, then that would count as progress towards
understanding consciousness.


>
>  > *Is it because you don't trust people's reports?*
>
>
> Trust but verify. When you and I talk about consciousness I don't even
> know if we're talking about the same thing; perhaps by your meaning of the
> word I am not conscious, maybe I'm conscious by my meaning of the word but
> not by yours, maybe my consciousness is just a pale pitiful thing compared
> to the grand glorious awareness that you have and what you mean by  the
> word "consciousness".  Maybe comparing your consciousness to mine is like
> comparing a firefly to a supernova. Or maybe it's the other way around.
> Neither of us will ever know.
>

You make it sound as though there's nothing to be gleaned from systematic
investigation, and the thing I understand the least is how incurious you
are about it. I mean, to each their own, but trying to grasp how objective
systems (like brains) and consciousness interrelate is perhaps the most
fascinating thing I can think of. The mystery of it is incredible to behold
when you really get into it.

Terren



Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 10:37 AM Jason Resch  wrote:

*> But this doesn't mean we can't develop theories of consciousness*


Truer words were never spoken! You can find 6.02 * 10^23  different
consciousness theories on the internet.

 > *and gather empirical evidence for them. *


How? What empirical evidence is there that one consciousness theory is
better than another?

*> If we simulate brains in computers or develop functional brain scanners
> that measure individual neurons, we can answer questions about what makes
> philosophers of mind talk about qualia, or pose or answer questions about
> consciousness. *



Studying a neuron with a brain scan or any other device can tell me how the
brain works and how behavior works and how intelligence works but can tell
me nothing about consciousness. It seems to me that if people have been
asking the same question for thousands of years but have not come one
nanometer closer to a solution then it may be time to consider the
possibility that the wrong question is being asked.

John K Clark    See what's on my new list at Extropolis



Re: A minimally conscious program

2021-04-28 Thread Jason Resch
On Wed, Apr 28, 2021, 9:15 AM John Clark  wrote:

> On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam 
> wrote:
>
> *> John - do you have any response?*
>>
>
> If you insist.
>
> >> It's not hard to make progress in consciousness research, it's
 impossible.

>>>
>>> *So we should ignore experiments where you stimulate the brain and the
>>> subject reports experiencing some kind of qualia,*
>>>
>>
> We should always pay attention to all relevant *BEHAVIOR**,* including
> *BEHAVIOR* such as noises produced by the mouths of other people.
>
>
>> >*Why doesn't that represent progress?  *
>>>
>>
> It may represent progress but not progress towards understanding
> consciousness.
>
>  > *Is it because you don't trust people's reports?*
>
>
> Trust but verify. When you and I talk about consciousness I don't even
> know if we're talking about the same thing; perhaps by your meaning of the
> word I am not conscious, maybe I'm conscious by my meaning of the word but
> not by yours, maybe my consciousness is just a pale pitiful thing compared
> to the grand glorious awareness that you have and what you mean by  the
> word "consciousness".  Maybe comparing your consciousness to mine is like
> comparing a firefly to a supernova. Or maybe it's the other way around.
> Neither of us will ever know.
>
> * > in an FMRI has led to some interesting facts.*
>>>
>>
> An FMRI may help us understand how the brain works and perhaps even how
> intelligence works, but I think behavior will tell us twice as much as a
> squiggle on a FMRI graph can about consciousness,  and that would be
> exactly twice as much unless we make use of the unproven and unprovable
> axiom that intelligent behavior is a sign of consciousness.
>
>
>> *> You seem to think progress can only mean being able to prove
>>> conclusively how consciousness works.*
>>>
>>
> I don't demand that consciousness researchers do anything as ambitious as
> explaining how consciousness is produced, all I ask is a proof that I am
> not the only conscious entity in the universe. But they can't even do
> that and never will be able to.
>

Consciousness is not unique here.

Nothing can be proved without assuming some theory and working within it.

But how do we ever prove the theory itself is right? We can't.

Even mathematicians face this problem in trying to prove 2+2=4. Any such
proof will rely on a theory which itself cannot be proved.
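Jason's 2+2=4 example can be made concrete with a proof assistant: the proof itself is a single line, but it is only a proof relative to the system's own unproven foundations. A minimal illustrative sketch in Lean 4 (the choice of Lean, and this example, are mine, not from the thread):

```lean
-- Even this trivial theorem is proved only relative to Lean's type theory
-- and its definition of the natural numbers; those foundations are assumed,
-- not themselves proved. `rfl` succeeds because both sides of the equation
-- reduce, by the assumed definitions, to the same numeral.
example : 2 + 2 = 4 := rfl
```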

But this doesn't mean we can't develop theories of consciousness and gather
empirical evidence for them. If we simulate brains in computers or develop
functional brain scanners that measure individual neurons, we can answer
questions about what makes philosophers of mind talk about qualia, or pose
or answer questions about consciousness. Whatever it is that causes the
philosopher's brain to ask or talk about consciousness is consciousness.

Having a complete causal trace of the brain doing these behaviors will
finally allow us to make such an identification.

Jason



> John K Clark    See what's on my new list at Extropolis



Re: A minimally conscious program

2021-04-28 Thread John Clark
On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam 
wrote:

*> John - do you have any response?*
>

If you insist.

>> It's not hard to make progress in consciousness research, it's
>>> impossible.
>>>
>>
>> *So we should ignore experiments where you stimulate the brain and the
>> subject reports experiencing some kind of qualia,*
>>
>
We should always pay attention to all relevant *BEHAVIOR**,* including
*BEHAVIOR* such as noises produced by the mouths of other people.


> >*Why doesn't that represent progress?  *
>>
>
It may represent progress but not progress towards understanding
consciousness.

 > *Is it because you don't trust people's reports?*


Trust but verify. When you and I talk about consciousness I don't even know
if we're talking about the same thing; perhaps by your meaning of the word
I am not conscious, maybe I'm conscious by my meaning of the word but not
by yours, maybe my consciousness is just a pale pitiful thing compared to
the grand glorious awareness that you have and what you mean by  the word
"consciousness".  Maybe comparing your consciousness to mine is like
comparing a firefly to a supernova. Or maybe it's the other way around.
Neither of us will ever know.

* > in an FMRI has led to some interesting facts.*
>>
>
An FMRI may help us understand how the brain works and perhaps even how
intelligence works, but I think behavior will tell us twice as much as a
squiggle on a FMRI graph can about consciousness,  and that would be
exactly twice as much unless we make use of the unproven and unprovable
axiom that intelligent behavior is a sign of consciousness.


> *> You seem to think progress can only mean being able to prove
>> conclusively how consciousness works.*
>>
>
I don't demand that consciousness researchers do anything as ambitious as
explaining how consciousness is produced, all I ask is a proof that I am
not the only conscious entity in the universe. But they can't even do that
and never will be able to.

John K Clark    See what's on my new list at Extropolis



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
John - do you have any response?


On Tue, Apr 27, 2021 at 9:38 AM Terren Suydam 
wrote:

>
>
> On Tue, Apr 27, 2021 at 7:22 AM John Clark  wrote:
>
>> On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam 
>> wrote:
>>
>> *> consciousness is harder to work with than intelligence, because it's
>>> harder to make progress.*
>>
>>
>> It's not hard to make progress in consciousness research, it's
>> impossible.
>>
>
> So we should ignore experiments where you stimulate the brain and the
> subject reports experiencing some kind of qualia, in a repeatable way. Why
> doesn't that represent progress?  Is it because you don't trust people's
> reports?
>
>
>>
>> *> Facts that might slay your theory are much harder to come by.*
>>
>>
>> Such facts are not hard to come by. they're impossible to come by. So
>> for a consciousness scientist being lazy works just as well as being
>> industrious, so consciousness research couldn't be any easier, just face a
>> wall, sit on your hands, and contemplate your navel.
>>
>
> There are fruitful lines of research happening. Research on patients
> undergoing meditation, and psychedelic experiences, while in an FMRI has
> led to some interesting facts. You seem to think progress can only mean
> being able to prove conclusively how consciousness works. Progress can mean
> deepening our understanding of the relationship between the brain and the
> mind.
>
> Terren
>
>
>> John K Clark    See what's on my new list at Extropolis
>



Re: A minimally conscious program

2021-04-27 Thread Philip Thrift

A bit long, but this interview of the very lucid Hedda Mørch (pronounced 
"Mark") is very good (for consciousness "realists"):

https://www.youtube.com/watch?v=gilsMtCPHyw

via 

https://twitter.com/onemorebrown/status/1386970910230523906





On Tuesday, April 27, 2021 at 8:38:32 AM UTC-5 Terren Suydam wrote:

> On Tue, Apr 27, 2021 at 7:22 AM John Clark  wrote:
>
>> On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam  
>> wrote:
>>
>> *> consciousness is harder to work with than intelligence, because it's 
>>> harder to make progress.*
>>
>>
>> It's not hard to make progress in consciousness research, it's 
>> impossible.  
>>
>
> So we should ignore experiments where you stimulate the brain and the 
> subject reports experiencing some kind of qualia, in a repeatable way. Why 
> doesn't that represent progress?  Is it because you don't trust people's 
> reports?
>  
>
>>
>> *> Facts that might slay your theory are much harder to come by.*
>>
>>
>> Such facts are not hard to come by. they're impossible to come by. So 
>> for a consciousness scientist being lazy works just as well as being 
>> industrious, so consciousness research couldn't be any easier, just face a 
>> wall, sit on your hands, and contemplate your navel.
>>
>
> There are fruitful lines of research happening. Research on patients 
> undergoing meditation, and psychedelic experiences, while in an FMRI has 
> led to some interesting facts. You seem to think progress can only mean 
> being able to prove conclusively how consciousness works. Progress can mean 
> deepening our understanding of the relationship between the brain and the 
> mind.
>
> Terren
>  
>
>> John K Clark    See what's on my new list at Extropolis
>



Re: A minimally conscious program

2021-04-27 Thread Terren Suydam
On Tue, Apr 27, 2021 at 7:22 AM John Clark  wrote:

> On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam 
> wrote:
>
> *> consciousness is harder to work with than intelligence, because it's
>> harder to make progress.*
>
>
> It's not hard to make progress in consciousness research, it's impossible.
>
>

So we should ignore experiments where you stimulate the brain and the
subject reports experiencing some kind of qualia, in a repeatable way. Why
doesn't that represent progress?  Is it because you don't trust people's
reports?


>
> *> Facts that might slay your theory are much harder to come by.*
>
>
> Such facts are not hard to come by. they're impossible to come by. So for
> a consciousness scientist being lazy works just as well as being
> industrious, so consciousness research couldn't be any easier, just face a
> wall, sit on your hands, and contemplate your navel.
>

There are fruitful lines of research happening. Research on patients
undergoing meditation, and psychedelic experiences, while in an FMRI has
led to some interesting facts. You seem to think progress can only mean
being able to prove conclusively how consciousness works. Progress can mean
deepening our understanding of the relationship between the brain and the
mind.

Terren


> John K Clark    See what's on my new list at Extropolis
>



Re: A minimally conscious program

2021-04-27 Thread Telmo Menezes


Am Mo, 26. Apr 2021, um 17:16, schrieb John Clark:
> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam  
> wrote:
> 
>> > It's impossible to refute solipsism
> 
> True, but it's equally impossible to refute the idea that everything 
> including rocks is conscious. And if both a theory and its exact opposite can 
> neither be proven nor disproven then neither speculation is of any value in 
> trying to figure out how the world works.

When I was a little kid I would ask adults if rocks were conscious. They tried 
to train me to stop asking such questions, because they were worried about what 
other people would think. To this day, I never stopped asking these questions. 
I see three options here:

(1) They were correct to worry and I have a mental issue.

(2) I am really dumb and don't see something obvious.

(3) Beliefs surrounding consciousness are socially normative, and asking 
questions outside of such boundaries is taboo.

>> *> It's true that the only thing we know for sure is our own consciousness,*
> 
> And I know that even I am not conscious all the time, and there is no reason 
> for me to believe other people can do better. 
>  
>> *> but there's nothing about what I said that makes it impossible for there 
>> to be a reality outside of ourselves populated by other people. It just 
>> requires belief.*
> 
> And few if any believe other people are conscious all the time, only during 
> those times that correspond to the times they behave intelligently.
> 
> John K Clark    See what's on my new list at Extropolis
> 
> 




Re: A minimally conscious program

2021-04-27 Thread Terren Suydam
On Tue, Apr 27, 2021 at 2:27 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/26/2021 11:11 PM, Terren Suydam wrote:
>
>
> Sure - although it seems possible that there could be intelligences that
> are not conscious. We're pretty biased to think of intelligence as we have
> it - situated in a meat body, and driven by evolutionary programming in a
> social context. There may be forms of intelligence so alien we could never
> conceive of them, and there's no guarantee about consciousness.
>
>
> I don't see how an entity could be really intelligent without being able
> to consider its actions by a kind of internal simulation.
>

Neither do I, but it may be a failure of imagination. The book Blindsight
by Peter Watts explores this idea.


>
> Take corporations. A corporation is its own entity and it acts
> intelligently in the service of its own interests. They can certainly be
> said to "prospectively consider scenarios in which they are actors in which
> the scenario is informed by past experience". Is a corporation conscious?
>
>
> I think so.  And the Supreme Court agrees. :-)
>

How about cities? Countries? Religions?  Each of which can be said to
"prospectively consider scenarios..."

Terren


>
> Brent
>
>



Re: A minimally conscious program

2021-04-27 Thread John Clark
On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam 
wrote:

*> consciousness is harder to work with than intelligence, because it's
> harder to make progress.*


It's not hard to make progress in consciousness research, it's impossible.

*> Facts that might slay your theory are much harder to come by.*


Such facts are not hard to come by. they're impossible to come by. So for a
consciousness scientist being lazy works just as well as being industrious,
so consciousness research couldn't be any easier, just face a wall, sit on
your hands, and contemplate your navel.
John K Clark    See what's on my new list at Extropolis



Re: A minimally conscious program

2021-04-27 Thread John Clark
On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

* > I think JKC is wrong when he says "few if any believe other people are
> conscious all the time, only during those times that correspond to the
> times they behave intelligently."  I generally assume people are conscious
> if their eyes are open and they respond to stimuli, even if they are doing
> something dumb.*


That is not an unreasonable assumption, but responding to stimuli is
behavior. So being a practical man I'll bet you wouldn't still think
they're conscious when their eyes are open but their heart hasn't had a
beat in several hours nor have they taken a breath during that time and
rigor mortis has set in and they've started to smell bad.

John K Clark    See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-27 Thread John Clark
On Mon, Apr 26, 2021 at 7:07 PM Terren Suydam 
wrote:

*> So do you have nothing to say about coma patients who've later woken up
> and said they were conscious?  Or people under general anaesthetic who
> later report being gruesomely aware of the surgery they were getting?
> Should we ignore those reports?  Or admit that consciousness is worth
> considering independently from its effects on outward behavior?*
>

If something is behaving intelligently I am very confident (although not
100% confident) that it is conscious, however if something is not behaving
intelligently I am far less certain it is not conscious because it may be
incapable of moving or it may simply be trying to deceive me for reasons of
its own. Observing behavior is not a perfect tool for assessing
consciousness but is the best we have and the best we'll ever have so it
will just have to do. Even the examples you present in your post all come
from observing behavior, so a certain degree of uncertainty will always be
with us.

John K Clark



Re: A minimally conscious program

2021-04-26 Thread 'Brent Meeker' via Everything List



On 4/26/2021 11:11 PM, Terren Suydam wrote:



On Tue, Apr 27, 2021 at 1:27 AM 'Brent Meeker' via Everything List 
> wrote:




However, in a certain sense, intelligence is easier because it's
constrained. Intelligence can be tested. It's certainly more
practical, which makes intelligence easier to study as well.
You're much more likely to be able to profit from advances in
understanding of intelligence. In that sense, consciousness is
harder to work with than intelligence, because it's harder to
make progress. Facts that might slay your theory are much harder
to come by.


What I mean by it is that if you can engineer intelligence at a
high level it will necessarily entail consciousness. An entity
cannot be human-level intelligent without being able to
prospectively consider scenarios in which they are actors in which
the scenario is informed by past experience...and I think that is
what constitutes the core of consciousness.


Sure - although it seems possible that there could be intelligences 
that are not conscious. We're pretty biased to think of intelligence 
as we have it - situated in a meat body, and driven by evolutionary 
programming in a social context. There may be forms of intelligence so 
alien we could never conceive of them, and there's no guarantee about 
consciousness.


I don't see how an entity could be really intelligent without being able 
to consider its actions by a kind of internal simulation.


Take corporations. A corporation is its own entity and it acts 
intelligently in the service of its own interests. They can certainly 
be said to "prospectively consider scenarios in which they are actors 
in which the scenario is informed by past experience". Is a 
corporation conscious?


I think so.  And the Supreme Court agrees. :-)

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/adcec69e-5c78-f355-9d35-a503d8d12d5f%40verizon.net.


Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
On Tue, Apr 27, 2021 at 1:27 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
> However, in a certain sense, intelligence is easier because it's
> constrained. Intelligence can be tested. It's certainly more practical,
> which makes intelligence easier to study as well. You're much more likely
> to be able to profit from advances in understanding of intelligence. In
> that sense, consciousness is harder to work with than intelligence, because
> it's harder to make progress. Facts that might slay your theory are much
> harder to come by.
>
>
> What I mean by it is that if you can engineer intelligence at a high level
> it will necessarily entail consciousness.  An entity cannot be human-level
> intelligent without being able to prospectively consider scenarios in which
> they are actors in which the scenario is informed by past experience...and
> I think that is what constitutes the core of consciousness.
>

Sure - although it seems possible that there could be intelligences that
are not conscious. We're pretty biased to think of intelligence as we have
it - situated in a meat body, and driven by evolutionary programming in a
social context. There may be forms of intelligence so alien we could never
conceive of them, and there's no guarantee about consciousness. Take
corporations. A corporation is its own entity and it acts intelligently in
the service of its own interests. They can certainly be said to
"prospectively consider scenarios in which they are actors in which the
scenario is informed by past experience". Is a corporation conscious?

Terren


>
> Brent
>
>



Re: A minimally conscious program

2021-04-26 Thread 'Brent Meeker' via Everything List



On 4/26/2021 10:07 PM, Terren Suydam wrote:



On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:


It certainly seems likely that any brain or AI that can perceive
sensory events and form an inner narrative and memory of that is
conscious in a sense even if they are unable to act.  This is
commonly the situation during a dream.  One is aware of dreamt
events but doesn't actually move in response to them.


And I think JKC is wrong when he says "few if any believe other
people are conscious all the time, only during those times that
corresponds to the times they behave intelligently."  I generally
assume people are conscious if their eyes are open and they
respond to stimuli, even if they are doing something dumb.


Or rather, even if they're doing nothing at all. Someone meditating 
for hours on end, or someone lying on a couch with eyeshades and 
headphones on tripping on psilocybin, may be having extraordinary 
internal experiences and display absolutely no outward behavior.


But I agree with his general point that consciousness is easy and
intelligence is hard.


It depends how you look at it. JC's point is that it's impossible to 
prove much of anything about consciousness, so you can imagine many 
ways to explain consciousness without ever suffering the pain of your 
theory being slain by a fact.


However, in a certain sense, intelligence is easier because it's 
constrained. Intelligence can be tested. It's certainly more 
practical, which makes intelligence easier to study as well. You're 
much more likely to be able to profit from advances in understanding 
of intelligence. In that sense, consciousness is harder to work with 
than intelligence, because it's harder to make progress. Facts that 
might slay your theory are much harder to come by.


What I mean by it is that if you can engineer intelligence at a high 
level it will necessarily entail consciousness.  An entity cannot be 
human-level intelligent without being able to prospectively consider 
scenarios in which they are actors in which the scenario is informed by 
past experience...and I think that is what constitutes the core of 
consciousness.


Brent


I think human consciousness, having an inner narrative, is just an
evolutionary trick the brain developed for learning and accessing
learned information to inform decisions. Julian Jaynes wrote a
book about how this may have come about, "The Origin of
Consciousness in the Breakdown of the Bicameral Mind". I don't
know that he got it exactly right, but I think he was on to the
right idea.


I agree!

Terren


Brent

On 4/26/2021 4:07 PM, Terren Suydam wrote:

So do you have nothing to say about coma patients who've later
woken up and said they were conscious? Or people under general
anaesthetic who later report being gruesomely aware of the
surgery they were getting?  Should we ignore those reports?  Or
admit that consciousness is worth considering independently from
its effects on outward behavior?

On Mon, Apr 26, 2021 at 11:16 AM John Clark <johnkcl...@gmail.com> wrote:

On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren.suy...@gmail.com> wrote:

> It's impossible to refute solipsism


True, but it's equally impossible to refute the idea that
everything including rocks is conscious. And if both a theory
and its exact opposite can neither be proven nor disproven
then neither speculation is of any value in trying to figure
out how the world works.

/> It's true that the only thing we know for sure is our
own consciousness,/

And I know that even I am not conscious all the time, and
there is no reason for me to believe other people can do better.

/> but there's nothing about what I said that makes it
impossible for there to be a reality outside of ourselves
populated by other people. It just requires belief./


And few if any believe other people are conscious all the
time, only during those times that corresponds to the times
they behave intelligently.

John K Clark See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-26 Thread smitra

On 25-04-2021 22:29, Jason Resch wrote:

It is quite easy, I think, to define a program that "remembers"
(stores and later retrieves) information.

It is slightly harder, but not altogether difficult, to write a
program that "learns" (alters its behavior based on prior inputs).

What though, is required to write a program that "knows" (has
awareness or access to information or knowledge)?

Does, for instance, the following program "know" anything about the
data it is processing?

if (pixel.red > 128) then {
// knows pixel.red is greater than 128
} else {
// knows pixel.red <= 128
}

If not, what else is required for knowledge?

Does the program behavior have to change based on the state of some
information? For example:

if (pixel.red > 128) then {
// knows pixel.red is greater than 128
doX();
} else {
// knows pixel.red <= 128
doY();
}

Or does the program have to possess some memory and enter a different
state based on the state of the information it processed?

if (pixel.red > 128) then {
// knows pixel.red is greater than 128
enterStateX();
} else {
// knows pixel.red <= 128
enterStateY();
}

Or is something else altogether needed to say the program knows?
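For concreteness, the three variants above (a bare branch, a branch that changes behavior, and a branch that changes persistent state) can be collapsed into one runnable sketch. This is only an illustration; all the names here (Pixel, Program, do_x, do_y) are hypothetical stand-ins, not from any real system:

```python
# Sketch of the three variants: (a) the branch "knows" which side of the
# threshold the input is on, (b) behavior differs by branch, (c) internal
# state persists after the input is gone.

from dataclasses import dataclass

@dataclass
class Pixel:
    red: int  # 0..255

class Program:
    def __init__(self) -> None:
        self.state = "initial"  # (c) memory that outlives a single input

    def process(self, pixel: Pixel) -> str:
        if pixel.red > 128:
            # "knows" pixel.red is greater than 128
            self.state = "X"    # (c) enters a different state
            return self.do_x()  # (b) behavior depends on the input
        else:
            # "knows" pixel.red <= 128
            self.state = "Y"
            return self.do_y()

    def do_x(self) -> str:
        return "handled bright pixel"

    def do_y(self) -> str:
        return "handled dim pixel"

p = Program()
result = p.process(Pixel(red=200))
```

Whether any of (a)-(c) amounts to "knowing" is exactly the open question; the sketch only makes the three candidate criteria concrete.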

If a program can be said to "know" something then can we also say it
is conscious of that thing?

Jason


I think it's better to approach the problem from the other end, i.e. you 
consider a certain consciousness described in terms of the content of 
the consciousness, e.g. you have the conscious experience of reading 
this sentence, and here it's important that part of this conscious 
experience is that it is you and not someone else reading this. So, a 
lot more information is involved here than just processing the small 
amount of information for reading this text. Then for that consciousness 
one can ask what physical system could implement this particular 
consciousness. But this is then to a large degree fixed by the conscious 
experience itself, as that already includes a sense of identity. But
this is not fixed 100%; there exists a self-localization ambiguity.


The simpler the system that generates a consciousness, the larger this 
self-localization ambiguity becomes. For very simple systems (if they are 
conscious at all), it becomes too large to pin the conscious experience 
down to any particular physical device running the algorithm that 
supposedly generates it. The conscious experience of a spider in my house 
may not be sufficiently detailed to locate itself in my house; its 
consciousness is spread out over a vast number of different physical 
systems, some of which may be located on Earth as it existed 300 million 
years ago.


Saibal



Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> It certainly seems likely that any brain or AI that can perceive sensory
> events and form an inner narrative and memory of that is conscious in a
> sense even if they are unable to act.  This is commonly the situation
> during a dream.  One is aware of dreamt events but doesn't actually move in
> response to them.
>

> And I think JKC is wrong when he says "few if any believe other people
> are conscious all the time, only during those times that corresponds to the
> times they behave intelligently."  I generally assume people are conscious
> if their eyes are open and they respond to stimuli, even if they are doing
> something dumb.
>

Or rather, even if they're doing nothing at all. Someone meditating for
hours on end, or someone lying on a couch with eyeshades and headphones on
tripping on psilocybin, may be having extraordinary internal experiences
and display absolutely no outward behavior.


> But I agree with his general point that consciousness is easy and
> intelligence is hard.
>

It depends how you look at it. JC's point is that it's impossible to prove
much of anything about consciousness, so you can imagine many ways to
explain consciousness without ever suffering the pain of your theory being
slain by a fact.

However, in a certain sense, intelligence is easier because it's
constrained. Intelligence can be tested. It's certainly more practical,
which makes intelligence easier to study as well. You're much more likely
to be able to profit from advances in understanding of intelligence. In
that sense, consciousness is harder to work with than intelligence, because
it's harder to make progress. Facts that might slay your theory are much
harder to come by.


> I think human consciousness, having an inner narrative, is just an
> evolutionary trick the brain developed for learning and accessing learned
> information to inform decisions. Julian Jaynes wrote a book about how this
> may have come about, "The Origin of Consciousness in the Breakdown of the
> Bicameral Mind".  I don't know that he got it exactly right, but I think he
> was on to the right idea.
>

I agree!

Terren


>
> Brent
>
> On 4/26/2021 4:07 PM, Terren Suydam wrote:
>
> So do you have nothing to say about coma patients who've later woken up
> and said they were conscious?  Or people under general anaesthetic who
> later report being gruesomely aware of the surgery they were getting?
> Should we ignore those reports?  Or admit that consciousness is worth
> considering independently from its effects on outward behavior?
>
> On Mon, Apr 26, 2021 at 11:16 AM John Clark  wrote:
>
>> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
>> wrote:
>>
>> > It's impossible to refute solipsism
>>>
>>
>> True, but it's equally impossible to refute the idea that everything
>> including rocks is conscious. And if both a theory and its exact opposite
>> can neither be proven nor disproven then neither speculation is of any
>> value in trying to figure out how the world works.
>>
>> * > It's true that the only thing we know for sure is our own
>>> consciousness,*
>>>
>> And I know that even I am not conscious all the time, and there is no
>> reason for me to believe other people can do better.
>>
>>
>>> * > but there's nothing about what I said that makes it impossible for
>>> there to be a reality outside of ourselves populated by other people. It
>>> just requires belief.*
>>>
>>
>> And few if any believe other people are conscious all the time, only
>> during those times that corresponds to the times they behave
>> intelligently.
>>
>> John K Clark  See what's on my new list at Extropolis
>> 

Re: A minimally conscious program

2021-04-26 Thread 'Brent Meeker' via Everything List
It certainly seems likely that any brain or AI that can perceive sensory 
events and form an inner narrative and memory of that is conscious in a 
sense even if they are unable to act. This is commonly the situation 
during a dream.  One is aware of dreamt events but doesn't actually move 
in response to them.


And I think JKC is wrong when he says "few if any believe other people 
are conscious all the time, only during those times that corresponds to 
the times they behave intelligently."  I generally assume people are 
conscious if their eyes are open and they respond to stimuli, even if 
they are doing something dumb.


But I agree with his general point that consciousness is easy and 
intelligence is hard.  I think human consciousness, having an inner 
narrative, is just an evolutionary trick the brain developed for 
learning and accessing learned information to inform decisions. Julian 
Jaynes wrote a book about how this may have come about, "The Origin of 
Consciousness in the Breakdown of the Bicameral Mind".  I don't know 
that he got it exactly right, but I think he was on to the right idea.


Brent

On 4/26/2021 4:07 PM, Terren Suydam wrote:
So do you have nothing to say about coma patients who've later woken 
up and said they were conscious?  Or people under general anaesthetic 
who later report being gruesomely aware of the surgery they were 
getting?  Should we ignore those reports?  Or admit that consciousness 
is worth considering independently from its effects on outward behavior?


On Mon, Apr 26, 2021 at 11:16 AM John Clark <johnkcl...@gmail.com> wrote:


On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren.suy...@gmail.com> wrote:

> It's impossible to refute solipsism


True, but it's equally impossible to refute the idea that
everything including rocks is conscious. And if both a theory and
its exact opposite can neither be proven nor disproven then
neither speculation is of any value in trying to figure out how
the world works.

/> It's true that the only thing we know for sure is our own
consciousness,/

And I know that even I am not conscious all the time, and there is
no reason for me to believe other people can do better.

/> but there's nothing about what I said that makes it
impossible for there to be a reality outside of ourselves
populated by other people. It just requires belief./


And few if any believe other people are conscious all the time,
only during those times that corresponds to the times they behave
intelligently.

John K Clark See what's on my new list at Extropolis





Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
So do you have nothing to say about coma patients who've later woken up and
said they were conscious?  Or people under general anaesthetic who later
report being gruesomely aware of the surgery they were getting?  Should we
ignore those reports?  Or admit that consciousness is worth considering
independently from its effects on outward behavior?

On Mon, Apr 26, 2021 at 11:16 AM John Clark  wrote:

> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
> wrote:
>
> > It's impossible to refute solipsism
>>
>
> True, but it's equally impossible to refute the idea that everything
> including rocks is conscious. And if both a theory and its exact opposite
> can neither be proven nor disproven then neither speculation is of any
> value in trying to figure out how the world works.
>
> * > It's true that the only thing we know for sure is our own
>> consciousness,*
>>
> And I know that even I am not conscious all the time, and there is no
> reason for me to believe other people can do better.
>
>
>> * > but there's nothing about what I said that makes it impossible for
>> there to be a reality outside of ourselves populated by other people. It
>> just requires belief.*
>>
>
> And few if any believe other people are conscious all the time, only
> during those times that corresponds to the times they behave intelligently
> .
>
> John K Clark  See what's on my new list at Extropolis
> 
>



Re: A minimally conscious program

2021-04-26 Thread 'Brent Meeker' via Everything List



On 4/26/2021 8:16 AM, John Clark wrote:
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam <terren.suy...@gmail.com> wrote:


> It's impossible to refute solipsism


True, but it's equally impossible to refute the idea that everything 
including rocks is conscious. And if both a theory and its exact 
opposite can neither be proven nor disproven then neither speculation 
is of any value in trying to figure out how the world works.


/> It's true that the only thing we know for sure is our own
consciousness,/

And I know that even I am not conscious all the time, and there is no 
reason for me to believe other people can do better.


/> but there's nothing about what I said that makes it impossible
for there to be a reality outside of ourselves populated by other
people. It just requires belief./


And few if any believe other people are conscious all the time, only 
during those times that corresponds to the times they behave 
intelligently.


Not only that, but they also oftentimes behave intelligently without 
being conscious of it.


Brent



Re: A minimally conscious program

2021-04-26 Thread 'Brent Meeker' via Everything List



On 4/26/2021 8:03 AM, Jason Resch wrote:



On Mon, Apr 26, 2021, 5:29 AM John Clark <johnkcl...@gmail.com> wrote:


On Mon, Apr 26, 2021 at 6:06 AM Telmo Menezes <te...@telmomenezes.net> wrote:

>> And for an emotion like pain write a program such that the
closer the number in the X register comes to the integer P
the more computational resources will be devoted to
changing that number, and if it ever actually equals P
then the program should stop doing everything else and do
nothing but try to change that number to something far
enough away from P until it's no longer an urgent matter
and the program can again do things that have nothing to
do with P.


> /If you truly believe this is the case, then it follows that
anyone writing such a program and subjecting it to X=P should
be considered guilty of torture. Do you agree?/


Yes. If I'm right, and I think I am, then anyone writing such a
program not only should be but logically MUST be considered to
have been engaging in torture. What conclusion can be drawn from
that bizarre conclusion? Assuming a level of consciousness to
something while ignoring all information about its intelligent
behavior is not a useful tool for assessing the morality of an action.

John K Clark


What's the difference between / how do we know, the program is 
experiencing pain when P is high, versus the program is experiencing 
bliss when P is low?


Bliss is defined as the state the program doesn't invest time/effort to 
change.
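Taken literally, that definition can be rendered as a toy predicate. This is only a sketch: effort() below is an assumed stand-in for the program's resource-allocation rule (modeled on the P-register program described earlier in the thread), not anything from the original posts.

```python
# Toy formalization: a state is "blissful" for a program if the program
# allocates no effort to changing it. The constants are arbitrary choices.

def effort(state: int, aversive: int = 100, radius: int = 20) -> int:
    """Resources the program devotes to changing `state`: zero when far
    from the aversive value, growing as the state approaches it."""
    distance = abs(state - aversive)
    return max(0, radius - distance)

def is_bliss(state: int) -> bool:
    # bliss = the program invests no time/effort to change this state
    return effort(state) == 0
```

On this reading, bliss is not a separate mechanism but simply the fixed points of the program's own drive to change state.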


Brent



Re: A minimally conscious program

2021-04-26 Thread John Clark
On Mon, Apr 26, 2021 at 1:48 PM smitra  wrote:


>
> *> I have an analogue computer that implements this: Two magnets. If I
> push two equal poles toward each other, does this cause the system of the
> two magnets to feel pain?*


I don't know, I don't even know if you feel pain, but I do know that those
two magnets you speak of do not behave very intelligently so I'm not going
to worry about it.

John K Clark

>
>



Re: A minimally conscious program

2021-04-26 Thread smitra

On 26-04-2021 10:49, John Clark wrote:

On Sun, Apr 25, 2021 at 4:29 PM Jason Resch 
wrote:


_> It is quite easy, I think, to define a program that "remembers"
(stores and later retrieves) information._


I agree. And for an emotion like pain write a program such that the
closer the number in the X register comes to the integer P the more
computational resources will be devoted to changing that number, and
if it ever actually equals P then the program should stop doing
everything else and do nothing but try to change that number to
something far enough away from P until it's no longer an urgent matter
and the program can again do things that have nothing to do with P.
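The scheme described here can be sketched as a toy loop. This is illustrative only: the particular values of P and the safe distance, and the linear effort rule, are arbitrary choices not specified in the original description.

```python
# Toy sketch of the "pain register": the closer register x gets to the
# aversive integer P, the more effort is spent pushing x away; when
# x == P, everything else stops until x is far enough from P again.

P = 100            # the "painful" value (arbitrary)
SAFE_DISTANCE = 20 # beyond this distance there is no urgency (arbitrary)

def step(x: int) -> int:
    """One scheduling step: returns the new value of the x register."""
    distance = abs(x - P)
    if distance >= SAFE_DISTANCE:
        return x  # no urgency: all resources go to unrelated work
    # urgency grows as x approaches P; at x == P every resource is spent here
    effort = SAFE_DISTANCE - distance
    direction = 1 if x >= P else -1
    return x + direction * effort  # push x away from P

x = P  # maximal "pain": x has hit the aversive value
while abs(x - P) < SAFE_DISTANCE:
    x = step(x)  # nothing else runs until the urgency is gone
```

Whether such a loop feels anything is of course the very point in dispute; the sketch only shows that the behavioral description is trivially implementable.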

Artificial Intelligence is hard but Artificial Consciousness is easy.
John K Clark  See what's on my new list at Extropolis [1]



I have an analogue computer that implements this: Two magnets. If I push 
two equal poles toward each other, does this cause the system of the two 
magnets to feel pain?


Saibal



Re: A minimally conscious program

2021-04-26 Thread John Clark
On Mon, Apr 26, 2021 at 11:03 AM Jason Resch  wrote:

*> What's the difference between / how do we know, the program is
> experiencing pain when P is high, versus the program is experiencing bliss
> when P is low?*
>

Behavior. Actions are taken to minimize pain and to maximize bliss.

> Or is bliss merely the complete absence of pain and the distinction in my
> prior question is meaningless?
>

If that question is meaningless then so is the question "what's the
difference between positive electrical charge and negative?". They both
seem like reasonable questions to me and the answer to both is,  "they move
things in opposite directions".

John K Clark



>



Re: A minimally conscious program

2021-04-26 Thread John Clark
On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
wrote:

> It's impossible to refute solipsism
>

True, but it's equally impossible to refute the idea that everything
including rocks is conscious. And if both a theory and its exact opposite
can neither be proven nor disproven then neither speculation is of any
value in trying to figure out how the world works.

* > It's true that the only thing we know for sure is our own
> consciousness,*
>
And I know that even I am not conscious all the time, and there is no
reason for me to believe other people can do better.


> * > but there's nothing about what I said that makes it impossible for
> there to be a reality outside of ourselves populated by other people. It
> just requires belief.*
>

And few if any believe other people are conscious all the time, only during
those times that corresponds to the times they behave intelligently.

John K Clark  See what's on my new list at Extropolis




Re: A minimally conscious program

2021-04-26 Thread Jason Resch
On Mon, Apr 26, 2021, 5:29 AM John Clark  wrote:

> On Mon, Apr 26, 2021 at 6:06 AM Telmo Menezes 
> wrote:
>
> >> And for an emotion like pain write a program such that the closer the
>>> number in the X register comes to the integer P the more computational
>>> resources will be devoted to changing that number, and if it ever actually
>>> equals P then the program should stop doing everything else and do nothing
>>> but try to change that number to something far enough away from P until
>>> it's no longer an urgent matter and the program can again do things that
>>> have nothing to do with P.
>>
>>
>> > *If you truly believe this is the case, then it follows that anyone
>> writing such a program and subjecting it to X=P should be considered guilty
>> of torture. Do you agree?*
>>
>
> Yes. If I'm right, and I think I am, then anyone writing such a program
> not only should be but logically MUST be considered to have been engaging
> in torture. What conclusion can be drawn from that bizarre conclusion?
> Assuming a level of consciousness to something while ignoring all
> information about its intelligent behavior is not a useful tool for
> assessing the morality of an action.
>
> John K Clark
>

What's the difference between / how do we know, the program is experiencing
pain when P is high, versus the program is experiencing bliss when P is low?

Or is bliss merely the complete absence of pain and the distinction in my
prior question is meaningless?

Jason






Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
It's impossible to refute solipsism, but that's true regardless of your
metaphysics. It's true that the only thing we know *for sure* is our own
consciousness, but there's nothing about what I said that makes it
impossible for there to be a reality outside of ourselves populated by
other people. It just requires belief.


On Mon, Apr 26, 2021 at 10:39 AM Henrik Ohrstrom 
wrote:

> That would be, it is quite solipsistic?
> /henrik



Re: A minimally conscious program

2021-04-26 Thread Henrik Ohrstrom
That would be, it is quite solipsistic?
/henrik


On Mon, 26 Apr 2021 at 14:31, Terren Suydam  wrote:

> Assuming the program has a state and that state changes in response to its
> inputs, then it seems reasonable to say the program is conscious in some
> elemental way. What is it conscious "of", though? I'd say it's not
> conscious of anything outside of itself, in the same way we are not
> conscious of anything outside of ourselves. We are only conscious of the
> model of the world we build. You might then say it's conscious of its
> internal representation, or its state.



Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
Assuming the program has a state and that state changes in response to its
inputs, then it seems reasonable to say the program is conscious in some
elemental way. What is it conscious "of", though? I'd say it's not
conscious of anything outside of itself, in the same way we are not
conscious of anything outside of ourselves. We are only conscious of the
model of the world we build. You might then say it's conscious of its
internal representation, or its state.
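In Terren's terms, a minimal program with state that changes in response to its inputs, and whose only "awareness" is of its own internal model, might be sketched in Python like this (the dictionary model and the method names are illustrative assumptions, not anything from the thread):

```python
class MinimalAgent:
    """State changes in response to inputs; any introspection the
    agent does is over its internal model, never the raw world."""

    def __init__(self):
        self.model = {"last": None, "count": 0}

    def perceive(self, signal):
        # Update the internal representation in response to input.
        self.model["last"] = signal
        self.model["count"] += 1

    def report(self):
        # All the agent can "know" is the model it has built.
        return dict(self.model)
```

On Terren's reading, whatever such a program is conscious "of" would be the contents of `model`, not the signals themselves.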



Re: A minimally conscious program

2021-04-26 Thread John Clark
On Mon, Apr 26, 2021 at 6:06 AM Telmo Menezes 
wrote:

>> And for an emotion like pain write a program such that the closer the
>> number in the X register comes to the integer P the more computational
>> resources will be devoted to changing that number, and if it ever actually
>> equals P then the program should stop doing everything else and do nothing
>> but try to change that number to something far enough away from P until
>> it's no longer an urgent matter and the program can again do things that
>> have nothing to do with P.
>
>
> > *If you truly believe this is the case, then it follows that anyone
> writing such a program and subjecting it to X=P should be considered guilty
> of torture. Do you agree?*
>

Yes. If I'm right, and I think I am, then anyone writing such a program not
only should be but logically MUST be considered to have been engaging in
torture. What conclusion can be drawn from that bizarre conclusion?
Assuming a level of consciousness to something while ignoring all
information about its intelligent behavior is not a useful tool for
assessing the morality of an action.

John K Clark



Re: A minimally conscious program

2021-04-26 Thread Telmo Menezes


Am Mo, 26. Apr 2021, um 10:49, schrieb John Clark:
> On Sun, Apr 25, 2021 at 4:29 PM Jason Resch  wrote:
> 
>> *> It is quite easy, I think, to define a program that "remembers" (stores 
>> and later retrieves) information.*
> 
> I agree. And for an emotion like pain write a program such that the closer 
> the number in the X register comes to the integer P the more computational 
> resources will be devoted to changing that number, and if it ever actually 
> equals P then the program should stop doing everything else and do nothing 
> but try to change that number to something far enough away from P until it's 
> no longer an urgent matter and the program can again do things that have 
> nothing to do with P.

If you truly believe this is the case, then it follows that anyone writing such 
a program and subjecting it to X=P should be considered guilty of torture. Do 
you agree?

Telmo

> Artificial Intelligence is hard but Artificial Consciousness Is easy.
> John K Clark   See what's on my new list at  Extropolis 
> 
> 
> 



Re: A minimally conscious program

2021-04-26 Thread John Clark
On Sun, Apr 25, 2021 at 4:29 PM Jason Resch  wrote:

*> It is quite easy, I think, to define a program that "remembers" (stores
> and later retrieves) information.*
>

I agree. And for an emotion like pain write a program such that the closer
the number in the X register comes to the integer P the more computational
resources will be devoted to changing that number, and if it ever actually
equals P then the program should stop doing everything else and do nothing
but try to change that number to something far enough away from P until
it's no longer an urgent matter and the program can again do things that
have nothing to do with P.
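Read literally, John's scheme can be sketched in a few lines of Python. Everything here is an illustrative assumption (the value of P, the urgency threshold, and the idea of modelling "computational resources" as a fraction of each cycle); it is only one way to cash out the description above:

```python
P = 1000          # the "pain" value for register X (assumption)
THRESHOLD = 100   # distance at which P stops being urgent (assumption)

def step(x, other_work):
    """One cycle: split effort between fleeing P and unrelated work.
    Returns the new value of X and how much other work got done."""
    distance = abs(x - P)
    if distance >= THRESHOLD:
        # Far from P: no urgency, all effort goes to unrelated work.
        return x, other_work
    if distance == 0:
        # X equals P: drop everything and push X away from P.
        return x + THRESHOLD, 0.0
    # Urgency grows as X approaches P; it eats into other work.
    urgency = 1.0 - distance / THRESHOLD
    direction = 1 if x >= P else -1
    new_x = x + direction * max(1, int(urgency * THRESHOLD))
    return new_x, other_work * (1.0 - urgency)
```

The design choice doing the work is that the fraction of the cycle spent on P scales with proximity to P, which is the whole of John's proposed "emotion."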

Artificial Intelligence is hard but Artificial Consciousness is easy.
John K Clark   See what's on my new list at  Extropolis




A minimally conscious program

2021-04-25 Thread Jason Resch
It is quite easy, I think, to define a program that "remembers" (stores and
later retrieves) information.

It is slightly harder, but not altogether difficult, to write a program
that "learns" (alters its behavior based on prior inputs).

What though, is required to write a program that "knows" (has awareness or
access to information or knowledge)?

Does, for instance, the following program "know" anything about the data it
is processing?

if (pixel.red > 128) then {
// knows pixel.red is greater than 128
} else {
// knows pixel.red <= 128
}

If not, what else is required for knowledge?

Does the program behavior have to change based on the state of some
information? For example:

if (pixel.red > 128) then {
// knows pixel.red is greater than 128
doX();
} else {
// knows pixel.red <= 128
doY();
}

Or does the program have to possess some memory and enter a different state
based on the state of the information it processed?

if (pixel.red > 128) then {
// knows pixel.red is greater than 128
enterStateX();
} else {
// knows pixel.red <= 128
enterStateY();
}

Or is something else altogether needed to say the program knows?

If a program can be said to "know" something then can we also say it is
conscious of that thing?

Jason
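Jason's three candidate notions of "knowing" can be written out as runnable Python. The pixel test and the doX/enterState names come from his sketch; the rest is scaffolding added for illustration:

```python
def variant1(red):
    # (1) Pure branch: the condition is tested, nothing else happens.
    if red > 128:
        pass  # "knows" red is greater than 128?
    else:
        pass  # "knows" red <= 128?

def variant2(red, do_x, do_y):
    # (2) Behavior differs based on the information.
    if red > 128:
        return do_x()
    else:
        return do_y()

class Variant3:
    # (3) The program has memory and enters a different state
    # depending on the information it processed.
    def __init__(self):
        self.state = None

    def observe(self, red):
        self.state = "X" if red > 128 else "Y"
        return self.state
```

Note that variant 1 leaves no trace at all, variant 2 leaves a trace only in its effects, and variant 3 leaves a trace in the program itself; the question in the thread is which, if any, of these traces amounts to knowledge.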
