Re: The hard problem of matter

2018-10-10 Thread Pierz


On Wednesday, October 10, 2018 at 9:41:39 PM UTC+11, Philip Thrift wrote:
>
>
>
> On Wednesday, October 10, 2018 at 12:41:04 AM UTC-5, Brent wrote:
>>
>>
>>
>> On 10/9/2018 9:18 PM, Philip Thrift wrote:
>>
>>
>>
>> On Tuesday, October 9, 2018 at 6:45:55 PM UTC-5, Brent wrote: 
>>>
>>>
>>>
>>> On 10/9/2018 11:01 AM, Philip Thrift wrote:
>>>
>>>
 If you reject intelligent behavior as a tool for detecting 
 consciousness then how did you determine that? And how can you figure out 
 anything else about any consciousness except for your own?  I don't think 
 there is any way, I think the only alternative is solipsism. 

>>>
>>>
>>> That is a good question. I still think that we will have lots of 
>>> intelligent robots running around - really smart, can win on Jeopardy!, can 
>>> drive cars, can "fake" emotions ... - but we will not consider them 
>>> conscious. We can (hopefully) turn them off and destroy them whenever we 
>>> want. We do have something like a consciousness test in the case of medical 
>>> decisions at end-of-life. So I think a consciousness test will be different 
>>> than an intelligence test.
>>>
>>>
>>> Sure.  Garden slugs are conscious at the level of perception, that's how 
>>> they find food and mates.  But they're not very intelligent.
>>>
>>> Brent
>>>
>>
>> Do slugs perceive, or do they just react? Does a slug say to itself, "I 
>> like the taste of that"?
>>
>>
>> Is consciousness just the use of language?  Dogs and chimps don't have 
>> language either.  Why aren't perception and awareness forms of 
>> consciousness?
>>
>> Brent
>>
>
>
> Some say humans didn't become fully conscious until they had (recursive) 
> language.
>
> Yair Neuman, Ophir Nave: Why the brain needs language in order to be 
> self-conscious 
> 
>
You need to distinguish between raw qualia - the presence of experience or 
a "what it's like to be"  - and self-consciousness, or awareness of being a 
self. The latter is a kind of meta-quale, an awareness of the fact of 
having qualia which must surely come much further up the neural complexity 
hierarchy than just pure experience itself. I always find it hard to 
understand why these completely different things get confused. I'm pretty 
sure a fly experiences suffering of some kind when I spray it with 
insecticide. I'm also pretty sure it has no consciousness of being a self.

>
> Consciousness = Linguisticity+Experientiality   (both), it would seem.
>
> - https://codicalist.wordpress.com/2018/08/09/the-matter-of-consciousness/
>
> - pt
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: The hard problem of matter

2018-10-10 Thread Pierz


On Wednesday, October 10, 2018 at 12:16:59 PM UTC+11, John Clark wrote:
>
> On Tue, Oct 9, 2018 at 7:54 PM Pierz wrote:
>
> >*I refuse to accept that "axiom", and I also do not feel compelled to 
>> embrace solipsism.*
>>
>
> You are able to function in the world so you must have some method of 
> deciding when something is conscious and when it is not. If it's not 
> intelligent behavior, what is it? 
>  
>
>> > *I think it is entirely possible - and indeed sensible - to believe 
>> that some entities that behave "intelligently", like the chess app on my 
>> iPhone, are insentient.*
>>
>
> I don't know what the quotation marks in the above mean, but if something 
> acts intelligently then it is sensible to say it has some degree of 
> sentience.  
>
 
The quotation marks are there because a lot of what passes for intelligent 
in the domain of machines is in fact dumb as dogshit. I know this because 
it's the field I work in. Computers are literal to a mind-numbingly stupid 
degree. People who expect robots to take over my job (software developer) 
any time soon have no idea what they are talking about. This is the one 
thing that AI *should* be good at, but it is utterly incompetent because of 
its complete lack of flexibility and understanding of the actual domain in 
question. I'm not holding my breath on a truly human-level artificial 
intelligence any time soon.

>   
>  
>
>> > *Whereas some entities that behave unintelligently (like Donald Trump 
>> (sorry, I really shouldn't)) are sentient.*
>>
>
> I admit it's an imperfect tool but it's all we've got and all we'll ever 
> have, so we just have to make do with what we have. A failure to act 
> intelligently does not necessarily mean something is non-sentient; perhaps 
> both a rock and Donald Trump are really brilliant but are just pretending 
> to be stupid. If so then both are conscious and both are very good actors. 
>   
>
>> > *The absence of an objective test for third-party sentience does not 
>> force one into solipsism. It may point to 1) a problem with your ontology 
>> (qualia aren't "real")*
>>
>
> That means nothing. I detect qualia from direct experience and that 
> outranks everything, it even outranks the scientific method; so if qualia 
> isn't real then nothing is real which would be equivalent to everything 
> being real which is equivalent to "real" having no meaning because meaning 
> needs contrast. 
>

I wasn't saying qualia aren't real. I was suggesting that might be *your* 
ontology. 
I mistook you for an eliminativist. Glad to stand corrected on that point 
at least.

>  
>  
>
>> > *or 2) a deficient state of knowledge with respect to the (pre) 
>> conditions of consciousness.*
>>
>
> I don't know what that means either. 
>

We don't know for shit what consciousness is. Perhaps there are some 
preconditions for it to arise. Even in an information/data processing based 
conception, we seem to need some notion of preconditions for consciousness, 
seeing as some complex brain processing occurs in the absence of qualia.

>  
>
>> > Seeing as you have no theory of consciousness at all,
>>
>
> Yes I do. My theory is that consciousness is the way data feels when it is 
> being processed and that is a brute fact, meaning it terminates a chain of 
> "why is that?" questions.  
>

Great theory! I love a theory that says, "because". I have sooo many 
questions. Like what relations in the data correspond to what qualia. Like 
how data, which is inherently just an aggregate of bits, somehow experiences 
itself as a whole. Like why some data being processed have no qualia - like 
unconscious mental processes. And so on and so forth. But fortunately your 
theory answers all this. It's... because.

Now I know that you may claim that any better theory is impossible in 
principle. I think it's technically extraordinarily difficult but not 
impossible in principle. We would need two preconditions: the use of 
conscious reports of qualia as an accepted datum in science, and highly 
sophisticated technology to interface with the brain, a known conscious 
entity with the ability to report its experiences. At least in principle I 
believe experiments of this sort could cast light on the relationship 
between material structures and qualia and the preconditions of conscious 
awareness.

>  
>
>> > *statements like "you have no alternative but to..." don't have much 
>> force. There are plenty of alternatives,*
>>
>
> Name one! I ask once more: in your everyday life, when you're not being 
> philosophical, you must have some method of determining when something is 
> conscious. If it's not intelligent behavior, what on earth is it? 
>
It's not intelligent behaviour. There are tons of things (human artifacts 
that have been created to automate certain complex input-output systems) 
that exhibit complex, intelligent-ish behaviour that I seriously doubt have 
any more sentience than a rock, though I'm open to the possibility of some 
sentience in rocks. My

Re: The hard problem of matter

2018-10-10 Thread Philip Thrift


On Wednesday, October 10, 2018 at 12:10:57 PM UTC-5, John Clark wrote:
>
> On Wed, Oct 10, 2018 at 12:45 AM Philip Thrift wrote:
>
> >One could look at it that way. In terms of biological evolution, what 
>> has turned out to be intelligent beings (us!) are also conscious beings.
>
>
> Yes but ask yourself why would Evolution do that. Natural Selection can 
> see intelligence but it can't see consciousness any better than we can see 
> it in others, and yet it produced at least one conscious being (me) and 
> probably more. Why? The only conclusion I can come up with is that 
> consciousness is an unavoidable byproduct of intelligence.  Evolution 
> selected for intelligence and consciousness just rode in free on 
> intelligence's coattails.
>  
>
>> > it got a little confusing. Is IBM Watson [ 
>> https://en.wikipedia.org/wiki/Watson_(computer) ] "intelligent"?
>>
>
> What's confusing about that? If a man did what Watson did you wouldn't 
> hesitate to say that what the man did was smart, so to insist that if a 
> machine does the exact same thing it is not smart would make no more sense 
> than saying if a white man does something it shows intelligence but if a 
> black man does the same thing it does not. 
>  
>
>> >There are some AI scientists (or SI - Synthetic Intelligence, to 
>> contrast with AI [ https://en.wikipedia.org/wiki/Synthetic_intelligence 
>> ] who say to make truly intelligent artifacts they must be conscious.
>>
>
> I believe that too because you can't have intelligent behavior without 
> consciousness (although the reverse may not always be true). And that's why 
> I also believe the Turing Test must work not just for intelligence but for 
> consciousness too because like Evolution by Natural Selection the Turing 
> Test deals exclusively with observable behavior. It may not be a perfect 
> test but its all we have and all we'll ever have so it will have to do.
>
> >  How do you make a conscious robot?
>>
>
> Easy, just make it intelligent. After that I would have no more reason to 
> doubt it's conscious than I have to doubt my fellow human beings are 
> conscious. 
>
> John K Clark 
>
>
>  
>

Biological evolution had its own path to intelligent, conscious beings, but 
what are humans doing with technology?


There is something to there being two fields with their own conferences: AI 
(Artificial *Intelligence*) and AC (Artificial *Consciousness* - sometimes 
lumped with Consciousness Science, or The Science of Consciousness - its 
own interdisciplinary science). 

I put Intelligence under the term *linguisticity* - having the ability to 
converse in language and having knowledge, as Sophia (@RealSophiaRobot) is 
supposed to have. You would sit down and have an intelligent conversation 
with "her" about any subject.

But the likes of Philip Goff (@Philip_Goff) and Galen Strawson say that 
*experientiality* is what is missing, and it is that that has to be 
incorporated into the picture of matter.

I think the latter are right, so one must take the challenge head on: What 
is the language of experiential modalities and how does it relate to 
conscious objects?

- pt
 
 



Re: The hard problem of matter

2018-10-10 Thread John Clark
On Wed, Oct 10, 2018 at 1:19 AM Brent Meeker  wrote:

>
>
>
> >>My theory is that consciousness is the way data feels when it is being
>> processed and that is a brute fact, meaning it terminates a chain of "why
>> is that?" questions.
>
>
>
> * > It has to be something more specific than that.  There is lots of data
> being processed in your brain of which you are not conscious,*
>

If my brain stopped its continuous calculation to determine how fast my
blood needs to be moving and sending the results of that calculation to my
heart I would become conscious that something very important has changed in
a very short amount of time.

John K Clark



Re: The hard problem of matter

2018-10-10 Thread John Clark
On Wed, Oct 10, 2018 at 12:45 AM Philip Thrift 
wrote:

>One could look at it that way. In terms of biological evolution, what has
> turned out to be intelligent beings (us!) are also conscious beings.


Yes but ask yourself why would Evolution do that. Natural Selection can see
intelligence but it can't see consciousness any better than we can see it
in others, and yet it produced at least one conscious being (me) and
probably more. Why? The only conclusion I can come up with is that
consciousness is an unavoidable byproduct of intelligence.  Evolution
selected for intelligence and consciousness just rode in free on
intelligence's coattails.


> > it got a little confusing. Is IBM Watson [
> https://en.wikipedia.org/wiki/Watson_(computer) ] "intelligent"?
>

What's confusing about that? If a man did what Watson did you wouldn't
hesitate to say that what the man did was smart, so to insist that if a
machine does the exact same thing it is not smart would make no more sense
than saying if a white man does something it shows intelligence but if a
black man does the same thing it does not.


> >There are some AI scientists (or SI - Synthetic Intelligence, to
> contrast with AI [ https://en.wikipedia.org/wiki/Synthetic_intelligence ]
> who say to make truly intelligent artifacts they must be conscious.
>

I believe that too because you can't have intelligent behavior without
consciousness (although the reverse may not always be true). And that's why
I also believe the Turing Test must work not just for intelligence but for
consciousness too because like Evolution by Natural Selection the Turing
Test deals exclusively with observable behavior. It may not be a perfect
test but its all we have and all we'll ever have so it will have to do.

>  How do you make a conscious robot?
>

Easy, just make it intelligent. After that I would have no more reason to
doubt it's conscious than I have to doubt my fellow human beings are
conscious.

John K Clark



Re: The hard problem of matter

2018-10-10 Thread John Clark
On Wed, Oct 10, 2018 at 12:15 AM Philip Thrift 
wrote:

*>As a practical matter, a conscious robot raises ethical issues that an
> intelligent robot doesn't. Killing a phenomenal, self-aware being could be
> murder.*
>

But the ethical question of killing a super intelligent robot is moot,
because you will never be in a position to do such a thing. The question of
a super intelligent robot killing you is not moot, and that's why, as a
practical matter, it doesn't matter whether you think the robot is
conscious, but it does matter whether the robot thinks you are conscious.
The robot will be the one in a position of power, not you.

*>The MRI is used (I think) in deciding whether to remove someone from life
> support. Intelligent behavior has nothing to do with that decision.*
>

Yes it does. The only reason doctors suspect that a red splotch in a
particular place on a brain scan means the brain is conscious is that they
noticed the splotch was there when healthy people displayed intelligent
behavior but absent when they displayed no such behavior, as when they
were sleeping. Doctors then extrapolated that correlation from healthy
people to very sick people who could not display any sort of behavior
because they couldn't move. But it always comes back to intelligent
behavior.

John K Clark





>
> - pt
>
>
>



Re: The hard problem of matter

2018-10-10 Thread Philip Thrift


On Wednesday, October 10, 2018 at 12:50:44 AM UTC-5, Brent wrote:
>
>
>
> On 10/9/2018 9:45 PM, Philip Thrift wrote:
>
>
>
> On Tuesday, October 9, 2018 at 8:16:59 PM UTC-5, John Clark wrote: 
>>
>> On Tue, Oct 9, 2018 at 7:54 PM Pierz  wrote:
>>
>> >*I refuse to accept that "axiom", and I also do not feel compelled to 
>>> embrace solipsism.*
>>>
>>
>> You are able to function in the world so you must have some method of 
>> deciding when something is conscious and when it is not. If it's not 
>> intelligent behavior, what is it? 
>>  
>>
>>> > *I think it is entirely possible - and indeed sensible - to believe 
>>> that some entities that behave "intelligently", like the chess app on my 
>>> iPhone, are insentient.*
>>>
>>
>> I don't know what the quotation marks in the above mean, but if something 
>> acts intelligently then it is sensible to say it has some degree of 
>> sentience. 
>>  
>>
>>> > *Whereas some entities that behave unintelligently (like Donald Trump 
>>> (sorry, I really shouldn't)) are sentient.*
>>>
>>
>> I admit it's an imperfect tool but it's all we've got and all we'll ever 
>> have, so we just have to make do with what we have. A failure to act 
>> intelligently does not necessarily mean something is non-sentient; perhaps 
>> both a rock and Donald Trump are really brilliant but are just pretending 
>> to be stupid. If so then both are conscious and both are very good actors. 
>>   
>>
>>> > *The absence of an objective test for third-party sentience does not 
>>> force one into solipsism. It may point to 1) a problem with your ontology 
>>> (qualia aren't "real")*
>>>
>>
>> That means nothing. I detect qualia from direct experience and that 
>> outranks everything, it even outranks the scientific method; so if qualia 
>> isn't real then nothing is real which would be equivalent to everything 
>> being real which is equivalent to "real" having no meaning because meaning 
>> needs contrast.   
>>  
>>
>>> > *or 2) a deficient state of knowledge with respect to the (pre) 
>>> conditions of consciousness.*
>>>
>>
>> I don't know what that means either. 
>>  
>>
>>> > Seeing as you have no theory of consciousness at all,
>>>
>>
>> Yes I do. My theory is that consciousness is the way data feels when it 
>> is being processed and that is a brute fact, meaning it terminates a chain 
>> of "why is that?" questions.  
>>  
>>
>>> > *statements like "you have no alternative but to..." don't have much 
>>> force. There are plenty of alternatives,*
>>>
>>
>> Name one! I ask once more: in your everyday life, when you're not being 
>> philosophical, you must have some method of determining when something is 
>> conscious. If it's not intelligent behavior, what on earth is it? 
>>
>> > a refusal to engage it as a problem, in spite of the increasingly 
>>> widespread acceptance among scientists that it *is *a real problem, and 
>>> possibly the biggest problem of all in our current state of knowledge
>>
>>
>> I think intelligence implies consciousness but consciousness does not 
>> necessarily imply intelligence, so the problem I want answered is about 
>> how intelligence works, not consciousness.
>>
>> John K Clark  
>>
>>
>>
> One could look at it that way. In terms of biological evolution, what has 
> turned out to be intelligent beings (us!) are also conscious beings. When 
> we started making computers and programming languages and such (inventing a 
> field called Artificial Intelligence), it got a little confusing. Is IBM 
> Watson [ https://en.wikipedia.org/wiki/Watson_(computer) ] "intelligent"? 
> Some might say yes, others, no. There are some AI scientists (or SI - 
> Synthetic Intelligence, to contrast with AI [ 
> https://en.wikipedia.org/wiki/Synthetic_intelligence ] who say to make 
> truly intelligent artifacts they must be conscious.
>
> So the question remains no matter how one parses intelligence and 
> consciousness: How do you make a conscious robot?
>
>
> I'm obviously not sure, but here's an idea of how consciousness might 
> occur based on Jeff Hawkins ideas in his book “On Intelligence”.  I refer 
> to the intuition pump of an AI Mars Rover:
>
>
>
> The sensors of the MR would define the current status, both internal and 
> external.  This goes into a predictor that estimates how the current status 
> will change if there's no change in the current plan.  The prediction from 
> the previous cycle is compared to the new current status.  If there's no 
> significant difference, it's “Ho Hum” and action proceeds as planned.  But 
> if the comparison shows a deviation from expectation, that is something to 
> take note of.  It's noted in long-term memory, which is a searchable 
> database that can be used to learn from.  And it initiates a need to 
> update the plan. So what rises to the level of consciousness is something 
> that is surprising and may need a change of plan.  And if you ask the MR 
> what happened, it will refer to its long-term memory to give an account 
> bas
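
The predict-compare-surprise loop Brent describes could be sketched roughly as
follows. This is only an illustrative toy, not Hawkins's or Brent's actual
design: every name, the one-dimensional state model, and the surprise threshold
are assumptions made up for the example.

```python
# Toy sketch of the Mars Rover "surprise" loop: predict the next status,
# compare it to what the sensors report, and treat a large deviation as
# a surprise that gets logged to long-term memory and forces a replan.
# All names, the state model, and the threshold are illustrative assumptions.

def predict(status, plan):
    # Naive predictor: assume planned velocity simply advances position.
    return {"position": status["position"] + plan["velocity"]}

def run_cycle(status, plan, prediction, memory, threshold=1.0):
    deviation = abs(status["position"] - prediction["position"])
    if deviation > threshold:
        # Surprise: record the event in searchable long-term memory
        # and update the plan (here: stop and await replanning).
        memory.append({"status": status, "expected": prediction})
        plan = {"velocity": 0}
    # Either way, emit a prediction for the next cycle's comparison.
    return plan, predict(status, plan)

memory = []
plan = {"velocity": 2}
prediction = {"position": 2}   # carried over from the previous cycle
status = {"position": 5}       # sensors report an unexpected jump
plan, prediction = run_cycle(status, plan, prediction, memory)
# Deviation is 3 > 1.0, so the event is logged and the plan is reset;
# "Ho Hum" cycles (deviation <= threshold) would leave both untouched.
```

Asking the rover "what happened" then amounts to querying `memory`, which holds only the surprising cycles, matching the idea that only deviations from expectation rise to the level of consciousness.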

Re: The hard problem of matter

2018-10-10 Thread Philip Thrift


On Wednesday, October 10, 2018 at 12:41:04 AM UTC-5, Brent wrote:
>
>
>
> On 10/9/2018 9:18 PM, Philip Thrift wrote:
>
>
>
> On Tuesday, October 9, 2018 at 6:45:55 PM UTC-5, Brent wrote: 
>>
>>
>>
>> On 10/9/2018 11:01 AM, Philip Thrift wrote:
>>
>>
>>> If you reject intelligent behavior as a tool for detecting consciousness 
>>> then how did you determine that? And how can you figure out anything else 
>>> about any consciousness except for your own?  I don't think there is any 
>>> way, I think the only alternative is solipsism. 
>>>
>>
>>
>> That is a good question. I still think that we will have lots of 
>> intelligent robots running around - really smart, can win on Jeopardy!, can 
>> drive cars, can "fake" emotions ... - but we will not consider them 
>> conscious. We can (hopefully) turn them off and destroy them whenever we 
>> want. We do have something like a consciousness test in the case of medical 
>> decisions at end-of-life. So I think a consciousness test will be different 
>> than an intelligence test.
>>
>>
>> Sure.  Garden slugs are conscious at the level of perception, that's how 
>> they find food and mates.  But they're not very intelligent.
>>
>> Brent
>>
>
> Do slugs perceive, or do they just react? Does a slug say to itself, "I 
> like the taste of that"?
>
>
> Is consciousness just the use of language?  Dogs and chimps don't have 
> language either.  Why aren't perception and awareness forms of 
> consciousness?
>
> Brent
>


Some say humans didn't become fully conscious until they had (recursive) 
language.

Yair Neuman, Ophir Nave: Why the brain needs language in order to be 
self-conscious 



Consciousness = Linguisticity+Experientiality   (both), it would seem.

- https://codicalist.wordpress.com/2018/08/09/the-matter-of-consciousness/

- pt
