On Wed, Oct 10, 2018 at 7:12 PM Pierz <[email protected]> wrote:

> *>a lot of what passes for intelligent in the domain of machines is in
> fact dumb as dogshit.*
>

And so after being outsmarted on every occasion, the last surviving human
turned to the Jupiter Brain just before he entered oblivion and said "I
still think you're dumb as dogshit". Apparently dogshit is powerful stuff;
it can engineer the Galaxy.


> *>I know this because it's the field I work in. Computers are literal to a
> mind-numbingly stupid degree. People who expect robots to take over my job
> (software developer) any time soon have no idea what they are talking
> about.*
>

Today IBM's Watson can make better diagnoses of illness than most human
doctors; do you really think your profession will be immune? Machines have
already taken over many tasks that were once done by human programmers.
Imagine if there were no higher-level languages and you had to program
everything in low-level assembly language, or even worse, binary machine
code! You had to program the 1946 ENIAC computer at a huge patch panel;
modern computers are millions of times more complex than ENIAC, and a patch
panel for them would be the size of the Himalayas, but modern computers
have no patch panel at all because the machine does all that for you
automatically.

>*We don't know for shit what consciousness is.*
>

Nobody has a definition of consciousness, but you know what it is because
you have something better: an example of it. If you didn't, you wouldn't
know that you don't know what the word means, because there would be no way
for you to know anything.

>> My theory is that consciousness is the way data feels when it is being
>> processed and that is a brute fact, meaning it terminates a chain of "why
>> is that?" questions.
>>
>
> > *Great theory! I love a theory that says, "because". I have sooo many
> questions. Like what relations in the data correspond to what qualia.*
>

Rather than an explanation I will give an example of a qualia-generating
program, because I like concrete examples. For the pain qualia, write a
subroutine such that the closer the number in the X register comes to the
integer P, the more computational resources will be devoted to changing
that number; and if it ever actually equals P, the program should stop
doing everything else and do nothing but try to change that number to
something far enough away from P that it is no longer an urgent matter and
the program can again do things that have nothing to do with P.
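That subroutine can be sketched in a few lines of Python. This is only an
illustration of the description above, not a definitive implementation:
the particular values of P, the step size, and the "no longer urgent"
threshold (SAFE_DISTANCE) are arbitrary choices of mine.

```python
# Hypothetical sketch of the pain subroutine described above.
# x models the X register; P is the dangerous integer. All constants
# are illustrative assumptions, not part of the original description.

P = 100             # the integer the program must keep away from
SAFE_DISTANCE = 10  # how far from P counts as "no longer urgent"

def pain_step(x, other_work):
    """Advance one cycle: the closer x is to P, the more effort goes
    into moving x away from P and the less into unrelated work."""
    distance = abs(x - P)
    if distance == 0:
        # Total emergency: stop everything else and do nothing but
        # push x away until the situation is no longer urgent.
        while abs(x - P) < SAFE_DISTANCE:
            x += 1
        return x
    # Resources devoted to changing x grow as the distance shrinks.
    urgency = max(0, SAFE_DISTANCE - distance)
    x += urgency if x >= P else -urgency
    # Whatever resources remain go to things unrelated to P.
    if urgency < SAFE_DISTANCE:
        other_work()
    return x
```

When x is far from P the subroutine spends its time on `other_work`;
as x approaches P it pushes harder and harder in the opposite
direction, and at x == P it does nothing else at all until the danger
has passed.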


> *>Now I know that you may claim that any better theory [of consciousness]
> is impossible in principle.*
>

Yes, absolutely impossible.


> >*I think it's technically extraordinarily difficult but not impossible
> in principle. We would need two preconditions: the use of conscious reports
> of qualia as an accepted datum in science,*
>

A person or a computer giving a report is observable behavior, so you're
just stating my axiom that intelligence implies consciousness using
different words.


> > *and highly sophisticated technology to interface with the brain, a
> known conscious entity with the ability to report its experiences.*
>

You've stacked the deck: you're assuming, for no logical reason I can see,
that another person is a "known conscious entity" but a computer is not.
Right at the beginning you're assuming the very thing you're trying to
prove, namely that computers are not conscious.


> *> My "method of determining if something is conscious" is the same as
> most people who don't believe their smart phones are having experiences.*
>

I don't understand why people assume that producing emotion is more
difficult than producing intelligence. Some of our most powerful emotions,
like pleasure, pain, and lust, come from the oldest parts of our brain,
which evolved about 500 million years ago. It is our grossly enlarged
neocortex that makes the human brain so unusual, and it is very recent: it
only started to get large about 3 million years ago and only started to
get ridiculously large less than one million years ago. It deals in
deliberation, spatial perception, speaking, reading, writing, and
mathematics; in other words, everything we're proud of and everything that
makes humans so very different from other animals. The only new emotion we
got out of it was worry, probably because the neocortex is also the place
where we plan for the future. So if evolution came up with feeling first
and high-level intelligence only much later, I don't see why the opposite
would be true for our computers.


> > *It's being a biological organism with a nervous system, though again,
> I'm agnostic on organisms like trees. When you're not being a philosopher I
> bet that's your real criterion too! You're not worrying about killing your
> smartphone when you trash it for the next model.*
>

Actually I have an emotional attachment to my obsolete devices, so I do
feel a twinge of regret when I trash them, but I do so nevertheless; and
sometimes I feel a twinge of regret when I eat meat, but it's not strong
enough for me to become a vegetarian.


> > *Things are as they are. All people are conscious, I assume.*
>

I feel pretty confident in saying you do *NOT* assume all people are
conscious all the time: not when they're sleeping, under anesthesia, or
dead. That is to say, you do not assume they are conscious when they are
not behaving intelligently.


> *>Probably all animals. Possibly plants and rocks and stars and atoms, in
> some very different way from us. *
>

If everything is conscious then the word doesn't mean much, and I don't
have to explain why some things are conscious and some things are not.

>> I think intelligence implies consciousness but consciousness does not
>> necessarily imply intelligence, so the problem I want answered is about
>> how intelligence works, not consciousness.
>>
>
> *> Wait up! "consciousness does not necessarily imply intelligence".*
>

Yes.

> So you are positing conscious unintelligent beings?
>

Yes; for example, a snail can clearly display fear, and I have no evidence
its fear is less intense than my own, but I do have evidence the snail is
not as smart as I am. I hope that doesn't make me sound conceited.


> *>According to your own argument the consciousness of these entities can
> never be proved.*
>

Yes. Gödel told us that regardless of what axioms you start with there
will be some things that are true but have no proof, and of course if you
start with the wrong axioms some things will have a proof but will not be
true.

> *Yet you claim to believe in them *
>

Yes, without my axiom I can't prove you are conscious, but I nevertheless
believe that you are. It's a good thing too, because I would be unable to
function if I really believed I was the only conscious being in the
universe; hence my affection for my axiom. And although I can't prove my
axiom, I can point to powerful evidence that it is *almost* certainly
true: Darwin's theory of evolution and the fact that it managed to produce
at least one conscious being.


> >*despite the lack of a test. *
>

I do have a test for consciousness although it needs an axiom which like
all axioms has no proof.


> > *And at the same time you assert that if I reject the intelligent
> behaviour test for consciousness, I am "forced" into solipsism.*
>

Yes. But I am certain that in everyday life NOBODY rejects the behavioral
test for consciousness; even professional philosophers don't reject it,
except when they're teaching a class of freshmen and trying to sound
provocative.


> I also believe in consciousness that exists despite the lack of a
> definitive test.
>

Me too, if I didn't I wouldn't be able to "believe" in anything.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.