On Feb 3, 11:22 am, John Clark <johnkcl...@gmail.com> wrote:
> On Fri, Feb 3, 2012 Craig Weinberg <whatsons...@gmail.com> wrote:
>
>
>
> > > An abacus is a computer. Left to its own devices it's just a rectangle
> > of wood and bamboo or whatever.
>
> That is true so although it certainly needs to be huge we can't just make
> our very very big abacuses even bigger and expect them to be intelligent,
> it's not just a hardware problem of wiring together more microchips; we must
> teach (program) the abacus to learn on its own.

Huge abacuses are a really good way to look at this, although it's
pretty much the same as the China Brain. Your position is that if we
made a gigantic abacus the size of our solar system, with a person
manning each sliding bead, there would be a possibility that even
though each person could only slide their bead to the left or right,
and tell others (or be told by others) to slide their beads left or
right, a well-crafted enough sequence of instructions would cause the
abacus itself to begin to have a conscious experience. The abacus
could literally be made to think that it had been born and come to
life as a human being, a turtle, a fictional character...anything we
choose to convert into a sequence we feel reflects the functionality
of these beings will actually cause the abacus to experience being
that thing.

Do you see why I am incredulous about this? I understand that if you
assume comp this seems feasible; after all, we can make CleverBot and
Siri, etc. The problem is that CleverBot and Siri live in our very
human minds. They do not have lives and dream of being Siri II. I
understand that you and others here are convinced that logic would
dictate that we can't prove that Siri 5000 won't feel our feelings and
understand what we understand, but I am crystal clear in my own
understanding that no matter how good the program seems, Siri 5000
will feel exactly the same thing as Siri. Nothing. No more than the
instructions being called out on the scaffolds of the Mega Abacus will
begin to feel something some day. It doesn't work that way. Why? As I
continue to try to explain, awareness is not a function of objects, it
is the symmetrically anomalous counterpart of objects. Experiences
accumulate semantic charge - significance - over time, which, in a
natural circumstance, is reflected in the objective shadow, as
external agendas are reflected in the subjective experience. The
abacus's and the computer's instructions will never be native to them.
The beads will never learn anything. They are only beads.

>That is a very difficult
> task but enormous progress has been made in the last few years; as I said
> before, in 1998 nobody knew how to program even the largest 30 million
> dollar super abacus in the world to perform acts of intelligence that today
> can be done by that $399 iPhone abacus in your pocket.

I know. I've heard. As I say, no version is any closer to awareness of
any kind than the first version.

>
> I admit that it could turn out that humans just aren't smart enough to know
> how to teach a computer to be as smart or smarter than they are,

It's not a matter of being smart enough. You can't turn up into down.
Machines are made of unconsciousness. All machines are unconscious.
That is how we can control them. Consciousness and mechanism are
mutually exclusive by definition and always will be.

> but that
> doesn't mean it won't happen because humans have help, computers
> themselves. In a sense that's already true, a computer program needs to be
> in zeros and ones but nobody could write the Siri program that way, but we
> have computer assemblers and compilers to do that so we can write in a much
> higher level language than zeros and ones.

That would not be necessary if the machine had any capacity to learn.
Like the neurons of our brain, the microprocessors would adapt and
begin to understand natural human language.

> So at a fundamental level no
> human being could write a computer program like Siri and nobody knows how
> it works.

I wouldn't say we don't know how it works. Binary logic is pretty
straightforward.
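
To see what I mean by straightforward, here is a toy sketch (my own
illustration, not any real chip's design): a single binary operation,
NAND, is enough to build every other logic gate, and from those gates
you can compose a one-bit adder, which is the kind of thing all the
"mysterious" machinery bottoms out in.

```python
# Toy illustration: a computer's arithmetic reduces to one simple
# binary operation. NAND is functionally complete, so every other
# gate -- and ultimately an adder -- can be built from it.

def nand(a, b):
    # 1 unless both inputs are 1
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    sum_bit = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return sum_bit, carry_out

# 1 + 1 + carry 1 = binary 11: sum bit 1, carry bit 1
print(full_adder(1, 1, 1))  # (1, 1)
```

Chain thirty-two of those adders together and you have the integer
addition unit of a CPU; nothing more exotic is going on underneath.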

> But programs like that get written nevertheless. And as computers
> get better the tools for writing programs get better and intelligent
> programs even more complex than Siri will get written with even less human
> understanding of their operation. The process builds on itself and thus
> accelerates.

That's the theory. Meanwhile, in reality, we have been using the same
basic interface for computers since 1995.

>
> >  > people in a vegetative state do sometimes have an inner life despite
> > their behavior.
>
> In the course of our conversations you have made declarative statements
> like the above dozens if not hundreds of times but you never seriously ask
> yourself "HOW DO I KNOW THIS?".

There is a lot of anecdotal evidence. People come out of comas.
Recently a study demonstrated it with MRI scans, where a comatose
patient was able to activate areas of their brain associated with
coordinated physical activity in response to the scientists' request
that they imagine playing tennis.

>
> > > we certainly don't owe a trashcan lid any such benefit of the doubt.
>
> Why "certainly", why are you so certain?

Because I understand how communication works. I know that the meaning
of THANK YOU does not radiate out from the plastic, but rather is
understood by a literate person.

> I know why I am but I can't figure
> out why you are. Like you I also think the idea that a plastic trashcan can
> have an inner life is ridiculous but unlike you I can give a clear logical
> reason WHY I think it's ridiculous: a trash can does not behave
> intelligently.

Why doesn't it behave intelligently, though? Why are the computations
in the plastic non-intelligent but the computations inside brain
tissue intelligent? Why hasn't the trash can developed sentience by
now? I have given my clear logical reason above.

>
> > Like a computer, it is manufactured out of materials selected
> > specifically for their stable, uniform, inanimate properties.
>
> Just exactly like human beings that are manufactured out of stable,
> uniform, inanimate materials like amino acids.

I disagree. Organic chemistry is volatile. It reeks. It explodes. It
lives and dies. Besides, human beings may not exist below the cellular
level. Molecules may be too primitive to be described as part of us.

>
> > I understand what you mean though, and yes, our perception of something's
> > behavior is a primary tool to how we think of it, but not the only one.
> > More important is the influence of conventional wisdom in a given society
> > or group.
>
> At one time the conventional wisdom in society was that black people didn't
> have much of an inner life, certainly nothing like that of white people, so
> they could own them and do whatever they wanted to people of a darker hue
> without guilt. Do you really expect Mr. Joe Blow and his conventional
> wisdom can teach us anything about the future of computers?
>
I like how you start out grandstanding against prejudice and
superficial assumptions and end with completely blowing off Mr. Joe
Blow. Yeah, what can he teach us, he has no inner life, certainly
nothing like that of sophisticated scientists. Funny. But no, I was
not trying to endorse conventional wisdom; I was trying to point out
that the logic of 'we can only recognize the qualities of things by
observing their behavior' is a false and incomplete picture of how we
perceive the world. We make sense of things in many different ways,
and those ways vary from person to person, group to group, species to
species, etc...

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
