On Oct 5, 11:54 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/10/5 Craig Weinberg <whatsons...@gmail.com>
> > On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:
> > > No, they are not saying that. They are saying that a model of the brain
> > > fed with the same inputs as a real brain will act as the real brain... if
> > > it was not the case, the model would be wrong, so you could not label it
> > > as a model of the brain.
> > That would require that the model of the brain be closer than
> > genetically identical, since identical twins and conjoined twins do
> > not always respond the same way to the same inputs.
> They aren't in the same state.

That's what I'm saying. If copies at the genetic level do not produce
the same states, then what suggests to us that anything could produce
the same state?

> > That may not be
> > possible, since the epigenetic variation and developmental influences
> > may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
> > Cool sci-fi, but I don't think we will ever have to worry about
> > considering it as a real possibility. We know nothing about what the
> > substitution level of the 'same inputs' would be either. Can you say
> > that making a brain of a 10-year-old would not require 10 years of
> > sequential neural imprinting, or that the imprinting would be any less
> > complex to develop than the world itself?
> > > They never said they could know which inputs you could have, and they
> > > don't have to. They just have to know the transition rule
> > > (biochemical/physical) of each neuron, and as the brain respects
> > > physics, so does the model, and so it will react the same way.
> > Reacting is not experiencing though. A picture of a brain can react
> > like a brain, but it doesn't mean there is an experiential correlate
> > there. Just because the picture is 3D and has some computation behind
> > it instead of just a recording, why would that make it suddenly have
> > an experience?
> Because if you ask it something (feed input) you'll get an answer which
> would be the same as a real person... you can't ask anything to a recording.

But you can ask a recording something. (Please stay on the line,
your call is important to us... For technical support please say the
name of the product or press one...)

If I ask a ventriloquist dummy a question I will get an answer that
would be the same as a real person's too. The computation is nothing but
recordings strung together with a lot of IF-THEN logic to
synchronize the output with the input. It's correlation, not
causation. The computations aren't understanding any questions or
answers; they are just matching pre-selected criteria against an
a-signifying database. You can't mistake a player piano for a human
pianist just because the end result is the same notes.
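To make the point concrete, here is a minimal sketch (my own illustration, not anything from the thread or from a real IVR system) of the kind of IF-THEN matching being described: canned responses selected by pre-set keyword criteria, with no understanding of the question itself. The keywords and reply strings are invented for the example.

```python
# Hypothetical example: canned replies keyed by pre-selected criteria.
CANNED_RESPONSES = {
    "support": "For technical support, please say the name of the product.",
    "billing": "Please hold, a billing agent will be with you shortly.",
}

def answer(utterance: str) -> str:
    """Return a pre-recorded reply whose keyword appears in the input.

    Pure correlation: the substring 'support' triggers the support
    recording whether or not anything is understood."""
    for keyword, recording in CANNED_RESPONSES.items():
        if keyword in utterance.lower():   # IF the input matches a criterion...
            return recording               # ...THEN play the matching recording
    return "Your call is important to us. Please stay on the line."
```

Feed it input and you get an answer, just as with the phone menu above, yet nothing in it relates the words to any experience.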

> > > You make the same mistake with your TV pixel analogy. If I know all the
> > > transition rules of *a pixel* according to input... I can build a model
> > > of a TV that will *exactly* display the same thing as the real TV for
> > > the same inputs, without knowing anything about movies/shows/whatever...
> > > I don't care about movies at that level. They never said that they would
> > > explain/predict the input to the TV, just replicate the TV.
> > You have to care about the movies at that level because that's what
> > consciousness is in the metaphor. If you don't have an experience of
> > watching a movie, then you just have an a-signifying non-pattern of
> > unrelated pixels. You need a perceiver, an audience, to turn the image
> > into something that makes sense. It's like saying that you could write
> > a piece of software that could be used as a replacement for a monitor.
> > It doesn't matter if you have a video card in the computer and drivers
> > to run it, without the actual hardware screen plugged into it there is
> > no way for us to see it. A computer does not come with its own screen
> > built into the interior of its microprocessors.
> But a human does... what a magical feature don't you think ?

It's a helluva feature, definitely. I don't think it has to be magical,
personally, but it definitely makes us different from a machine based
solely on physical function.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.