"upload the human brain?????"

I suppose (and hope) you are talking about the wider meaning of "brain", not
the physiological tissue (flesh) figment that today's medical science tackles
in our crania: THAT extended brain, the one ready to monitor (report?)
unexpected (even unexpectable) mental functions, as I wrote earlier, e.g. the
difference in meaning between "I missed you yesterday" and "I hate broccoli".
Not just milliamps and tissue encephalograms.
We know so little about our (extendable?) mental functions, and every second
may bring novelty into them, so where would you draw the line for the
'upload'? At yesterday's inventory?

Then again, your statement
"...I don't accept that computers cannot have the same qualia as brains..."
makes sense to me only if we postulate that THOSE computers MUST HAVE the
same qualia: unknown ones, undetected ones, but ALL OF THEM.
I find this condition beyond reason.

Or would you restrict our science to yesterday?

John M

On Sat, Feb 5, 2011 at 4:19 AM, Stathis Papaioannou <> wrote:

> On Sat, Feb 5, 2011 at 12:27 PM, Colin Hales <> wrote:
> >
> > I think perhaps the key to this can be seen in your requirement...
> >
> > "Doing this is equivalent to constructing a human level AI, since the
> > simulation could be given information and would respond just as a
> > human would given the same information."
> >
> > I would say this is not a circumstance that exemplifies human level
> > intellect. Consider a human encounter with something totally unknown
> > to both human and AI. Who is there to provide 'information'? If the
> > machine is like a human it shouldn't need someone there to spoon-feed
> > it answers. We let the AGI loose to encounter something neither human
> > nor AGI has encountered before. That is a real AGI. The AGI can't "be
> > given" the answers. You may be able to provide a software model of how
> > to handle novelty. This requires a designer to say, in software,
> > "everything you don't know is to be known like this ....". This,
> > however, is not AGI (human). It is merely AI. It may suffice for a
> > planetary rover with a roughly known domain of unknowns of a certain
> > kind. But when it encounters a cylon that rips its widgets off it
> > won't be able to characterize it like a human does. Such behaviour is
> > not an instance of a human encounter with the unknown.
> I am considering a special type of AI, an upload of a human brain. A
> "bottom up" AI, if you like. SPICE allows you to simulate a complex
> circuit by combining models of its simpler components. You can construct a
> simulated amplifier out of simulated transistors, resistors,
> capacitors etc., then input a simulated signal, and observe the
> simulated output. You construct an uploaded brain out of simulated
> neurons, input a simulated signal to simulated sense organs, and
> observe the simulated output to simulated muscles. The input signal
> could be a question and the output signal could be verbal output in
> response to the question. If the SPICE model is a good one its output
> would be the same as the output of a real circuit given the same
> input. If the brain upload model is a good one its response to
> questions would be the same as the responses of a biological brain.
> For example, you could tell it the result of experiments, it would
> come up with a hypothesis, propose further experiments for you to do,
> then modify the hypothesis depending on the result of those
> experiments. There is no specific novelty-handling model: the upload
> is merely an accurate model of brain behaviour, and the
> novelty-handling emerges from this. The analogy is that the SPICE
> software does not have a specific model for what to do when the input is
> a sine wave, what to do when the input is a square wave, and so on, but
> rather the appropriate output is produced for any input given just the
> models of the components and their connections.
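>
> To make that concrete, something like this toy sketch in Python (an RC
> low-pass filter integrated step by step from its component equation;
> the component values and test signals are arbitrary assumptions, not
> part of any real upload scheme) shows the idea:
>
> import math
>
> # SPICE-style toy: an RC low-pass filter simulated purely from its
> # component equation dVc/dt = (Vin - Vc) / (R*C). Values illustrative.
> R = 1000.0   # resistance, ohms
> C = 1e-6     # capacitance, farads
> dt = 1e-6    # time step, seconds
>
> def simulate(v_in):
>     """Forward-Euler integration of the capacitor voltage."""
>     v_c, out = 0.0, []
>     for v in v_in:
>         v_c += dt * (v - v_c) / (R * C)
>         out.append(v_c)
>     return out
>
> n = 20000
> t = [i * dt for i in range(n)]
> # Two very different inputs; the simulator has no special case for either.
> sine = [math.sin(2 * math.pi * 50 * ti) for ti in t]
> square = [1.0 if s >= 0 else -1.0 for s in sine]
>
> for name, sig in (("sine", sine), ("square", square)):
>     print(name, "peak output:", round(max(simulate(sig)[n // 2:]), 3))
>
> Nothing in the code mentions sine or square waves, yet each waveform is
> filtered appropriately: the behaviour emerges from the component model
> alone, which is the claim being made for an upload built from simulated
> neurons.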
> > Humans literally encounter the unknown in our qualia - an intracranial
> > phenomenon. Qualia are the observation. We don't encounter the unknown
> > in the single or collective behaviour of our peripheral nerve
> > activity. Instead we get a unified assembly of perceptual fields
> > erected intra-cranially from the peripheral feeds, within which the
> > actual distal world is faithfully represented well enough to do
> > science.
> >
> > These perceptual fields are not always perfect. The perceptual fields
> > can be fooled. You can perhaps say that a software-black-box-scientist
> > could guess (Bayesian stabs in the dark). But those stabs in the dark
> > are guesses at (a) how the peripheral signalling measurement activity
> > will behave, or perhaps (b) the contents of a human-model-derived
> > software representation of the external world. Neither (a) nor (b) can
> > be guaranteed identical to the human qualia version of the external
> > distal world in a situation of encounter with radical novelty (that a
> > human AI designer has never encountered). The observational needs of a
> > scientist are a rather useful way to cut through to the core of these
> > issues.
> >
> > The existence or nature of the 'qualia' that give rise (by grounding
> > in observation) to empirical laws is not predicted by any of those
> > empirical laws. However, the CONTENTS of qualia _are_ predicted. The
> > system presupposes their existence in an observer. The only way to get
> > qualia, then, is to do what a brain actually does, not what a model
> > (empirical laws) of the brain does. Even if we 100% locate and model
> > the atomic-level correlates of consciousness (qualia), that MODEL of
> > the correlates is NOT the thing that makes qualia. It's just a model
> > of it. We are not a model of a thing. We are, literally, something
> > else: the actual natural world that merely appears to behave (in our
> > qualia) like a model. In the case of the brilliant model
> > "electromagnetism", our brains have electromagnetism predicted by the
> > model. My PhD thesis is all about it. Humans get to BE whatever it is
> > that behaves electromagnetically. Conflating these two things (BEing
> > and APPEARing) is the bottom line of the "COMP is true" belief.
> >
> > Note that none of this discussion need even broach the issue of the
> > natural world AS computation (whether or not of the kind of
> > computation happening in a digital computer made of the natural
> > world). This latter issue might be useful to understand the 'why' of
> > qualia origins. But it changes nothing in a decision process leading
> > to real AGI.
> >
> > BTW my obsession with AGI originates in the practical need to make a
> > 'generic AI-making machine'. One AGI makes all domain-specific AI.
> I don't accept that computers cannot have the same qualia as brains,
> but even if that is true, it is possible to do science without direct
> observation. A written description of the results of an experiment is
> enough.
> --
> Stathis Papaioannou

