On Feb 5, 1:27 am, Colin Hales <c.ha...@pgrad.unimelb.edu.au> wrote:
> Stathis Papaioannou wrote:
> > On Fri, Feb 4, 2011 at 12:05 PM, Colin Hales
> > <c.ha...@pgrad.unimelb.edu.au> wrote:
>
> >>> Can the behaviour of the neurons including the electric fields be
> >>> simulated? For example, is it possible to model what will happen in
> >>> the brain (and what output will ultimately go to the muscles via
> >>> peripheral nerves) if a particular sequence of photons hits the
> >>> retina? If that is a theoretical impossibility then where exactly is
> >>> the non-computable physics, and what evidence do you have that it is
> >>> non-computable?
>
> >> Lots of aspects to your questions.... and I'll try and answer Bruno at the
> >> same time.
>
> >> 1) I am in the process of upgrading neural modelling to include the
> >> fields, in the traditional sense of simulating the fields. The way to
> >> think of it is that the little capacitor in the Hodgkin-Huxley
> >> equivalent circuit is about to get a whole new role.
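
For concreteness about the circuit being referred to: in the textbook
Hodgkin-Huxley equivalent circuit the membrane is a capacitor in parallel
with the ionic conductances, so the capacitive term C_m*dV/dt is the point
where any field-level extension would have to plug in. Below is a minimal
single-compartment sketch in Python, using standard textbook parameters and
plain Euler stepping, purely to illustrate the classical model (it is not
Colin's extended model):

import math

# Membrane and channel constants (standard textbook values)
C_m = 1.0                                  # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials, mV

def rates(V):
    # Standard HH opening/closing rates for the gating variables m, h, n.
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    # Euler-integrate C_m*dV/dt = I_ext - I_ion; returns the voltage trace in mV.
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(T / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m    # the lumped capacitor does the integrating
        trace.append(V)
    return trace

print(max(simulate()))   # with 10 uA/cm^2 injected the model spikes, peaking above 0 mV

In this classical form the capacitor is just a lumped constant per
compartment; the "whole new role" above presumably involves going beyond
treating it as such.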
>
> > Great! That is another step towards simulating brains.
>
> >> 2) Having done that, one can do simulations of single units, multiple
> >> units, populations, etc. You may be able to extract something verifiable
> >> in the wet lab.
>
> >> 3) However, I would hold that no matter how comprehensive the models, no
> >> matter how many neurons ... even the whole brain and the peripheral
> >> nerves...they will NOT behave like the real thing in the sense that such a
> >> brain model cannot ever 'be' a mind. The reason is that we 'BE' the fields.
> >> We do not 'BE' a description of the fields. The information delivered by
> >> 'BE'ing the field acts in addition to that described by the
> >> 3rd-person-validated system of classical partial differential equations 
> >> that
> >> are Maxwell's equations.
>
> > I understand that this is your position but I would like you to
> > consider a poor, dumb engineer who neither knows nor cares about
> > philosophy of mind. All he cares about is making an accurate model
> > which will predict the pattern of motor neuron firings for a human
> > brain given a certain initial state. Doing this is equivalent to
> > constructing a human level AI, since the simulation could be given
> > information and would respond just as a human would given the same
> > information. Now, I take it that you don't believe that such
> > predictions can be made using a mathematical model. Is that right?
>
> I am also a poor dumb engineer (one who has examined far too much
> philosophy of mind, enough to be quite irritated by it :-). I started as
> an engineer with the 'black box' idea and eventually found enough
> evidence in human behaviour (specifically scientific behaviour) to doubt
> that we can make an AGI that can do science like us when the black box is
> full of a computer running software. I use the scientist as my target
> because a scientist's behaviour is testable. I conclude that I am more
> likely to succeed if the 'black box' includes more than mere software
> models of a brain in it.
>
>  I think perhaps the key to this can be seen in your requirement...
>
> " Doing this is equivalent to constructing a human level AI, since the 
> simulation could be given information and would respond just as a human would 
> given the same information."
>
I would say this is not a circumstance that exemplifies human-level
intellect. Consider an encounter with something totally unknown to both
human and AI. Who is there to provide 'information'? If the machine is like
a human it shouldn't need someone there to spoon-feed it answers. We let the
AGI loose to encounter something neither human nor AGI has encountered
before. That is a real AGI. The AGI can't "be given" the answers. You may be
able to provide a software model of how to handle novelty. This requires a
designer to say, in software, "everything you don't know is to be known like
this ....". This, however, is not AGI (human). It is merely AI.

Why not? Because humans don't have a finite, limited list of
meta-methods for learning from experience and solving problems? You
don't know that. It is an imponderable. If there are things we can
never know, we'll never know that... it would be an unknown unknown.

> It may suffice for a planetary rover with a roughly known domain of unknowns
> of a certain kind. But when it encounters a cylon that rips its widgets off
> it won't be able to characterize it like a human does. Such behaviour is not
> an instance of a human encounter with the unknown.
                           
That would depend on how sophisticated it is.

> Humans literally encounter the unknown in our qualia -

How can we not know our own qualia?

> an intracranial phenomenon. Qualia are the observation. We don't encounter
> the unknown in the single or collective behaviour of our peripheral nerve
> activity. Instead we get a unified assembly of perceptual fields erected
> intracranially from the peripheral feeds, within which the actual distal
> world is faithfully represented well enough to do science.
>
> These perceptual fields are not always perfect. The perceptual fields can be
> fooled. You can perhaps say that a software-black-box scientist could guess
> (Bayesian stabs in the dark). But those stabs in the dark are guesses at (a)
> how the peripheral signalling measurement activity will behave, or perhaps
> (b) the contents of a human-model-derived software representation of the
> external world. Neither (a) nor (b) can be guaranteed identical to the
> human qualia version of the external distal world in a situation of
> encounter with radical novelty (one that a human AI designer has never
> encountered).

That an AI will see things differently does not guarantee that its
problem-solving and novelty-handling abilities will be inferior.
Different does not mean worse.

> The observational needs of a scientist are a rather useful way to cut
> through to the core of these issues.
>
> The existence or nature of the 'qualia' that give rise to empirical laws (by
> grounding them in observation) is not predicted by any of those empirical laws.

> However, the CONTENTS of qualia _are_ predicted. The system presupposes their
> existence in an observer. The only way to get qualia, then, is to do what a
> brain actually does, not what a model (empirical laws) of the brain does.

Why would an AI need the "look and feel" of qualia, rather than the
content? If humans worked the way Daniel Dennett thinks, so that when
you taste a delicious red wine the information "you are tasting a
delicious red wine" simply appears in your brain, that would still be
enough to form a basis for theory-building. It's just that humans
don't work that way.

> Even if we 100% locate and model the atomic-level correlates of
> consciousness (qualia), that MODEL of the correlates is NOT the thing that
> makes qualia. It's just a model of it. We are not a model of a thing. We
> are, literally, something else: the actual natural world that merely appears
> to behave (in our qualia) like a model. In the case of the brilliant model
> "electromagnetism", our brains have electromagnetism predicted by the model.
> My PhD thesis is all about it. Humans get to BE whatever it is that behaves
> electromagnetically. Conflating these two things (BEing and APPEARing) is
> the bottom line of the 'COMP is true' belief.

Whatever. That an AI wouldn't have qualia does not mean it would be
hindered (well, an artificial artist might be hindered, but it is
difficult to see why an artificial scientist would be).

> Note that none of this discussion need even broach the issue of the natural
> world AS computation (whether or not of the kind of computation happening in
> a digital computer made of the natural world). This latter issue might be
> useful for understanding the 'why' of qualia origins. But it changes nothing
> in a decision process leading to real AGI.
>
> BTW my obsession with AGI originates in the practical need to make a 'generic
> AI-making machine'. One AGI makes all domain-specific AI.
>
> cheers
> colin
