Coincidental post I wrote yesterday:

It may not be possible to imitate a human mind computationally, because 
awareness may be driven by aesthetic qualities rather than mathematical 
logic alone. The problem, which I call the Presentation Problem, is what 
several outstanding issues in science and philosophy have in common, namely 
the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the 
Binding problem, and the symmetries of mind-body dualism. Underlying all of 
these is the map-territory distinction; the need to recognize the 
difference between presentation and representation.

Because human minds are unusual phenomena in that they are presentations 
which specialize in representation, they have a blind spot when it comes to 
examining themselves. The mind is blind to the non-representational. It 
does not see that it feels, and does not know how it sees. Since its 
thinking is engineered to strip out most direct sensory presentation in 
favor of abstract sense-making representations, it fails to grasp the role 
of presence and aesthetics in what it does. It tends toward overconfidence 
in the theoretical. The mind takes worldly realism for granted on one hand, 
but conflates it with its own experiences as a logic processor on the 
other. It’s a case of the fallacy of the instrument, where the mind’s 
hammer of symbolism sees symbolic nails everywhere it looks. Through this 
intellectual filter, the notion of disembodied algorithms which somehow 
generate subjective experiences and objective bodies (even though 
experiences or bodies would serve no plausible function for purely 
mathematical entities) becomes an almost unavoidably seductive solution.

So appealing is this quantitative underpinning for the Western mind’s 
cosmology, that many people (especially Strong AI enthusiasts) find it easy 
to ignore that the character of mathematics and computation reflect 
precisely the opposite qualities from those which characterize 
consciousness. To act like a machine, robot, or automaton, is not merely an 
alternative personal lifestyle, it is the common style of all unpersons and 
all that is evacuated of feeling. Mathematics is inherently amoral, unreal, 
and intractably self-interested – a windowless universality.

A computer has no aesthetic preference. It makes no difference to a program 
whether its output is displayed on a monitor with millions of colors, or 
buzzing out of a speaker, or streaming as electronic pulses over a wire. This 
is the primary utility of computation. This is why digital information is not 
locked into the physical constraints of location. Since programs don’t deal with 
aesthetics, we can only use the program to format values in such a way that 
corresponds with the expectations of our sense organs. That format of 
course, is alien and arbitrary to the program. It is semantically 
ungrounded data, fictional variables. 

Something like the Mandelbrot set may look profoundly appealing to us when 
it is presented optically, plotted as colorful graphics, but the same 
data set has no interesting qualities when played as audio tones. The 
program generating the data has no desire to see it realized in one form or 
another, no curiosity to see it as pixels or voxels. The program is 
absolutely content with a purely quantitative functionality – with 
algorithms that correspond to nothing except themselves.
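The indifference described above can be made concrete with a small sketch. The Python below (all function names are hypothetical, invented for this illustration) computes Mandelbrot escape counts and then maps the identical numbers either to ASCII "brightness" or to tone frequencies; nothing in the program prefers one rendering over the other.

```python
def mandelbrot_counts(width=40, height=20, max_iter=50):
    """Return a 2-D grid of escape-iteration counts -- just integers."""
    grid = []
    for row in range(height):
        line = []
        for col in range(width):
            c = complex(-2.0 + 3.0 * col / width, -1.0 + 2.0 * row / height)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c
                n += 1
            line.append(n)
        grid.append(line)
    return grid

def to_pixels(grid):
    """Map counts to ASCII shades -- one arbitrary presentation."""
    ramp = " .:-=+*#%@"
    return "\n".join(
        "".join(ramp[min(n * len(ramp) // 51, len(ramp) - 1)] for n in row)
        for row in grid
    )

def to_tones(grid):
    """Map the same counts to audio frequencies in Hz -- another arbitrary one."""
    return [220 + 10 * n for row in grid for n in row]

data = mandelbrot_counts()
print(to_pixels(data))     # visually striking to us
print(to_tones(data)[:8])  # the identical numbers, as a flat tone sequence
```

To the program, `to_pixels` and `to_tones` are equivalent arithmetic mappings over the same integers; any aesthetic difference between the picture and the tones exists only for the observer.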

In order for the generic values of a program to be interpreted 
experientially, they must first be re-enacted through controllable physical 
functions. It must be perfectly clear that this re-enactment is not a 
‘translation’ or a ‘porting’ of data to a machine, rather it is more like a 
theatrical adaptation from a script. The program works because the physical 
mechanisms have been carefully selected and manufactured to match the 
specifications of the program. The program itself is utterly impotent as 
far as manifesting itself in any physical or experiential way. The program 
is a menu, not a meal. Physics provides the restaurant and food, 
subjectivity provides the patrons, chef, and hunger. It is the physical 
interactions which are interpreted by the user of the machine, and it is 
the user alone who cares what it looks like, sounds like, tastes like etc. 
An algorithm can comment on what is defined as being liked, but it cannot 
like anything itself, nor can it understand what anything is like.

If I’m right, all natural phenomena have a public-facing mechanistic range 
and a private-facing animistic range. An algorithm bridges the gap between 
public-facing, space-time extended mechanisms, but it has no access to the 
private-facing aesthetic experiences which vary from subject to subject. By 
definition, an algorithm represents a process generically, but how that 
process is interpreted is inherently proprietary.


On Friday, August 16, 2013 3:21:11 PM UTC-4, cdemorsella wrote:
> Telmo ~ I agree, all the Turing test does is indicate that a computer, 
> operating independently -- that is without a human operator supplying any 
> answers during the course of the test -- can fool a human (on average) that 
> they are dialoging with another person and not with a computer. While this 
> is an important milestone in AI research -- it is just a stand-in for any 
> actual potential real intelligence or awareness. 
> Increasingly computers are not programmed in the sense of being provided 
> with a deterministic instruction set - no matter how complex and deep. 
> Increasingly computer code is being put through its own Darwinian process 
> using techniques such as genetic algorithms, automata etc. Computers are in 
> the process of being turned into self learning code generation engines that 
> increasingly are able to write their own operational code.
> An AI entity would probably be able to easily pass the Turing test - not 
> that hard of a challenge after all for an entity with almost immediate 
> access to a huge cultural memory it can contain. However it may not care 
> that much to try.
> Another study -- I think by Stanford researchers, but I don't have the 
> link handy though -- has found that the world's top super computers 
> (several of which they were able to test) are currently scoring around the 
> same as an average human four year old. The scores were very uneven across 
> various areas of intelligence that standardized IQ tests for four year 
> olds try to measure, as would be expected (after all a super computer is 
> not a four year old person). 
> Personally I think that AI will let us know when it has arisen by whatever 
> means it chooses to let us know. That it will know itself what it wants to 
> do, and that this knowing for itself and acting for itself will be the 
> hallmark event that AI has arrived on the scene.
> Cheers,
> -Chris D
>    *From:* Telmo Menezes
> *Sent:* Friday, August 16, 2013 8:04 AM
> *Subject:* Re: When will a computer pass the Turing Test?
> On Fri, Aug 16, 2013 at 3:42 PM, John Clark wrote:
> > On Wed, Aug 14, 2013 at 7:09 PM, Chris de Morsella wrote:
> >> > When will a computer pass the Turing Test? Are we getting close? Here is
> >> > what the CEO of Google says: “Many people in AI believe that we’re close to
> >> > [a computer passing the Turing Test] within the next five years,” said Eric
> >> > Schmidt, Executive Chairman, Google, speaking at The Aspen Institute on July
> >> > 16, 2013.
> >
> > It could be. Five years ago I would have said we were a very long way from
> > any computer passing the Turing Test, but then I saw Watson and its
> > incredible performance on Jeopardy. And once a true AI comes into existence
> > it will turn ALL scholarly predictions about what the future will be like
> > into pure nonsense, except for the prediction that we can't make predictions
> > that are worth a damn after that point.
> I don't really find the Turing Test that meaningful, to be honest. My
> main problem with it is that it is a test on our ability to build a
> machine that deceives humans into believing it is another human. This
> will always be a digital Frankenstein because it will not be the
> outcome of the same evolutionary context that we are. So it will have
> to pretend to care about things that it is not reasonable for it to
> care about.
> I find it a much more worthwhile endeavour to create a machine that
> can understand what we mean like a human does, without the need to
> convince us that it has human emotions and so on. This machine would
> actually be _more_ useful and _more_ interesting by virtue of not
> passing the Turing test.
> Telmo.
> >  John K Clark
> >

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
