Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi,

STATHIS
Your argument that only consciousness can give rise to technology loses
validity if you include "must be produced by a conscious being" as part of
the definition of technology.

COLIN
There's an obvious circularity in the above sentence, and it is the same
old circularity that endlessly haunts discussions like this (see the
dialogue with Russell).

In dealing with the thread

Re: How would a computer know if it were conscious?

my proposition was that successful _novel_ technology

i.e. an entity comprised of matter with a function not previously observed,
and that resulted from new - as in hitherto unknown - knowledge of the
natural world

can only result when sourced through agency inclusive of a phenomenal
consciousness (specifically and currently only that aspect of human
brain function I have called 'cortical qualia'). Without the qualia,
generated based on literal connection with the world outside the agent,
the novelty upon which the new knowledge was based would be invisible.

My proposition was that if the machine can do the science on exquisite
novelty that subsequently is in the causal ancestry of novel technology,
then that machine must include phenomenal scenes (qualia) that depict the
external world.

Scientists and science are the way to attain an objective scientific
position on subjective experience - one that is just as valid as any other
scientific position AND that a machine could judge itself by. If the
machine is willing to bet its existence on the novel technology's ability
to function when the machine is not there doing what it thinks is
'observing it'... and it survives - then it can call itself conscious.
Humans do that.

But the machines have another option. They can physically battle it out
against humans. The humans will blitz machines without phenomenal scenes
every time and the machines without them won't even know it because they
never knew they were in a fight to start with. They wouldn't be able to
test a hypothesis that they were even in a fight.

and then this looks all circular again, doesn't it? This circularity is
the predictable result... see below...


STATHIS
>>> Well, why does your eye generate visual qualia and not your big toe?
>>> It's because the big toe lacks the necessary machinery.

COLIN
>> I am afraid you have your physiology mixed up. The eye does NOT
>> generate visual qualia. Your visual cortex generates them based on
>> measurements in the eye. The qualia are manufactured and simultaneously
>> projected so as to appear to come from the eye (actually somewhere medial
>> to them). It's how you have 90-degree-plus peripheral vision. The same
>> visual qualia can be generated without an eye (hallucination/dream). Some
>> blind (no functioning retina) people have a visual field for numbers.
>> Other cross-modal mix-ups can occur in synesthesia (you can hear
>> colours, taste words). You can have a "phantom big toe" without having any
>> big toe at all... just because the cortex is still there making the
>> qualia. If you swapped the sensory nerves in two fingers, the motor cortex
>> would drive finger A and it would feel like finger B moved while you would
>> see finger A move. The sensation is in your head, not the periphery. It's
>> merely projected at the periphery.

STATHIS
> Of course all that is true, but it doesn't explain why neurons in the
> cortex are the ones giving rise to qualia rather than other neurons or
> indeed peripheral sense organs.

COLIN
Was that what you were after?

hmmm... firstly, didactic mode
=
Qualia are not about 'knowledge'. Any old piece of junk can symbolically
encode knowledge. Qualia, however, optimally serve _learning_ = _change_
in knowledge, but more specifically change in knowledge about the world
OUTSIDE the agent. Mathematically: if KNOWLEDGE(t) is what we know at time
t, then qualia give us an optimal (survivable):

   d(KNOWLEDGE(t))
   ---------------
         dt

where KNOWLEDGE(t) is all about the world outside the agent. Without
qualia you have the ultimate in circularity - what you know must be based
on what you know + sensory signals devoid of qualia and only interpretable
by your existing knowledge. Sensory signals are not uniquely related to
the external natural world's behaviour (laws of electromagnetism: the
Laplace/Poisson equations) and are intrinsically devoid of qualia
(physiological fact). Hence the science of sensory signals (capturing
regularity in them) is NOT the science of the external natural world in
any way that exposes novelty in the external natural world = a recipe for
evolutionary short-livedness.
=
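The "not uniquely related" point can be made concrete with a small sketch (mine, not Colin's; the charge layouts are invented for the demo): a point sensor that reports only a summed potential q/r cannot distinguish two different external worlds that happen to sum to the same value.

```python
# Illustrative sketch: a point sensor that measures only the summed
# potential q / r from external charges.  Two different worlds produce
# the same reading, so the reading alone cannot expose which world --
# or any novelty in it -- the agent actually faces.

def sensor_reading(charges):
    """Potential at the sensor: sum of q / r over all external charges."""
    return sum(q / r for q, r in charges)

# World A: one charge of 2 units at distance 2.
world_a = [(2.0, 2.0)]
# World B: two charges of 1 unit, each at distance 2.
world_b = [(1.0, 2.0), (1.0, 2.0)]

reading_a = sensor_reading(world_a)
reading_b = sensor_reading(world_b)

print(reading_a, reading_b)   # identical readings: 1.0 1.0
print(world_a == world_b)     # but different worlds: False
```

Any regularity the agent extracts from such readings is a science of the readings, not of the worlds behind them - which is the circularity the paragraph above describes.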


Now... as to

> Of course all that is true, but it doesn't explain why neurons in the
> cortex are the ones giving rise to qualia rather than other neurons or
> indeed peripheral sense organs.

Your whole concept of explanation is the cause of the problem! Objects of
the sense impressions (contents of consciousness) cannot predict the
existence of the sense impressions.

Re: How would a computer know if it were conscious?

2007-06-13 Thread Russell Standish

On Thu, Jun 14, 2007 at 12:47:58PM +1000, Colin Hales wrote:
> RUSSELL
> > What sort of misconstruals do you mean? I'm interested...
> > 'organisational complexity' does not capture the concept I'm after.
> 
> COLIN
> 1) Those associated with religious 'creation' myths - the creativity
> ascribed to an omniscient/omnipotent entity.

It still seems like we're talking about the same thing. It's just that
in the myth case there is no explanation for the creativity; it is
merely asserted at the start. I have little interest in myths, but I
recognise that the omniscient being in those stories is being creative
in exactly the same way as evolution is being creative at producing new
species.

> 2) The creativity ascribed to the act of procreation.

Well I admit that pornographers are a pretty creative bunch, but what
is so creative about reproducing?

> 3) The pseudo-magical aspects of human creativity (the scientific ah-ha
> moment and the artistic gestalt moment).
> and perhaps...

Human creativity is an interesting topic, but I wouldn't call it
pseudo-magical. Poorly understood, more like it. Comparing creativity in
evolutionary processes and the human creative process is likely to
improve that understanding.

> 4) Belief in 'magical emergence'... qualitative novelty of a kind
> utterly unrelated to the componentry.
> 

The latter clause refers to "emergence" (without the "magical"
qualifier), and it is impossible IMHO to have creativity without emergence.

> These are all slippery slopes leading from the usage of the word
> 'creativity' which could unexpectedly undermine the specificity of a
> technical discourse aimed at a wider (multi-disciplinary) audience.
> 

Aside from the easily disposed of reproduction case, you haven't come
up with an example of creativity meaning anything other than what
we've agreed it to mean.

> 
> The system (a) automatically prescribes certain trajectories and 

Yes.

> (b)
> assumes that the theorem space [and] natural world are the same space and
> equivalently accessed. 

No - but the system will adjust its model according to feedback. That
is the very nature of any learning algorithm, of which EP is just one example.

> The assumption is that hooking up a chemistry set
> replicates the 'wild-type' theorem prover that is the natural world. If
> you could do that then you already know everything there is to know (about
> the natural world) and there'd be no need to do it in the first place. This
> is the all-time ultimate question-begger...

Not at all. In Evolutionary Programming, very little is known about the
ultimate solution the algorithm comes up with.
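A minimal sketch of the feedback loop described here (my own toy, not Koza's system; the fitness function is an invented stand-in for the external environment): candidates are mutated and selected against feedback, and the result improves without the programmer supplying the answer in advance.

```python
import random

random.seed(0)

# Toy evolutionary loop (illustrative only): evolve x to minimise a
# "feedback" function the algorithm can query but whose minimiser it
# was never told.  The target value 3.7 is an assumption for the demo.
def feedback(x):
    return (x - 3.7) ** 2   # hypothetical environment; minimum at 3.7

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(200):
    # Selection: keep the better half.
    population.sort(key=feedback)
    survivors = population[:10]
    # Variation + heritability: children are mutated copies of survivors.
    children = [x + random.gauss(0, 0.1) for x in survivors]
    population = survivors + children

best = min(population, key=feedback)
print(round(best, 2))   # converges near 3.7
```

The point of the sketch is Russell's: very little about the final solution was known to the programmer; only the feedback channel was.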

> 
> > Theoretical scientists do not have laboratories to interface to,
> > though, only online repositories of datasets and papers. A theoretical
> > algorithmic scientist is a more likely proposition.
> 
> A belief that an algorithmic scientist is doing valid science on the
> natural world (independent of any human) is problematic in that it assumes
> that human cortical qualia play no part in the scientific process, in the
> face of easily available evidence to the contrary, and then doubly assumes
> that the algorithmic scientist (with a novelty-exploration / theorem-proving
> strategy programmed by a human) somehow naturally replicates the
> neglected functionality (the role of cortical qualia).
> 

Your two "assumptions" are contradictory. I would say no to the first,
and yes to the second.

...

> > It is therefore not at all clear to me that some n-th generational
> > improvement on an evolutionary algorithm won't be considered conscious
> > at some time in the future. It is not at all clear which aspects of human
> > cortical systems are required for consciousness.
> 
> You are not alone. This is an epidemic.
> 
> My scientific claim is that the electromagnetic field structure is
> literally the third person view of qualia. 

Eh? Electromagnetic field of what? The brain? If so, do you think that
chemical potentiation plays no role at all in qualia?

> This is not new. What is new is
> understanding the kind of universe we inhabit in which that is necessarily
> the case. It's right there, in the cells. Just ask the right question of
> them. There's nothing else there but space (mostly), charge and mass - all
> things delineated and described by consciousness as how they appear to it
> - and all such descriptions are logically necessarily impotent in
> prescribing why that very consciousness exists at all.
> 
> Wigner got this in 1960-something... time to catch up.
> 

I don't know what your point is here ...

> gotta go
> 
> cheers
> colin hales
> 
> 
> 
> 
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                http://www.hpcoders.com.au
--

Re: How would a computer know if it were conscious?

2007-06-13 Thread Stathis Papaioannou
On 14/06/07, Colin Hales <[EMAIL PROTECTED]> wrote:


> Colin
> This point is poised on the cliff edge of loaded word meanings and their
> use with the words 'sufficient' and 'necessary'. By technology I mean
> novel artifacts resulting from the trajectory of causality including human
> scientists. By that definition 'life', in the sense you imply, is not
> technology. The resulting logical loop can thus be avoided. There is a
> biosphere that arose naturally. It includes complexity of sufficient depth
> to have created observers within it. Those observers can produce
> technology. Douglas Adams (bless him) had the digital watch as a valid
> product of evolution - and I agree with him - it's just that humans are
> necessarily involved in its causal ancestry.


Your argument that only consciousness can give rise to technology loses
validity if you include "must be produced by a conscious being" as part of
the definition of technology.


> COLIN
> > That assumes that complexity itself (organisation of information) is the
> > origin of consciousness in some unspecified, unjustified way. This
> > position is completely unable to make any empirical predictions about the
> > nature of human consciousness (eg why your cortex generates qualia and
> > your spinal cord doesn't - a physiologically proven fact).
>
>
> STATHIS
> > Well, why does your eye generate visual qualia and not your big toe?
> It's because the big toe lacks the necessary machinery.
> >
>
> Colin
> I am afraid you have your physiology mixed up. The eye does NOT generate
> visual qualia. Your visual cortex generates them based on measurements in
> the eye. The qualia are manufactured and simultaneously projected so as to
> appear to come from the eye (actually somewhere medial to them). It's how
> you have 90-degree-plus peripheral vision. The same visual qualia can be
> generated without an eye (hallucination/dream). Some blind (no functioning
> retina) people have a visual field for numbers. Other cross-modal mix-ups
> can occur in synesthesia (you can hear colours, taste words). You can have
> a "phantom big toe" without having any big toe at all... just because the
> cortex is still there making the qualia. If you swapped the sensory nerves
> in two fingers, the motor cortex would drive finger A and it would feel
> like finger B moved while you would see finger A move. The sensation is in
> your head, not the periphery. It's merely projected at the periphery.


Of course all that is true, but it doesn't explain why neurons in the cortex
are the ones giving rise to qualia rather than other neurons or indeed
peripheral sense organs.



-- 
Stathis Papaioannou

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi,

>> COLIN
>> I don't think we need a new word... I'll stick to the far less ambiguous
>> term 'organisational complexity', I think. The word creativity is so
>> loaded that its use in general discourse is bound to be prone to
>> misconstrual, especially in any discussion which purports to be assessing
>> the relationship between 'organisational complexity' and consciousness.

RUSSELL
> What sort of misconstruals do you mean? I'm interested...
> 'organisational complexity' does not capture the concept I'm after.

COLIN
1) Those associated with religious 'creation' myths - the creativity
ascribed to an omniscient/omnipotent entity.
2) The creativity ascribed to the act of procreation.
3) The pseudo-magical aspects of human creativity (the scientific ah-ha
moment and the artistic gestalt moment).
and perhaps...
4) Belief in 'magical emergence'... qualitative novelty of a kind
utterly unrelated to the componentry.

These are all slippery slopes leading from the usage of the word
'creativity' which could unexpectedly undermine the specificity of a
technical discourse aimed at a wider (multi-disciplinary) audience.

Whatever word you dream up... let me know!

>> COLIN
>> The question-begging loop at this epistemic boundary is a minefield.
>> [[engage tiptoe mode]]
>> I would say:
>> (1) The evolutionary algorithms are not 'doing science' on the natural
>> world. They are doing science on abstract entities whose relationship with
>> the natural world is only in the mind (consciousness) of their grounder -
>> the human programmer. The science done by the artefact can be the
>> perfectly good science of abstractions, but simply wrong or irrelevant
>> insofar as it bears any ability to prescribe or verify claims/propositions
>> about the natural world (about which it has no awareness whatever). The
>> usefulness of the outcome (patents) took human involvement. The inventor
>> (software) doesn't even know it's in a universe, let alone that it
>> participated in an invention process.

RUSSELL
> This objection is easily countered in theory. Hook up your
> evolutionary algorithm to a chemistry workbench, and let it go with real
> chemicals. Practically, it's a bit more difficult of course, most likely
> leading to the lab being destroyed in some explosion.

COLIN
Lots o' fun! But it might actually create its own undoing in the words
'evolutionary algorithm'. The self-modification strategy was preprogrammed
by a human, along with the initial values. Then there is the matter of
interpreting measurements of the output of the chemistry set...

The system (a) automatically prescribes certain trajectories and (b)
assumes that the theorem space and natural world are the same space and
equivalently accessed. The assumption is that hooking up a chemistry set
replicates the 'wild-type' theorem prover that is the natural world. If
you could do that then you already know everything there is to know (about
the natural world) and there'd be no need to do it in the first place. This
is the all-time ultimate question-begger...

> Theoretical scientists do not have laboratories to interface to,
> though, only online repositories of datasets and papers. A theoretical
> algorithmic scientist is a more likely proposition.

A belief that an algorithmic scientist is doing valid science on the
natural world (independent of any human) is problematic in that it assumes
that human cortical qualia play no part in the scientific process, in the
face of easily available evidence to the contrary, and then doubly assumes
that the algorithmic scientist (with a novelty-exploration / theorem-proving
strategy programmed by a human) somehow naturally replicates the
neglected functionality (the role of cortical qualia).

>> (2) "Is this evolutionary algorithm conscious then?".
>> In the sense that we are conscious of the natural world around us? Most
>> definitely no. Nowhere in the computer are any processes that include all
>> aspects of the physics of human cortical matter.
> ...
>> Based on this, of the 2 following positions, which is less vulnerable to
>> critical attack?
>> A) Information processing (function) begets consciousness, regardless of
>> the behaviour of the matter doing the information processing (form).
>> Computers process information. Therefore I believe the computer is
>> conscious.
>> B) Human cortical qualia are a necessary condition for the scientific
>> behaviour and unless the complete suite of the physics involved in that
>> process is included in the computer, the computer is not conscious.
>> Which form of question-begging gets the most solid points as science? (B)
>> of course. (B) is science and has an empirical future. Belief (A) is
>> religion, not science.
>> Bit of a no-brainer, eh?


> I think you're showing clear signs of carbon-lifeform-ism here. Whilst I
> can say fairly clearly that I believe my fellow humans are
> conscious, and that I believe John Koza's evolutionary programs
> aren't, I do not have a clear-cut operational test of
> consciousness. It's like the test for pornography - we know it when we
> see it.

Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi Stathis,

Colin
>The bogus logic I detect in posts around this area...
>'Humans are complex and are conscious'
>'Humans were made by a complex biosphere'
>therefore
>'The biosphere is conscious'
>
>
Stathis
That conclusion is spurious, but it is the case that non-conscious
evolutionary processes can give rise to very elaborate "technology",
namely life, which goes against your theory that only consciousness can
produce new technology.

Colin
This point is poised on the cliff edge of loaded word meanings and their
use with the words 'sufficient' and 'necessary'. By technology I mean
novel artifacts resulting from the trajectory of causality including human
scientists. By that definition 'life', in the sense you imply, is not
technology. The resulting logical loop can thus be avoided. There is a
biosphere that arose naturally. It includes complexity of sufficient depth
to have created observers within it. Those observers can produce
technology. Douglas Adams (bless him) had the digital watch as a valid
product of evolution - and I agree with him - it's just that humans are
necessarily involved in its causal ancestry.

COLIN
>That assumes that complexity itself (organisation of information) is the
>origin of consciousness in some unspecified, unjustified way. This
>position is completely unable to make any empirical predictions about the
>nature of human consciousness (eg why your cortex generates qualia and
>your spinal cord doesn't - a physiologically proven fact).


STATHIS
> Well, why does your eye generate visual qualia and not your big toe?
> It's because the big toe lacks the necessary machinery.
>

Colin
I am afraid you have your physiology mixed up. The eye does NOT generate
visual qualia. Your visual cortex generates them based on measurements in
the eye. The qualia are manufactured and simultaneously projected so as to
appear to come from the eye (actually somewhere medial to them). It's how
you have 90-degree-plus peripheral vision. The same visual qualia can be
generated without an eye (hallucination/dream). Some blind (no functioning
retina) people have a visual field for numbers. Other cross-modal mix-ups
can occur in synesthesia (you can hear colours, taste words). You can have
a "phantom big toe" without having any big toe at all... just because the
cortex is still there making the qualia. If you swapped the sensory nerves
in two fingers, the motor cortex would drive finger A and it would feel
like finger B moved while you would see finger A move. The sensation is in
your head, not the periphery. It's merely projected at the periphery.

cheers
colin






Re: How would a computer know if it were conscious?

2007-06-13 Thread Russell Standish

On Thu, Jun 14, 2007 at 10:23:38AM +1000, Colin Hales wrote:
> 
> COLIN
> It may be technically OK then, but I would say the use of the word
> 'creativity' is unwise if you wish to unambiguously discuss evolution to a
> wide audience. As I said...
> 
> COLIN
> I don't think we need a new word... I'll stick to the far less ambiguous
> term 'organisational complexity', I think. The word creativity is so
> loaded that its use in general discourse is bound to be prone to
> misconstrual, especially in any discussion which purports to be assessing
> the relationship between 'organisational complexity' and consciousness.

What sort of misconstruals do you mean? I'm interested...

'organisational complexity' does not capture the concept I'm after.

> COLIN
> The question-begging loop at this epistemic boundary is a minefield.
> [[engage tiptoe mode]]
> 
> I would say:
> (1) The evolutionary algorithms are not 'doing science' on the natural
> world. They are doing science on abstract entities whose relationship with
> the natural world is only in the mind (consciousness) of their grounder -
> the human programmer. The science done by the artefact can be the
> perfectly good science of abstractions, but simply wrong or irrelevant
> insofar as it bears any ability to prescribe or verify claims/propositions
> about the natural world (about which it has no awareness whatever). The
> usefulness of the outcome (patents) took human involvement. The inventor
> (software) doesn't even know it's in a universe, let alone that it
> participated in an invention process.

This objection is easily countered in theory. Hook up your
evolutionary algorithm to a chemistry workbench, and let it go with
real chemicals. Practically, it's a bit more difficult of course, most
likely leading to the lab being destroyed in some explosion.

Theoretical scientists do not have laboratories to interface to,
though, only online repositories of datasets and papers. A theoretical
algorithmic scientist is a more likely proposition.

> 
> (2) "Is this evolutionary algorithm conscious then?".
> In the sense that we are conscious of the natural world around us? Most
> definitely no. Nowhere in the computer are any processes that include all
> aspects of the physics of human cortical matter. 

...

> Based on this, of the 2 following positions, which is less vulnerable to
> critical attack?
> 
> A) Information processing (function) begets consciousness, regardless of
> the behaviour of the matter doing the information processing (form).
> Computers process information. Therefore I believe the computer is
> conscious.
> 
> B) Human cortical qualia are a necessary condition for the scientific
> behaviour and unless the complete suite of the physics involved in that
> process is included in the computer, the computer is not conscious.
> 
> Which form of question-begging gets the most solid points as science?  (B)
> of course. (B) is science and has an empirical future. Belief (A) is
> religion, not science.
> 
> Bit of a no-brainer, eh?
> 

I think you're showing clear signs of carbon-lifeform-ism here. Whilst
I can say fairly clearly that I believe my fellow humans are
conscious, and that I believe John Koza's evolutionary programs
aren't, I do not have a clear-cut operational test of
consciousness. It's like the test for pornography - we know it when we
see it. It is therefore not at all clear to me that some n-th generational
improvement on an evolutionary algorithm won't be considered conscious
at some time in the future. It is not at all clear which aspects of
human cortical systems are required for consciousness.

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi again... very busy... responses erratically available... sorry...



COLIN
>> RE: 'creativity'
>> ... Say at stage t the biosphere was at complexity level X and then at
stage t = t+(something), the biosphere complexity was at KX, where X is
some key performance indicator of complexity (eg entropy) and K > 1 

RUSSELL
> That's exactly what I mean by a creative process. And I also have a
> fairly precise definition of complexity, but I certainly accept
> proxies as these are usually easier to measure. For example
> Bedau-Packard statistics...
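One hedged way to make the KX example concrete (my illustration, not the posters'): take Shannon entropy of a species-abundance distribution as the complexity KPI X; the two "stages" below are invented for the demo.

```python
from collections import Counter
from math import log2

# Illustrative only: Shannon entropy of a species-abundance distribution
# as a stand-in for the complexity KPI X in Colin's t vs t+(something)
# example.  The two stage populations are invented.
def shannon_entropy(population):
    counts = Counter(population)
    total = len(population)
    return -sum((n / total) * log2(n / total) for n in counts.values())

stage_t = ["a"] * 6 + ["b"] * 2                       # early: 2 species
stage_t2 = ["a", "a", "b", "b", "c", "c", "d", "d"]   # later: 4 species

x = shannon_entropy(stage_t)     # X, complexity at stage t
kx = shannon_entropy(stage_t2)   # KX, complexity at the later stage
print(kx > x)                    # K > 1: complexity grew -> True
```

Bedau-Packard statistics, which Russell mentions, play the same role: a measurable proxy for the growth of complexity over evolutionary time.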

COLIN
>> This could be called creative if you like. Like Prigogine did. I'd
>> caution against the tendency to use the word because it has so many loaded
>> meanings that are suggestive of much more than the previous para.

RUSSELL
> Most scientific terms have common usage in sharp contrast to the
> scientific meanings. Energy is a classic example, eg "I've run out of
> energy" when referring to motivation or tiredness. If the statement were
> literally true, the speaker would be dead. This doesn't prevent sensible
> scientific discussion using the term in a well defined way. I know of no
> other technical meanings of the word creative, so I don't see a problem
> here.

COLIN
It may be technically OK then, but I would say the use of the word
'creativity' is unwise if you wish to unambiguously discuss evolution to a
wide audience. As I said...

>> Scientifically the word could be left entirely out of any descriptions
>> of the biosphere.

RUSSELL
> Only by generating a new word that means the same thing (ie the well
defined concept we talked about before).

COLIN
I don't think we need a new word... I'll stick to the far less ambiguous
term 'organisational complexity', I think. The word creativity is so
loaded that its use in general discourse is bound to be prone to
misconstrual, especially in any discussion which purports to be assessing
the relationship between 'organisational complexity' and consciousness.

COLIN
>> The bogus logic I detect in posts around this area...
>> 'Humans are complex and are conscious'
>> 'Humans were made by a complex biosphere'
>> therefore 'The biosphere is conscious'

RUSSELL
> Perhaps so, but not from me.
> To return to your original claim:

COLIN
> "Re: How would a computer know if it were conscious?
> "Easy.
> "The computer would be able to go head to head with a human in a
competition.
> The competition?
> Do science on exquisite novelty that neither party had encountered.
(More interesting: Make their life depend on getting it right. The
survivors are conscious)."

RUSSELL
> "Doing science on exquisite novelty" is simply an example of a
> creative process. Evolution produces exquisite novelty. Is it science -
> well maybe not, but both science and evolution are search
> processes.

COLIN
In a very real way, the procedural mandates we scientists enforce on
ourselves are, to me anyway, a literal metaphor for the evolutionary
process. The trial and error of evolution = (relatively!) random
creativity followed by proscription via death (defeat in critical argument,
eg by evidence) => that which remains does so by not being killed off. In
science our laws of nature sit on the same knife edge, their validity
contingent on the appearance of one tiny shred of contrary evidence. (Yes,
I know they are not killed! - they are usually upgraded.)

RUSSELL
> I think that taking the Popperian view of science would
> imply that both science and biological evolution are exemplars of a
> generic evolutionary process. There is variation (of hypotheses or
> species), there is selection (falsification in the former or
> extinction in the latter) and there is heritability (scientific
> journal articles / genetic code).
> So it seems the only real difference between doing science and
> evolving species is that one is performed by conscious entities, and the
other (pace IDers) is not.
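The variation/selection pair in this Popperian picture can be sketched as a single toy falsification pass (entirely illustrative; the hidden law y = 2x, the hypothesis grid, and the tolerance are all invented):

```python
# Toy Popperian selection sweep (illustrative): candidate "laws" y = h*x
# are tested against observations of a hidden law y = 2x; one sufficiently
# contrary observation falsifies a candidate outright.

data = [(x, 2.0 * x) for x in (1, 2, 3, 4, 5)]   # "experimental" records

def falsified(h, tolerance=0.3):
    # One tiny shred of contrary evidence is enough to kill the hypothesis.
    return any(abs(y - h * x) > tolerance for x, y in data)

hypotheses = [i / 10 for i in range(41)]                  # variation: slopes 0.0..4.0
survivors = [h for h in hypotheses if not falsified(h)]   # selection
print(survivors)   # only the true slope survives: [2.0]
```

Iterating the sweep with mutated copies of the survivors would supply the heritability-plus-variation half of the loop, turning one falsification pass into the generic evolutionary process Russell describes.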

COLIN
I think different aspects of what I just described (rather more
colourfully :-) )

RUSSELL
> But this rather begs your answer in a
> trivial way. What if I were to produce an evolutionary algorithm that
> performs science in the conventional everyday use of the term - let's say
> by forming hypotheses and mining published datasets for testing
> them. It is not too difficult to imagine this - after all John Koza has
> produced several new patents in the area of electrical circuits from an
> Evolutionary Programming algorithm.

COLIN
The question-begging loop at this epistemic boundary is a minefield.
[[engage tiptoe mode]]

I would say:
(1) The evolutionary algorithms are not 'doing science' on the natural
world. They are doing science on abstract entities whose relationship with
the natural world is only in the mind (consciousness) of their grounder -
the human programmer. The science done by the artefact can be the
perfectly good science of abstractions, but simply wrong or irrelevant
insofar as it bears any ability to prescribe or verify claims/propositions
about the natural world (about which it has no awareness whatever).