Long-time lurker here, very intrigued by all the discussions when
I have time for them!
Earlier, in response to Colin Hales, you wrote: "Actually, comp prevents 'artificial intelligence'."
Can you elaborate on this? If we assume comp (I say yes to the
doctor), then I can be simulated... doesn't that imply the possibility
of an artificial intelligence?
On Thu, Jun 9, 2011 at 4:53 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:
> Hi Colin,
> On 07 Jun 2011, at 09:42, Colin Hales wrote:
>> Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
>> International Journal of Machine Consciousness vol. 3, no. 1, 2011. 1-35.
>> The paper has finally been published. Phew what an epic!
> Congratulations, Colin.
> Like others, I don't succeed in getting it, neither at home nor at the
> From the abstract I am afraid you might not have taken into account our
> (many) conversations. Most of what you say about the impossibility of
> building an artificial scientist is provably correct in the (weak) comp
> theory. It is unfortunate that you derive this from comp+materialism, which
> is inconsistent. Actually, comp prevents "artificial intelligence". This
> does not prevent the existence, and even the appearance, of intelligent
> machines. But this might happen *despite* humans, instead of 'thanks to
> humans'. This is related to the fact that we cannot know which machine we
> are ourselves. Yet, we can make copies at some level (in which case we
> don't know what we are really creating or recreating), and then, also,
> descendants of bugs in regular programs can evolve. Or we can get them
> serendipitously. It is also related to the fact that we don't *want*
> intelligent machines: an intelligent machine is really a computer that
> will choose its user, if ... it wants one. We prefer them to be slaves.
> It will take time before we recognize them.
> Of course, the 'naturalist comp' theory is inconsistent. I am not sure
> you take that into account either.
> Artificial intelligence will always be more like fishing or exploring
> spaces, and we might *discover* strange creatures. Arithmetical truth is a
> universal zoo. Well, no, it is really a jungle. We don't know what is in
> there. We can only scratch a tiny bit of it.
> Now, let us distinguish two things, which are very different:
> 1) intelligence-consciousness-free-will-emotion
> 2) cleverness-competence-ingenuity-gifted-learning-ability
> "1)" is necessary for the developpment of "2)", but "2)" has a negative
> feedback on "1)".
> I have already given on this list what I call the smallest theory of
> intelligence.
> By definition, a machine is intelligent if it is not stupid. And a machine
> can be stupid for two reasons:
> she believes that she is intelligent, or
> she believes that she is stupid.
> Of course, this is arithmetized immediately in a weakening of G: the
> theory C, having as axioms the normal modal axioms and rules + Dp -> ~BDp.
> So Dt (arithmetical consistency) can play the role of intelligence, and Bf
> (inconsistency) plays the role of stupidity. Both G* and G prove
> BDt -> Bf, and G* proves BBf -> Bf (but G does not).
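For readability, the theory C and the two theorems above can be set out in standard modal notation (my transcription of what Bruno states, writing \Box for B and \Diamond for D):

```latex
% Notation: \Box = B (provability), \Diamond p = Dp = \lnot\Box\lnot p,
% \top = t (truth), \bot = f (falsity).
%
% C = the normal modal axioms and rules, plus the weakening axiom:
\[
  \Diamond p \rightarrow \lnot \Box \Diamond p
\]
% Intelligence is \Diamond\top (arithmetical consistency);
% stupidity is \Box\bot (inconsistency). Then:
\[
  G,\ G^{*} \vdash \Box \Diamond \top \rightarrow \Box \bot
  \qquad \text{(believing oneself intelligent entails stupidity)}
\]
\[
  G^{*} \vdash \Box \Box \bot \rightarrow \Box \bot
  \quad (\text{but } G \nvdash)
  \qquad \text{(believing oneself stupid entails stupidity)}
\]
```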
> This illustrates that "1)" above might come from Löbianity, and "2)" above
> (the scientist) is governed by theoretical artificial intelligence (Case
> and Smith; Osherson, Stob, Weinstein). Here the results are not just
> NON-constructive, but are *necessarily* so. Cleverness is just something
> that we cannot program. But we can prove, non-constructively, the existence
> of powerful learning machines. We just cannot recognize them, or build them.
> It is like with the algorithmically random strings: we cannot generate any
> one of them by a short algorithm, but we can generate all of them by a very
> short one.
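His point about random strings can be made concrete with a small sketch (a toy Python example of my own, not from Bruno's post): a very short program can enumerate *every* finite binary string, and so eventually emits every algorithmically random one, even though no comparably short program can output any single given random string on its own.

```python
from itertools import count, product

def all_binary_strings():
    """Enumerate every finite binary string in length-lexicographic order.

    The generator is tiny, yet its output includes every algorithmically
    random string -- it just cannot single one of them out.
    """
    yield ""                                   # the empty string first
    for n in count(1):                         # then lengths 1, 2, 3, ...
        for bits in product("01", repeat=n):   # all 2**n strings of length n
            yield "".join(bits)

gen = all_binary_strings()
first_ten = [next(gen) for _ in range(10)]
# first_ten == ['', '0', '1', '00', '01', '10', '11', '000', '001', '010']
```

The contrast is exactly the one Bruno draws: generating *all* strings is cheap, but isolating a particular incompressible string requires (by definition) a program about as long as the string itself.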
> So, concerning intelligence/consciousness (as opposed to cleverness), I
> think we have passed the "singularity". Nothing is more
> intelligent/conscious than a virgin universal machine. By programming it, we
> can only make his "soul" fall, and, in the worst case, we might get
> something as stupid as a human, capable of feeling itself superior.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.
To unsubscribe from this group, send email to
For more options, visit this group at