Hi Colin,


I have sent it to you.

Thanks.



The key to the paper is that it should be regarded as an engineering document. I have embarked on building a real AGI out of real physical components, as an act of science.

OK. Although, as you know (or should know), physical reality is an emerging information pattern summing up infinities of computations. You can even exploit this (as in quantum computing). It might not be necessary, though.



Inspired and guided by neuroscience, I have identified two basic choices for a route to an AGI that works:

(i) use standard symbolic computing
  (of a model of brain function derived by a human observer = me)
(ii) emulate what a brain actually does, in inorganic form.

Based on the serious doubts identified in the COMP paper, given the choice I should prefer (ii), because (i) is loaded with unjustified, unproven presuppositions and has >60 years of failure behind it.

I can relate to this, but there is progress (in the acceptance of our ignorance). It also fails because all the energy is spent controlling such machines, whereas intelligence would consist in leaving them alone and free. It is a bit like "modern education": teachers are encouraged to let students think for themselves, and then to give them bad marks when the students do exactly that! Now, to copy a brain, you need to choose a level, and I have no clue what that level really is. I still hesitate between the Planck scale at the bottom and a very high neuro-level. It may depend on what we identify ourselves with.




All other issues are secondary.

I start building this year.

Good luck in your enterprise. Keep us informed.

Best,

Bruno




cheers

Colin


Bruno Marchal wrote:
Hi Colin,

On 07 Jun 2011, at 09:42, Colin Hales wrote:

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature', International Journal of Machine Consciousness vol. 3, no. 1, 2011. 1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew, what an epic!


Congratulations, Colin.

Like others, I have not succeeded in getting it, either at home or at the university.

From the abstract, I am afraid you might not have taken into account our (many) conversations. Most of what you say about the impossibility of building an artificial scientist is provably correct in the (weak) comp theory. It is unfortunate that you derive this from comp + materialism, which is inconsistent. Actually, comp prevents "artificial intelligence". This does not prevent the existence, and even the appearance, of intelligent machines. But this might happen *despite* humans, rather than thanks to humans. This is related to the fact that we cannot know which machine we are ourselves. Yet we can make copies at some level (in which case we don't know what we are really creating or recreating); also, descendants of bugs in regular programs can evolve, or we can get them serendipitously.

It is also related to the fact that we don't *want* intelligent machines. An intelligent machine is really a computer that will choose its user, if ... it wants one. We prefer them to be slaves. It will take time before we recognize them (apparently). Of course the 'naturalist comp' theory is inconsistent. I am not sure you take that into account either.

Artificial intelligence will always be more like fishing, or exploring spaces, and we might *discover* strange creatures. Arithmetical truth is a universal zoo. Well, no, it is really a jungle. We don't know what is in there. We can only scratch a tiny bit of it.

Now, let us distinguish two things, which are very different:

1) intelligence-consciousness-free-will-emotion

and

2) cleverness-competence-ingenuity-gifted-learning-ability

"1)" is necessary for the developpment of "2)", but "2)" has a negative feedback on "1)".

I have already given on this list what I call the smallest theory of intelligence.

By definition a machine is intelligent if it is not stupid. And a machine can be stupid for two reasons:
she believes that she is intelligent, or
she believes that she is stupid.

Of course, this is arithmetized immediately in a weakening of G: the theory C, having as axioms the normal modal axioms and rules + Dp -> ~BDp. So Dt (arithmetical consistency) can play the role of intelligence, and Bf (inconsistency) plays the role of stupidity. Both G* and G prove BDt -> Bf, and G* proves BBf -> Bf (but G does not!).
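
Spelled out in LaTeX notation (just a restatement of the above, writing \Box for the belief/provability box B and \Diamond for its dual D; the glosses in parentheses are only my informal reading):

C := \text{normal modal axioms and rules} + (\Diamond p \to \lnot\Box\Diamond p)
\text{intelligence} := \Diamond\top \quad (\text{Dt, consistency})
\text{stupidity} := \Box\bot \quad (\text{Bf, inconsistency})
G,\ G^{*} \vdash \Box\Diamond\top \to \Box\bot \qquad (\text{believing oneself intelligent entails stupidity})
G^{*} \vdash \Box\Box\bot \to \Box\bot,\quad G \nvdash \Box\Box\bot \to \Box\bot \qquad (\text{believing oneself stupid entails stupidity, though not provably so})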

This illustrates that "1)" above might come from Löbianity, and "2)" above (the scientist) is governed by theoretical artificial intelligence (Case and Smith; Osherson, Stob, Weinstein). Here the results are not just NON-constructive, but are *necessarily* so. Cleverness is just something that we cannot program. But we can prove, non-constructively, the existence of powerful learning machines. We just cannot recognize them, or build them. It is like the algorithmically random strings: we cannot generate any particular one by a short algorithm, but we can generate all of them by a very short algorithm.
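
To make the analogy concrete, here is a tiny sketch (my own illustration only, nothing from Colin's paper): a very short Python program enumerates *all* finite binary strings, and so eventually emits every algorithmically random string, even though, by the definition of Kolmogorov randomness, no comparably short program can output any particular long random string on its own.

from itertools import count, product

def all_binary_strings():
    """Enumerate every finite binary string in length-lexicographic order."""
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# Usage: a very short program that eventually reaches any string you care to
# name, including the incompressible (algorithmically random) ones.
gen = all_binary_strings()
for _ in range(8):
    print(repr(next(gen)))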

So, concerning intelligence/consciousness (as opposed to cleverness), I think we have passed the "singularity". Nothing is more intelligent/conscious than a virgin universal machine. By programming it, we can only make its "soul" fall, and, in the worst case, we might get something as stupid as a human, capable of feeling itself superior, for example.

Bruno





http://iridia.ulb.ac.be/~marchal/










