On 12 Oct 2012, at 10:27, Brett Hall wrote:

On 12/10/2012, at 16:27, "Bruno Marchal" <marc...@ulb.ac.be> wrote:

> On 10 Oct 2012, at 10:44, a b wrote:
>> On Wed, Oct 10, 2012 at 2:04 AM, Brett Hall <brhal...@hotmail.com>
>> wrote:
>>> On 09/10/2012, at 16:38, "hibbsa" <asb...@gmail.com> wrote:
>>>> http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
>>> Ben Goertzel's article that hibbsa linked to above says in
>>> paragraph 7 that, "I salute David Deutsch’s boldness, in writing and
>>> thinking about a field where he obviously doesn’t have much
>>> practical grounding. Sometimes the views of outsiders with very
>>> different backgrounds can yield surprising insights. But I don’t
>>> think this is one of those times. In fact, I think Deutsch’s
>>> perspective on AGI is badly mistaken, and if widely adopted, would
>>> slow down progress toward AGI dramatically. The real reasons we
>>> don’t have AGI yet, I believe, have nothing to do with Popperian
>>> philosophy, and everything to do with:..." (Then he listed some
>>> things).
>>> That paragraph quoted seems an appeal to authority in an
>>> underhanded way. In a sense it says (in a condescending manner)
>>> that DD has little practical grounding in this subject and can
>>> probably be dismissed on that basis...but let's look at what he
>>> says anyways. As if "practical grounding" by the writer would
>>> somehow have made the arguments themselves valid or more valid (as
>>> though that makes sense). The irony is, Goertzel in almost the next
>>> breath writes that AGI has "nothing to do with Popperian
>>> philosophy..." Presumably, by his own criterion, he can only make
>>> that comment with any kind of validity if he has "practical
>>> grounding" in Popperian epistemology? It seems he has indeed
>>> written quite a bit on Popper...but probably as much as DD has
>>> written on stuff related to AI. So how much is enough before you
>>> should be taken seriously? I'm also not sure that Goertzel is
>>> expert in Popperian *epistemology*.
>>> Later he goes on to write, "I have conjectured before that once
>>> some proto-AGI reaches a sufficient level of sophistication in its
>>> behavior, we will see an “AGI Sputnik” dynamic — where various
>>> countries and corporations compete to put more and more money and
>>> attention into AGI, trying to get there first. The question is,
>>> just how good does a proto-AGI have to be to reach the AGI Sputnik
>>> level?" I'm not sure what "proto-AGI" means. It perhaps misses the
>>> central point that intelligence is a qualitative, not quantitative
>>> thing. Sputnik was a less advanced version of the International
>>> Space Station (ISS)...or a GPS satellite.
>>> But there is no "less advanced" version of being a universal
>>> explainer (i.e. a person, i.e. intelligent, i.e. AGI), is there? So
>>> the analogy is quite false. As a side point is the "A" in AGI
>>> racist? Or does the A simply mean "intelligently designed" as
>>> opposed to "evolved by natural selection"? I'm not sure...what will
>>> Artificial mean to AGI when they are here? I suppose we might
>>> augment our senses in all sorts of ways, so the distinction might be
>>> blurred anyway, as it is currently with race. So I think the Sputnik
>>> analogy is wrong.
>>> A better analogy would be...say you wanted to develop a *worldwide
>>> communications system* in the time of (say) the American Indians in
>>> the USA (say around 1200 AD for argument's sake). Somehow you knew
>>> *it must be possible* to create a communications system that
>>> allowed transmission of messages across the world at very very high
>>> speeds but so far your technology was limited to ever bigger fires
>>> and more and more smoke. Then the difference between (say) a smoke
>>> signal and a real communications satellite that can transmit a
>>> message around the world (like Sputnik) would be more appropriate.
>>> Then the smoke signal is the current state of AGI...and Sputnik is
>>> real AGI - what you get once you understand something brand new
>>> about orbits, gravity and radio waves...and probably most
>>> importantly - that the world was a giant *sphere* plagued by high
>>> altitude winds and diverse weather systems and so forth that would
>>> never even have entered your mind. Things you can't even conceive
>>> of if all you are doing in trying to devise a better world-wide
>>> communications system is making ever bigger fires and more and more
>>> smoke...because *surely* that approach will eventually lead to
>>> world-wide communications. After all - it's just a matter of bigger
>>> fires create more smoke which travels greater distance. Right? But
>>> even that analogy is no good really because the smoke signal and
>>> the satellite still have too much in common, perhaps. They are
>>> *both ways of communicating*. And yet, current "AI" and real "I" do
>>> *not* have in common "intelligence" or "thinking".
>>> What on Earth could "proto-AGI" be in Ben Goertzel's world? What
>>> would be the criterion for recognising it as distinct from actual
>>> AGI?
>>> I get the impression Ben might have missed the point that
>>> intelligence is just qualitatively different from non-
>>> intelligence, because the entire article is fixated on it being all
>>> about improvements in hardware. If you're intelligent then you are
>>> a universal explainer. And you are either a universal
>>> explainer...or not. There's no "Sputnik" level of intelligence
>>> which will lead towards GPS and ISS levels of intelligence. Right?
>>> Brett.
>> At the end of the day the guy has just been told he hasn't made any
>> progress, so it seems natural [to me] that he'll hit back with some
>> arsey comments, one of which is the line about Deutsch, which is
>> supposed to mean something like "...for a guy who knows shit about
>> the subject".
>> Personally I think that if you know why someone is getting something
>> like that in, then it's better to just ignore it and look for the
>> main ideas. The idea of his that intrigues me is how to get some
>> emergence taking place out of the underlying components.
>> This has to be part of the problem because an inner sense of self
>> cannot be written directly into code. One criticism of Deutsch's
>> article, for me, was that he seemed to trivialise this aspect by
>> calling it nothing more than 'self-reference'. It isn't self-reference
>> alone, it's inner experience: it's what is going on in my head right
>> now, me thinking I am here and me seeing things in my room.
> It is explained by the difference between "provable(p)", which
> involves self-reference, and "provable(p) & p", which involves
> self-reference and truth. The first gives a theory of self-reference
> in a third-person way, like when you say "I have two arms", and the
> second provides self-reference in a first-person subjective way. It
> defines a non-nameable knower, verifying the axioms of the classical
> theory of knowledge (S4), and which happens already to be
> non-definable by the subject itself. So if interested you can consult
> my sane04. It does confirm many ideas of Deutsch, but it uses a more
> standard vocabulary.

Bruno, I'm confused, but I feel like I'm 'almost there'. If you are some entity that can do "provable(p)", then you recognise your image in a mirror...or...what exactly?

I am very literal on this. I am thinking of a machine with very few beliefs, like the following (together with axioms for equality; no need of logic here):

x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = (x * y) + x
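Those equations are directly executable as recursive definitions. A minimal Python sketch (representing the numerals 0, s(0), s(s(0)), ... by plain integers is my own shortcut):

```python
# Numerals 0, s(0), s(s(0)), ... are represented by plain integers,
# with s(n) = n + 1.

def add(x, y):
    # x + 0 = x ;  x + s(y) = s(x + y)
    return x if y == 0 else add(x, y - 1) + 1

def mul(x, y):
    # x * 0 = 0 ;  x * s(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)
```

For instance, add(2, 3) computes 5 by peeling successors off the second argument, exactly as the second axiom dictates.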

Or like this:

((K, x), y) = x
(((S, x), y), z) = ((x, z), (y, z))
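The K and S rules can likewise be run as a little rewrite system. A Python sketch, with application written as a nested 2-tuple (this representation is my own choice, just for illustration):

```python
def normalize(t):
    """Reduce an SK term to normal form, assuming one exists.
    A term is 'K', 'S', a variable string, or an application (f, x)."""
    if not isinstance(t, tuple):
        return t
    f, x = normalize(t[0]), normalize(t[1])
    # ((K, a), b) -> a
    if isinstance(f, tuple) and f[0] == 'K':
        return normalize(f[1])
    # (((S, a), b), c) -> ((a, c), (b, c))
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        a, b, c = f[0][1], f[1], x
        return normalize(((a, c), (b, c)))
    return (f, x)
```

For example, S K K acts as the identity: normalize(((('S', 'K'), 'K'), 'v')) returns 'v'.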

Those little theories can be shown to be Turing universal.

Then I extend them with a bit of classical logic and, importantly, with some induction axioms. This makes them "Löbian", which means that they are maximally self-referential (they will remain Löbian whatever axioms you add, as long as they remain arithmetically sound). Such machines can be shown (it is equivalent with Löbianity) to know, in a technical sense, that they are universal.

Those machines can (like the first one above) represent themselves. This is always long to show, but that is what Gödel did in his 1931 paper: he translated meta-arithmetic *in* arithmetic. There is no magic: the first theory above handles only the objects 0, s(0), s(s(0)), ..., so you will have to represent variables by such objects (say, by the positive even numbers: s(s(0)), s(s(s(s(0)))), ...), and then you will have to represent formulas and proofs with such objects, and prove that you can represent all the working machinery of such theories *in* the language of the theory. You will thus represent provable-by-this-theory(p) in terms of a purely arithmetical relation. The predicate "provable" represents the machine's ability to prove, in the language of the machine.
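The flavour of that arithmetisation can be imitated in a few lines. A toy Python sketch (the coding scheme below is an arbitrary illustration, not Gödel's own):

```python
def pair(a, b):
    """Cantor pairing: codes a pair of numbers injectively as one number."""
    return (a + b) * (a + b + 1) // 2 + b

# Arbitrary toy codes for the symbols of the first theory above:
SYMBOL = {'0': 0, 's': 1, '+': 2, '*': 3, '=': 4, 'x': 5}

def code(expr):
    """Code an expression, given as a nested tuple like ('s', '0'),
    as a single natural number."""
    if isinstance(expr, str):
        return pair(0, SYMBOL[expr])
    op, *args = expr
    n = pair(1, SYMBOL[op])
    for a in args:
        n = pair(n, code(a))
    return n
```

Distinct expressions receive distinct codes, so formulas and proofs become numbers that the theory can itself talk about.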

Now such a machine can prove its own Gödel second incompleteness theorem: both theories above, when supplemented with the induction axioms, can prove

not-provable("0 = s(0)") implies not-provable("not-provable("0 = s(0)")")

Let us write provable(p) as Bp, "not" as ~, "0 = s(0)" as f (for falsity), and "->" for "implies".

The line above becomes

~Bf -> ~B(~Bf). You can read it as: if I never prove a falsity, then I cannot prove that I will never prove a falsity.

But ~Bf is equivalent with Bf -> f (you can verify by the truth-table method: ~p is the same as p -> f).
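That truth-table verification can be spelled out mechanically; a quick Python sketch:

```python
F = False  # f, the false proposition "0 = s(0)"

def implies(p, q):
    # classical material implication: p -> q
    return (not p) or q

# ~p is the same as p -> f, for both truth values of p:
for p in (True, False):
    assert (not p) == implies(p, F)
```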

So if the machine is consistent (and this can be proved for those simple machines, in some sense), we have that ~Bf is true, and so, together with ~Bf -> ~B(~Bf), we have that ~B(~Bf) is true too. So ~Bf is true for the machine, yet not provable by the machine. This means that Bf -> f is true but not provable by the machine.

This means that Bp -> p will be true, but not, in general, provable by the machine. This made Gödel realize that provability does not behave like a knowledge operator, as we ask a knowledge operator to obey both Bp -> p and B(Bp -> p).

But this makes it possible to define a new operator K, by Bp & p, as we now know that for the machine Bp does not always imply p. This explains why we will have that

Kp <-> Bp will be true, yet such a truth cannot be justified by the machine. Moreover, they will obey different logics: Kp obeys a logic of knowledge (S4), but Bp obeys the weird logic G.

We have a case of two identical sets of "beliefs" obeying quite different laws, and they fit well the difference between the first person (K) and the "scientific view on oneself" (G).

Note this: we cannot define who we (the first person) are by any third-person description. The machine is in the same situation, as the operator K can be shown NOT to be definable in the language of the machine (for reasons similar to Tarski's undefinability theorem for truth). This makes such a K a good candidate for the first person, as it is a knower (unable to doubt at some level: consciousness) which cannot give any third-person description or account of what or who he is.

Modeling "belief" by "provability", the difference is really the difference between

"I believe that the snow is white", and

"I believe that the snow is white, and the snow is (actually) white".
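Treating belief and truth as plain booleans for a single proposition (a crude illustration, not the modal logics themselves), the definition Kp := Bp & p can be tabulated:

```python
def K(Bp, p):
    # Kp := Bp & p ("I believe p, and p is actually the case")
    return Bp and p

# Kp -> p holds in all four cases, whereas Bp -> p fails when the
# machine believes a falsehood (Bp true, p false):
for Bp in (True, False):
    for p in (True, False):
        assert (not K(Bp, p)) or p    # Kp -> p, always
assert not ((not True) or False)      # Bp -> p fails in that one case
```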

With provable(p) and (p), then, you recognise that the image in the mirror is you and that you are a self.

I am OK, you are right.

Or something like that. I'm sure you have something more formal. I'll try again:

Is provable(p) something like "It can be shown that I have two arms" because (p) is just
"I have two arms"?


Putting that into natural language, though, seems to suggest that provable(p) must, as a prior necessity, have "p" as true.

Not necessarily. The Löbian machine which adds the axiom Bf remains consistent, and so we have B(Bf) and ~Bf. It is a case of Bp and ~p. That is why provability is more akin to belief than to knowledge (in the standard terminology).

But that's not what you're saying. You seem to be saying it's *easier* for some entity which can do computations to get to provable(p) than (p).

The machine can be lucky, or well made relative to her environment, but Bp cannot necessitate p. The machine might be dreaming, for example.

I do think I understand the difference between a third-person self reference and first-person self reference. It's almost like the difference between pointing at a mirror and asserting:

"That is me" (where the "me" is not a "self" but rather just a bunch of atoms you are in control of).

And pointing at a mirror and asserting "That's my *reflection*. And this is me." (Where the "me" corresponds to some feeling that establishes one's own existence to one's own satisfaction.)

Yes, indeed. The amazing thing is that such a difference already makes sense for very little machines.

The mirror test, or rather a weakening of it, illustrates Löbianity/induction. It is enough to "induce" that there is some reality beyond the mirror, and to show astonishment when you discover there is no such reality. Amazingly, some spiders are already like that:




You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.