On 16 Jun 2011, at 18:36, meekerdb wrote:
On 6/16/2011 7:38 AM, Bruno Marchal wrote:
Concerning the learning competence of a machine, I measure it by
the classes of computable functions that the machine is able to
identify from finite samples of input-outputs. This leads to the
"computational learning theory" or "inductive inference" theory,
which shows that the possible competences form a complex lattice
with a lot of incomparable competences, and with a lot of
necessarily non constructive gaps existing among them.
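The competence notion above is Gold-style identification in the limit. A toy sketch, under assumptions not in the post: a small, explicitly enumerated class of total functions, and a learner that conjectures the index of the first hypothesis consistent with the input-output samples seen so far. The hypothesis class and function names here are illustrative, not from the original discussion.

```python
# A minimal sketch of identification in the limit by enumeration.
# Assumption (not from the original post): a tiny enumerated class
# of total functions on the naturals stands in for a class of
# computable functions.
HYPOTHESES = [
    lambda n: n,          # h_0: identity
    lambda n: n * n,      # h_1: square
    lambda n: 2 * n + 1,  # h_2: successor of double
]

def learner(samples):
    """Return the index of the first hypothesis agreeing with every
    (input, output) pair seen so far, or None if none fits."""
    for i, h in enumerate(HYPOTHESES):
        if all(h(x) == y for x, y in samples):
            return i
    return None

def identify_in_the_limit(target_index, num_samples):
    """Feed the learner a growing sample of the target function and
    record its successive conjectures."""
    target = HYPOTHESES[target_index]
    samples, conjectures = [], []
    for x in range(num_samples):
        samples.append((x, target(x)))
        conjectures.append(learner(samples))
    return conjectures

# The learner changes its mind finitely often, then stabilizes on a
# correct index -- but at no finite stage can it prove it has converged.
print(identify_in_the_limit(1, 5))  # → [0, 0, 1, 1, 1]
```

On samples of the square function the learner first conjectures the identity (consistent with (0,0) and (1,1)), then switches at x = 2 and stays put: convergence in the limit, with no constructive signal that convergence has occurred.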
Do you have some reference where this is explained?
My favorite paper on this is:
CASE J. & SMITH C., 1983, Comparison of Identification Criteria for Machine Inductive Inference. Theoretical Computer Science 25, pp. 193-220.
But since then, a ton of papers have been published, notably in the COLT proceedings.
There is also the book:
OSHERSON D.N., STOB M. and WEINSTEIN S., 1986, Systems that Learn, MIT Press. (A new edition has since appeared.)
It is a recursion-theoretic field. Of course it does not interest the engineers so much, as most results are non-constructive, indeed necessarily so.
Two basic fundamental papers are:
GOLD E.M., 1965, Limiting Recursion. Journal of Symbolic Logic 30, 1, pp. 27-48.
GOLD E.M., 1967, Language Identification in the Limit. Information and Control 10.
Another one is:
BLUM L. & BLUM M., 1975, Toward a Mathematical Theory of Inductive Inference. Information and Control 28, pp. 125-155.
There is a full chapter on this in "Conscience et mécanisme". You will find other references in the "bibliographie générale" pdf.
The gap G* minus G formalizes a simple set of self-referential truths which are inferable, by inductive inference, but not provable by Löbian machines (the 'mystical machines').
You received this message because you are subscribed to the Google Groups
"Everything List" group.