On Tue, Aug 26, 2008 at 3:10 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Abram,
>
> I suspect what it comes down to - I'm tossing this out off-the-cuff - is
> that each new branch of maths involves new rules, new operations on numbers
> and figures, and new ways of relating the numbers and figures to real
> objects and sometimes new signs, period. And they aren't predictable or
> derivable from previous ones. Set theory is ultimately a v. useful
> convention, not an absolute necessity?

I think this is true in one sense and false in another, so I'll try to
be careful to distinguish. Mathematicians could have come up with
everything they have come up with w/o the aid of set theory-- it might
have been harder, but not impossible. (To take a specific example, it
seems like they would have to reconstruct the idea of infinite
ordinals in so many cases that it is unrealistic to suppose they
wouldn't notice the similarity and construct a general theory; but we
shall suppose it anyway for argument's sake.) However, in another
sense, it may be impossible: namely, it would not seem possible for
mathematicians to have done much math at all if set theory wasn't "in
there somewhere," that is, if mathematicians thought in a way that did
not admit sets as a coherent concept. I am not claiming that set
theory is "the logic of thought," but I do think it is to be
distinguished from things like 2d geometry that mathematicians
investigated essentially because of the sense-modalities at hand.

>
> Perhaps this overlaps with our previous discussion, which could perhaps be
> reduced to - is there a universal learning program - an AGI that can learn
> any skill? That perhaps can be formalised as - is there a program that can
> learn any program - a set of rules for learning any set of rules? I doubt
> it. Especially  if as we see with the relatively simple logic discussions on
> this forum, people can't agree on which rules/conventions/systems to apply,
> i.e. there are no definitive rules.

Three days ago, Matt Mahoney referenced a paper by Shane Legg that
supposedly gives a formal proof of this point:

http://www.vetta.org/documents/IDSIA-12-06-1.pdf

I read it, and must say that I disagree with the interpretations
provided for the theorems. Specifically, one conclusion is that
because programs of high Kolmogorov complexity are required if we want
to guarantee the ability to learn sequences of comparably high
Kolmogorov complexity, AI needs to be an experimental science. So,
Shane Legg is assuming that highly complex programs are difficult to
invent. But there is an easy counterexample to this, which also
addresses your above point:

Let T be the amount of computation time the algorithm is given
between sensory inputs. Ideally, sensory input can be thought of as
coming in at a rate of 1 bit per T cpu cycles (fitting the framework
in the paper, which has data come in 1 bit at a time), although in
practice it would probably come in batches. Each time period T:
--Add the new input to a memory of all the data that has come in so far.
--Treat the memory as the output of a computer program in some
specific language. Run the program backwards, inferring everything
that can be inferred about its structure. A zero or one can only be
printed by particular basic print statements. It is impossible to know
for certain where conditional statements are, where loops are, and so
on, but at least the space of possibilities is well defined (since we
know which programming language we've chosen). Every time such a
choice occurs, we split the simulation, so that we will quickly be
running a very large number of programs backwards.
--Whenever we get a complete program from this process, we need to run
it forwards (again, simulating it in parallel with everything else
that is going on). We record what it predicts as the NEXT data, along
with the program's length (because we will be treating shorter
programs as better models, and trusting what they tell us more
strongly than we trust longer programs).
--Because there are so many things going on at once, this will run
VERY slowly; however, we will simply terminate the process at time T
and take the best prediction we have at that point. (If we hadn't
gotten any yet, let's just say we predict 0.)
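Since literally running programs backwards is hard to demonstrate, here is a toy forward-enumeration sketch of the same idea: enumerate programs of a tiny made-up language in order of length, keep those whose output extends the memory so far, and predict from the shortest match, with a step budget standing in for T. The toy language, the budget, and all the names here are my own illustrative choices, not anything from the papers:

```python
from itertools import product

def run(prog, budget, need):
    """Interpret a toy program: '0'/'1' print a bit, 'L' jumps back to
    the start. Stop after `budget` steps or once `need` bits are out."""
    out, pc, steps = [], 0, 0
    while steps < budget and len(out) < need and pc < len(prog):
        op = prog[pc]
        steps += 1
        if op == "L":
            pc = 0          # loop back to the first instruction
        else:
            out.append(op)  # print the bit and advance
            pc += 1
    return "".join(out)

def predict(history, budget=1000, max_len=8):
    """Return (predicted next bit, shortest toy program consistent
    with `history`). Falls back to '0' if nothing is found in time."""
    need = len(history) + 1
    for length in range(1, max_len + 1):
        for prog in ("".join(p) for p in product("01L", repeat=length)):
            out = run(prog, budget, need)
            if len(out) == need and out.startswith(history):
                return out[-1], prog  # shorter programs are trusted more
    return "0", None
```

For example, predict("010101") finds the three-instruction program "01L" and predicts "0"; when nothing fits within the limits, it predicts 0, as in the last step above.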

A more sophisticated version of that algorithm was presented at the
AGI conference in this paper:

http://www.agiri.org/docs/ComputationalApproximation.pdf

The algorithm will be able to learn any program, if given enough time!

NOW, why did Shane Legg's paper say that such a thing was impossible?
Well, in the formalism of the paper, the above "algorithm" cheated: it
isn't an algorithm at all! Fun, huh?

The reason is that I parameterized it in terms of that number T. So,
technically, it is a class of algorithms; we get a specific algorithm
by choosing a T-value. If we choose a very large T-value, the
algorithm could be very "complex" in terms of Kolmogorov complexity.
However, it will not be complex to humans, since it will just be
another instance of the same general algorithm! In fact, it would just
be the same algorithm given more time to do its job.
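To make the "class of algorithms" point concrete, here is a hypothetical sketch (the `bounded_search` name and the template are inventions for illustration): each choice of T yields a technically distinct program, but the instances differ only in the digits of T.

```python
# Template for the whole family; only the number T changes per instance.
TEMPLATE = "def predict(history):\n    return bounded_search(history, budget={T})\n"

def instance_source(T):
    """Source code of the specific algorithm with T hard-coded."""
    return TEMPLATE.format(T=T)

small = instance_source(1000)      # T = 10^3
huge = instance_source(10**100)    # T = 10^100

# The two sources differ only in the digits of T, so the description
# length (an upper bound on Kolmogorov complexity) grows roughly like
# log T, while the part a human must understand -- the template --
# stays constant.
```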

So, on those grounds, I disagree with Shane Legg: it is still
possible to find algorithms analytically, despite the found algorithms
being "of high complexity". Or, to rephrase: there are simple
algorithms of high Kolmogorov complexity, and those algorithms do the
job that can't be done by algorithms of low Kolmogorov complexity.

>
> All this can perhaps be formalised neatly, near geometrically. (I'm still
> groping you understand). If we think of a screen of pixels - can all the
> visual games or branches of maths or art that can be expressed on that
> screen - mazes/maze-running/2d geometry/ 3d geometry/Riemannian/ abstract
> art/ chess/ go etc  - be united under - or derived from - a common set of
> metarules?

Now, THAT is more complicated :). You did not limit the situation to
computable "games", so I cannot answer yes by reminding you that the
set of computer programs is well-defined. You might be referring also
to uncomputable visual games. I WANT to answer "yes" in that case, but
I do not have any logic to point to that does the job. (A working
mathematician might point to set theory, and indeed it is good enough
for practically any game that you or I might naturally come up with,
but it is possible to come up with contrived cases that cannot be
defined inside set theory, yet evidently can be defined by a human.
For example, define an ordering on these games based on what colors
the pixels are actually colored; say, the first pixel to be a
different color decides the order, traveling left to right, top to
bottom, forward in time. The colors are ordered by their RGB hex
codes. Once we have the ordering, I can define the game as: create the
lowest video sequence not definable in set theory. Neither you nor I
have the ability to actually win this game, but it is well-defined;
therefore, set theory is not enough to do the job for us. And, as you
can see, I didn't use any special properties of set theory in this
proof, so most logics will fall to the same argument...)

>
> It should be fairly easy :) for an up-and-coming maths star like you to
> prove the obvious - that it isn't possible. Kauffman was looking for
> something like this. It's equivalent, it seems to me, to proving that you
> cannot derive any stage of evolution of matter or life from the previous one
> - that the world is fundamentally creative - that there are always new ways
> and new rules to join up the dots.

Well, maybe you accept the above parenthetical statement as such a
proof. I, on the other hand, still want a way out. What happens if I
replace set theory by a mathematician's inner logic?

"the first sequence not definable by human mathematicians"

Did I just define a sequence? It does not really seem like it, does
it? After all, if I really did, then the sequence I defined was not
the one I was after; I was after a non-definable sequence! So, the
above definition is not paradoxical in my book, it is just bad. And so
it seems that my intuitive logic escapes the above proof of set
theory's inadequacy. And so, it seems, such a logic could exist!
Right?

Maybe?

Hopefully?

--Abram Demski

>
>> Mike,
>>
>> The answer here is a yes. Many new branches of mathematics have arisen
>> since the formalization of set theory, but most of them can be
>> interpreted as special branches of set theory. Moreover,
>> mathematicians often find this to be actually useful, not merely a
>> curiosity.
>>
>> --Abram Demski
>>
>> On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Valentina:In other words I'm looking for a way to mathematically define
>>> how
>>> the AGI will mathematically define its goals.
>>>
>>> Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
>>> been logically or mathematically (axiomatically) derivable from any old
>>> one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
>>> free-form deformation  etc etc
>>> ________________________________
>>> agi | Archives | Modify Your Subscription
>>
>>
>> -------------------------------------------
>> agi
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> Modify Your Subscription: https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>

