Ben,
My discussion of meaning was supposed to clarify that. The final
definition is the broadest I currently endorse, and it admits
meaningful uncomputable facts about numbers. It does not appear to get
into the realm of set theory, though.
--Abram
On Tue, Oct 21, 2008 at 12:07 PM, Ben Goertzel
On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski [EMAIL PROTECTED] wrote:
As it happens, this definition of
meaning admits horribly-terribly-uncomputable-things to be described!
(Far worse than the above-mentioned super-omegas.) So, the truth or
falsehood is very much not computable.
I'm
Try Rudy Rucker's book Infinity and the Mind for a good nontechnical
treatment of related ideas
http://www.amazon.com/Infinity-Mind-Rudy-Rucker/dp/0691001723
The related wikipedia pages are a bit technical ;-p , e.g.
http://en.wikipedia.org/wiki/Inaccessible_cardinal
On Tue, Oct 21, 2008
Russell,
The wikipedia article Ben cites is definitely meant for
mathematicians, so I will try to give an example.
The halting problem asks us about halting facts for a single program.
To make it worse, I could ask about an infinite class of programs:
"All programs satisfying Q eventually halt."
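To make the jump in difficulty concrete, here is a minimal Python sketch (my own illustration, not from the thread): halting for a single program is semi-decidable by simulation, but the universal claim above quantifies over infinitely many programs, so no finite amount of simulation can settle it. The toy programs and step budget are invented for the example.

```python
def halts_within(prog, steps):
    """Semi-decision procedure: simulate `prog` (a Python generator
    function) for at most `steps` steps. True means it definitely
    halted; False only means it did not halt *yet*."""
    it = prog()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True
    return False  # unknown: may halt later, may run forever

def halting_prog():   # halts after 3 steps
    for _ in range(3):
        yield

def looping_prog():   # never halts
    while True:
        yield

print(halts_within(halting_prog, 100))  # True
print(halts_within(looping_prog, 100))  # False (really: "don't know")
```

No analogous procedure exists for the quantified statement: confirming it would require this check to succeed for every program in the infinite class.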
Abram Demski wrote:
Ben,
...
One reasonable way of avoiding the "humans are magic" explanation of
this (or "humans use quantum gravity computing", etc.) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
Charles,
You are right to call me out on this, as I really don't have much
justification for rejecting that view beyond "I don't like it, it's
not elegant."
But, I don't like it! It's not elegant!
About the connotations of engineer... more specifically, I should
say that this prevents us from
I am completely unable to understand what this paragraph is supposed to
mean:
***
One reasonable way of avoiding the "humans are magic" explanation of
this (or "humans use quantum gravity computing", etc.) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those
On Wed, Oct 22, 2008 at 11:21 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Personally my view is as follows. Science does not need to intuitively
explain all
aspects of our experience: what it has to do is make predictions about
finite sets of finite-precision observations, based on
Ben,
This is not what I meant at all! I am not trying to make an argument
from any sort of intuitive feeling of absolute free will in that
paragraph (or, well, ever).
That paragraph was referring to Tarski's undefinability theorem.
Quoting the context directly before the paragraph in question:
Abram,
To re-explain: We might construct generalizations of AIXI that use a
broader range of models. Specifically, it seems reasonable to try
models that are extensions of first-order arithmetic, such as
second-order arithmetic (analysis), ZF-set theory... (Models in
first-order logic of
It doesn't, because **I see no evidence that humans can
understand the semantics of formal system in X in any sense that
a digital computer program cannot**
I agree with you there. Our disagreement is about what formal systems
a computer can understand. (The rest of your post seems to depend
Ben,
How accurate would it be to describe you as a finitist or
ultrafinitist? I ask because your view about restricting quantifiers
seems to reject even the infinities normally allowed by
constructivists.
--Abram
On Tue, Oct 21, 2008 at 10:11 PM, Abram Demski [EMAIL PROTECTED] wrote:
It doesn't, because **I see no evidence that humans can
understand the semantics of formal system in X in any sense that
a digital computer program cannot**
I agree with you there. Our disagreement is about what formal
I am a Peircean pragmatist ...
I have no objection to using infinities in mathematics ... they can
certainly be quite useful. I'd rather use differential calculus to do
calculations, than do everything using finite differences.
It's just that, from a science perspective, these mathematical
On Wed, Oct 22, 2008 at 3:11 AM, Abram Demski [EMAIL PROTECTED] wrote:
I agree with you there. Our disagreement is about what formal systems
a computer can understand.
I'm also not quite sure what the problem is, but suppose we put it this way:
I think the most useful way to understand the
Russell,
I could be wrong here. Jürgen's Super Omega is based on what I called
halting2, and while it would be simple to define super-super-omega
from halting3, and so on, I have not seen it done. The reason I called
these higher levels horribly-terribly-uncomputable is because
Jürgen's
too detailed for me
to ever see?
Charles Griffiths
--- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] constructivist issues
To: agi@v2.listbox.com
Date: Tuesday, October 21, 2008, 7:56 PM
I am a Peircean pragmatist ...
I have
I do not understand what kind of understanding of noncomputable numbers you
think a human has, that AIXI could not have. Could you give a specific
example of this kind of understanding? What is some fact about
noncomputable numbers that a human can understand but AIXI cannot? And how
are you
Ben,
The most extreme case is if we happen to live in a universe with
uncomputable physics, which of course would violate the AIXI
assumption. This could be the case merely because we have physical
constants that have no algorithmic description (but perhaps still have
mathematical descriptions).
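A standard example of such a quantity is Chaitin's Omega: it has a precise mathematical definition but is approximable only from below, never exactly computable. The Python sketch below is my own toy analogue, using an invented prefix-free "machine" whose halting programs are exactly the bit strings 0...01 (it is not a real universal machine); it shows how lower bounds improve monotonically without ever reaching the limit.

```python
def halts(bits):
    """Toy prefix-free halting set: a program halts iff it consists of
    zero or more 0s followed by a single 1."""
    return bits.endswith('1') and bits.count('1') == 1

def toy_omega_lower_bound(max_len):
    """Sum 2**-len(p) over halting programs p of length <= max_len,
    a computable lower bound on the (toy) halting probability."""
    total = 0.0
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            bits = format(i, '0' + str(n) + 'b')
            if halts(bits):
                total += 2.0 ** (-n)
    return total

print(toy_omega_lower_bound(10))  # a lower bound approaching, but never reaching, 1
```

For a real universal machine the same from-below enumeration exists, but deciding which long programs halt is itself the halting problem, which is what makes the limit uncomputable.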
Yes, if we live in a universe that has Turing-uncomputable physics, then
obviously AIXI is not necessarily going to be capable of adequately dealing
with that universe ... and nor is AGI based on digital computer programs
necessarily going to be able to equal human intelligence.
In that case, we
Ben,
I agree that these issues don't need to have much to do with
implementation... William Pearson convinced me of that, since his
framework is about as general as general can get. His idea is to
search the space of *internal* programs rather than *external* ones,
so that we aren't assuming that
I am not sure about your statements 1 and 2. Generally responding,
I'll point out that uncomputable models may compress the data better
than computable ones. (A practical example would be fractal
compression of images. Decompression is not exactly a computation
because it never halts, we
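The fractal-decompression point can be sketched as an iterated contraction map: the "decoded" image is the attractor of an infinite iteration, and any halting decoder only approximates it to finite precision. A 1-D toy in Python (my own illustration, not a real image codec):

```python
def ifs_decode(x0, contraction, iterations):
    """Iterate a contraction map; the true 'decoded' value is the
    attractor (fixed point), reached only in the limit."""
    x = x0
    for _ in range(iterations):
        x = contraction(x)
    return x

# Contraction with fixed point 2.0: x -> 0.5*x + 1.0.
f = lambda x: 0.5 * x + 1.0
print(ifs_decode(0.0, f, 50))  # close to the attractor 2.0, never exactly it in general
```

Truncating the iteration gives a computable approximation at any desired precision, but the exact attractor is defined only by the non-halting limit process.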
My statement was
***
if you take any uncomputable universe U, there necessarily exists some
computable universe C so that
1) there is no way to distinguish U from C based on any finite set of
finite-precision observations
2) there is no finite set of sentences in any natural or formal language
Ben,
[my statement] seems to incorporate the assumption of a finite
period of time because a finite set of sentences or observations must
occur during a finite period of time.
A finite set of observations, sure, but a finite set of statements can
include universal statements.
Fractal image
--- On Mon, 10/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I do have a limited argument against these ideas, which has to do with
language. My point is that, if you take any uncomputable universe
U, there necessarily exists some computable universe C so that
1) there is no way to
On Mon, Oct 20, 2008 at 5:29 PM, Abram Demski [EMAIL PROTECTED] wrote:
Ben,
[my statement] seems to incorporate the assumption of a finite
period of time because a finite set of sentences or observations must
occur during a finite period of time.
A finite set of observations, sure, but a
Abram,
I find it more useful to think in terms of Chaitin's reformulation of
Godel's Theorem:
http://www.cs.auckland.ac.nz/~chaitin/sciamer.html
Given any computer program with algorithmic information capacity less than
K, it cannot prove theorems whose algorithmic information content is
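Roughly, Chaitin's result can be rendered as follows (an informal statement, my wording: c_F is a constant depending on the formal system F, and K denotes prefix Kolmogorov complexity):

```latex
\exists\, c_F \;\; \forall s :\quad F \nvdash \bigl( K(s) > K(F) + c_F \bigr)
```

That is, a system of complexity K(F) can prove lower bounds on the complexity of specific strings only up to K(F) plus a fixed constant, which is the "information capacity" reading referenced above.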
Ben,
I don't know what sounded "almost confused", but anyway it is apparent
that I didn't make my position clear. I am not saying we can
manipulate these things directly via exotic (non)computing.
First, I am very specifically saying that AIXI-style AI (meaning, any
AI that approaches AIXI as
--- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:
No, I do not claim that computer theorem-provers cannot
prove Goedel's Theorem. It has been done. The objection applies
specifically to AIXI -- AIXI cannot prove Goedel's theorem.
Yes it can. It just can't understand its own proof in
But, either you're just wrong or I don't understand your wording ... of
course, AIXI **can** reason about uncomputable entities. If you showed AIXI
the axioms of, say, ZF set theory (including the Axiom of Choice), and
reinforced it for correctly proving theorems about uncomputable entities as
Matt,
Yes, that is completely true. I should have worded myself more clearly.
Ben,
Matt has sorted out the mistake you are referring to. What I meant was
that AIXI is incapable of understanding the proof, not that it is
incapable of producing it. Another way of describing it: AIXI could
learn
Ben,
How so? Also, do you think it is nonsensical to put some probability
on noncomputable models of the world?
--Abram
On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
But: it seems to me that, in the same sense that AIXI is incapable of
understanding proofs about
Ben,
Just to clarify my opinion: I think an actual implementation of the
novamente/OCP design is likely to overcome this difficulty. However,
to the extent that it approximates AIXI, I think there will be
problems of these sorts.
The main reason I think OCP/novamente would *not* approximate AIXI
Matt,
I suppose you don't care about Steve's "do not comment" request? Oh
well, I want to discuss this anyway. 'Tis why I posted in the first
place.
No, I do not claim that computer theorem-provers cannot prove Goedel's
Theorem. It has been done. The objection applies specifically to
AIXI-- AIXI