Re: [agi] constructivist issues

2008-10-22 Thread Abram Demski
Too many responses for me to comment on everything! So, sorry to those I don't address... Ben, When I claim a mathematical entity exists, I'm saying loosely that meaningful statements can be made using it. So, I think "meaning" is more basic. I mentioned already what my current definition of mean

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 7:47 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >> The problem is to gradually improve overall causal model of >> environment (and its application for control), including language and >> dynamics of the world. Better model allows more detailed experience, >> and so throug

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Hallucinogens, if not the subsequently warped thoughts, do have the serious value of raising your mental Boltzmann temperature. - Original Message - From: Ben Goertzel To: agi@v2.listbox.com Sent: Wednesday, October 22, 2008 11:11 AM Subject: Re: [agi] constructivist issues On

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(joke) What? You don't love me any more? - Original Message - From: Ben Goertzel To: agi@v2.listbox.com Sent: Wednesday, October 22, 2008 11:11 AM Subject: Re: [agi] constructivist issues (joke) On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel <[EMAIL P

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
- Original Message - From: Ben Goertzel To: agi@v2.listbox.com Sent: Wednesday, October 22, 2008 11:11 AM Subject: Re: [agi] constructivist issues Personally, rather than starting with NLP, I think that we're going to need to start with a formal language that is a disambiguat

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > The problem is to gradually improve overall causal model of > environment (and its application for control), including language and > dynamics of the world. Better model allows more detailed experience, > and so through having a better inbuilt model of an aspect of > environment, such as langua

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 7:22 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > However, it's possible that working with Lojban could help cut through > the following "chicken and egg" problem: > > -- if your AI understands the world, then it can disambiguate language > > -- if your AI can disambiguat

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > It looks like all this "disambiguation" by moving to a more formal > language is about sweeping the problem under the rug, removing the > need for uncertain reasoning from surface levels of syntax and > semantics, to remember about it 10 years later, retouch the most > annoying holes with simpl

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I disagree, and believe that I can think X: "This is a thought (T) that is >> way too complex for me to ever have." >> Obviously, I can't think T and then think X, but I might represent T as a >> combination of myself plus a notebook or some other external media. Even if >> I only observe par

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 7:11 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >> >> >> Personally, rather than starting with NLP, I think that we're going to >> need to start with a formal language that is a disambiguated subset of >> English > > IMHO that is an almost hopeless approach, ambiguity is

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
performance and I definitely need to know the details if I'm diagnosing/fixing/debugging it -- but I can always learn them as I go . . . . - Original Message - From: Ben Goertzel To: agi@v2.listbox.com Sent: Tuesday, October 21, 2008 11:26 PM Subject: Re: [agi] constructivis

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
(joke) On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > >> >> I don't want to diss the personal value of logically inconsistent >> thoughts. But I doubt their scientific and engineering valu

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > >> I don't want to diss the personal value of logically inconsistent > thoughts. But I doubt their scientific and engineering value. > It doesn't seem to make sense that something would have personal value and > then not ha

Re: [agi] constructivist issues

2008-10-22 Thread Ben Goertzel
> > Personally, rather than starting with NLP, I think that we're going to need > to start with a formal language that is a disambiguated subset of English IMHO that is an almost hopeless approach, ambiguity is too integral to English or any natural language ... e.g. preposition ambiguity If you

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(1) We humans understand the semantics of formal system X. No. This is the root of your problem. For example, replace "formal system X" with "XML". Saying that "We humans understand the semantics of XML" certainly doesn't work and why I would argue that natural language understanding is AG

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> You have not convinced me that you can do anything a computer can't do. >> And, using language or math, you never will -- because any finite set of >> symbols >> you can utter, could also be uttered by some computational system. >> -- Ben G Can we pin this somewhere? (Maybe on Penrose? ;-)

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I don't want to diss the personal value of logically inconsistent thoughts. >> But I doubt their scientific and engineering value. It doesn't seem to make sense that something would have personal value and then not have scientific or engineering value. I can sort of understand science if you

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
> It doesn't, because **I see no evidence that humans can > understand the semantics of formal system X in any sense that > a digital computer program cannot** I just argued that humans can't understand the totality of any formal system X due to Gödel's Incompleteness Theorem but the rest of t

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
> You may not like "Therefore, we cannot understand the math needed to define > our own intelligence.", but I'm rather convinced that it's correct. Do you mean to say that there are parts that we can't understand or that the totality is too large to fit and that it can't be cleanly and completel

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
> Isn't it just like thinking "This is an image that is way too detailed for > me to ever see"? > > Charles Griffiths > > --- On *Tue, 10/21/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote: > > From: Ben Goertzel <[EMAIL PROTECTED]> > Subject: Re:

Re: [agi] constructivist issues

2008-10-21 Thread charles griffiths
"This is an image that is way too detailed for me to ever see"? Charles Griffiths --- On Tue, 10/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: From: Ben Goertzel <[EMAIL PROTECTED]> Subject: Re: [agi] constructivist issues To: agi@v2.listbox.com Date: Tuesday, October 21, 2008, 7

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Russell, I could be wrong here. Jurgen's Super Omega is based on what I called "halting2", and while it would be simple to define super-super-omega from halting3, and so on, I have not seen it done. The reason I called these higher levels "horribly-terribly-uncomputable" is because Jurgen's super-o

Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Wed, Oct 22, 2008 at 3:11 AM, Abram Demski <[EMAIL PROTECTED]> wrote: > I agree with you there. Our disagreement is about what formal systems > a computer can understand. I'm also not quite sure what the problem is, but suppose we put it this way: I think the most useful way to understand the

Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Tue, Oct 21, 2008 at 8:13 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > The wikipedia article Ben cites is definitely meant for > mathematicians, so I will try to give an example. Yes indeed -- thanks! > The halting problem asks us about halting facts for a single program. > To make it worse,

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
I am a Peircean pragmatist ... I have no objection to using infinities in mathematics ... they can certainly be quite useful. I'd rather use differential calculus to do calculations, than do everything using finite differences. It's just that, from a science perspective, these mathematical infin

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 10:11 PM, Abram Demski <[EMAIL PROTECTED]>wrote: > > It doesn't, because **I see no evidence that humans can > > understand the semantics of formal system X in any sense that > > a digital computer program cannot** > > I agree with you there. Our disagreement is about wh

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben, How accurate would it be to describe you as a finitist or ultrafinitist? I ask because your view about restricting quantifiers seems to reject even the infinities normally allowed by constructivists. --Abram

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
> It doesn't, because **I see no evidence that humans can > understand the semantics of formal system X in any sense that > a digital computer program cannot** I agree with you there. Our disagreement is about what formal systems a computer can understand. (The rest of your post seems to depend

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
Abram, > To re-explain: We might construct generalizations of AIXI that use a > broader range of models. Specifically, it seems reasonable to try > models that are extensions of first-order arithmetic, such as > second-order arithmetic (analysis), ZF-set theory... (Models in > first-order logic o

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben, This is not what I meant at all! I am not trying to make an argument from any sort of "intuitive feeling" of "absolute free will" in that paragraph (or, well, ever). That paragraph was referring to Tarski's undefinability theorem. Quoting the context directly before the paragraph in questio

Re: [agi] constructivist issues

2008-10-21 Thread Trent Waddington
On Wed, Oct 22, 2008 at 11:21 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > Personally my view is as follows. Science does not need to intuitively > explain all > aspects of our experience: what it has to do is make predictions about > finite sets of finite-precision observations, based on previou

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
I am completely unable to understand what this paragraph is supposed to mean: *** One reasonable way of avoiding the "humans are magic" explanation of this (or "humans use quantum gravity computing", etc) is to say that, OK, humans really are an approximation of an ideal intelligence obeying those

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Charles, You are right to call me out on this, as I really don't have much justification for rejecting that view beyond "I don't like it, it's not elegant". But, I don't like it! It's not elegant! About the connotations of "engineer"... more specifically, I should say that this prevents us from

Re: [agi] constructivist issues

2008-10-21 Thread Charles Hixson
Abram Demski wrote: Ben, ... One reasonable way of avoiding the "humans are magic" explanation of this (or "humans use quantum gravity computing", etc) is to say that, OK, humans really are an approximation of an ideal intelligence obeying those assumptions. Therefore, we cannot understand the ma

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Russell, The wikipedia article Ben cites is definitely meant for mathematicians, so I will try to give an example. The halting problem asks us about halting facts for a single program. To make it worse, I could ask about an infinite class of programs: "All programs satisfying Q eventually halt."
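The distinction Abram draws here — between halting facts about a single program and quantified statements over infinitely many programs — rests on the halting problem being semi-decidable: running a program can confirm halting, but simulation alone can never confirm non-halting. A minimal sketch of that asymmetry (all names and the generator-based toy programs are illustrative, not from the thread):

```python
def halts_within(program, steps):
    """Semi-decision procedure for halting: run `program` (a generator,
    one yield per step) under a step budget. 'halted' is a definitive
    answer; 'unknown' never is, no matter how large the budget."""
    it = program()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return "halted"  # positive answers are always reachable
    return "unknown"         # negative answers are not

def quickly_halting():
    for i in range(3):
        yield i  # three steps, then halt

def looping_forever():
    while True:
        yield 0  # never halts

assert halts_within(quickly_halting, 100) == "halted"
assert halts_within(looping_forever, 100) == "unknown"
```

A statement like "All programs satisfying Q eventually halt" is strictly worse: no finite amount of simulation can confirm it either, which is the extra level of uncomputability the message goes on to describe.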

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
Try Rudy Rucker's book "Infinity and the Mind" for a good nontechnical treatment of related ideas http://www.amazon.com/Infinity-Mind-Rudy-Rucker/dp/0691001723 The related wikipedia pages are a bit technical ;-p , e.g. http://en.wikipedia.org/wiki/Inaccessible_cardinal On Tue, Oct 21, 2008

Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > As it happens, this definition of > meaning admits horribly-terribly-uncomputable-things to be described! > (Far worse than the above-mentioned super-omegas.) So, the truth or > falsehood is very much not computable. > > I'm

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben, My discussion of "meaning" was supposed to clarify that. The final definition is the broadest I currently endorse, and it admits meaningful uncomputable facts about numbers. It does not appear to get into the realm of set theory, though. --Abram On Tue, Oct 21, 2008 at 12:07 PM, Ben Goertze

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
> > > But, worse, there are mathematically well-defined entities that are > not even enumerable or co-enumerable, and in no sense seem computable. > Of course, any axiomatic theory of these objects *is* enumerable and > therefore intuitively computable (but technically only computably > enumerable)

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben, Unfortunately, this response is going to be (somewhat) long, because I have several points that I want to make. If I understand what you are saying, you're claiming that if I pointed to the black box and said "That's a halting oracle", I'm not describing the box directly, but instead describ

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 10:30 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Mon, 10/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > I do have a limited argument against these ideas, which has to do with > > language. My point is that, if you take any uncomputable universe > > U, ther

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 5:29 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > Ben, > > "[my statement] seems to incorporate the assumption of a "finite > period of time" because a finite set of sentences or observations must > occur during a finite period of time." > > A finite set of observations, s

Re: [agi] constructivist issues

2008-10-20 Thread Matt Mahoney
--- On Mon, 10/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > I do have a limited argument against these ideas, which has to do with > language.   My point is that, if you take any uncomputable universe > U, there necessarily exists some computable universe C so that > > 1) there is no way to di

Re: [agi] constructivist issues

2008-10-20 Thread Abram Demski
Ben, "[my statement] seems to incorporate the assumption of a "finite period of time" because a finite set of sentences or observations must occur during a finite period of time." A finite set of observations, sure, but a finite set of statements can include universal statements. "Fractal image

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
My statement was *** if you take any uncomputable universe U, there necessarily exists some computable universe C so that 1) there is no way to distinguish U from C based on any finite set of finite-precision observations 2) there is no finite set of sentences in any natural or formal language (
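Ben's claim (1) — that no finite set of finite-precision observations can separate an uncomputable universe U from some computable C — can be made concrete with a toy sketch: any finite list of rounded readings of a constant is reproduced exactly by a rational, which is trivially computable. Here math.pi merely stands in for an "uncomputable" quantity (pi is of course computable); all names are illustrative.

```python
from fractions import Fraction
import math

def observe(x, digits):
    """A finite-precision observation: x rounded to `digits` decimal places."""
    return round(x, digits)

def computable_surrogate(observations):
    """A computable object (a list of rationals) that reproduces every
    observation actually made, at the precision it was made."""
    return [Fraction(str(o)) for o in observations]

# Five finite-precision observations of the "uncomputable" constant.
obs = [observe(math.pi, d) for d in range(1, 6)]
surrogate = computable_surrogate(obs)

# On exactly the observations made, the surrogate is indistinguishable
# from the original -- which is all claim (1) asserts.
assert [float(f) for f in surrogate] == obs
```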

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
> > I am not sure about your statements 1 and 2. Generally responding, > I'll point out that uncomputable models may compress the data better > than computable ones. (A practical example would be fractal > compression of images. Decompression is not exactly a computation > because it never halts, w
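The fractal-decompression point — a "computation" that never halts but converges — can be sketched with any contraction map: the iteration never literally reaches its fixed point, yet every finite-precision output arrives after finitely many steps. The map x → cos(x) below is a generic stand-in chosen for illustration, not taken from the thread.

```python
import math

def decompress(steps):
    """Iterate the contraction x -> cos(x) from a fixed seed. The limit
    (the Dottie number, ~0.7390851) is never reached exactly; halting the
    loop at any finite `steps` yields a finite-precision approximation."""
    x = 0.5
    for _ in range(steps):
        x = math.cos(x)
    return x

# More iterations -> a strictly better approximation of the fixed point.
a, b = decompress(10), decompress(100)
assert abs(b - 0.739085) < 1e-6
assert abs(b - math.cos(b)) < abs(a - math.cos(a))
```

This is the sense in which the decompressor "never halts with the ideal result" while still being perfectly usable at any finite precision.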

Re: [agi] constructivist issues

2008-10-20 Thread Abram Demski
Ben, I agree that these issues don't need to have much to do with implementation... William Pearson convinced me of that, since his framework is about as general as general can get. His idea is to search the space of *internal* programs rather than *external* ones, so that we aren't assuming that

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
Yes, if we live in a universe that has Turing-uncomputable physics, then obviously AIXI is not necessarily going to be capable of adequately dealing with that universe ... and nor is AGI based on digital computer programs necessarily going to be able to equal human intelligence. In that case, we m

Re: [agi] constructivist issues

2008-10-20 Thread Abram Demski
Ben, The most extreme case is if we happen to live in a universe with uncomputable physics, which of course would violate the AIXI assumption. This could be the case merely because we have physical constants that have no algorithmic description (but perhaps still have mathematical descriptions). A

Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
I do not understand what kind of understanding of noncomputable numbers you think a human has, that AIXI could not have. Could you give a specific example of this kind of understanding? What is some fact about noncomputable numbers that a human can understand but AIXI cannot? And how are you def

Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Ben, Just to clarify my opinion: I think an actual implementation of the novamente/OCP design is likely to overcome this difficulty. However, to the extent that it approximates AIXI, I think there will be problems of these sorts. The main reason I think OCP/novamente would *not* approximate AIXI

Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Ben, How so? Also, do you think it is nonsensical to put some probability on noncomputable models of the world? --Abram On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > But: it seems to me that, in the same sense that AIXI is incapable of > "understanding" proofs abou

Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel
But: it seems to me that, in the same sense that AIXI is incapable of "understanding" proofs about uncomputable numbers, **so are we humans** ... On Sun, Oct 19, 2008 at 6:30 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > Matt, > > Yes, that is completely true. I should have worded myself more cle

Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Matt, Yes, that is completely true. I should have worded myself more clearly. Ben, Matt has sorted out the mistake you are referring to. What I meant was that AIXI is incapable of understanding the proof, not that it is incapable of producing it. Another way of describing it: AIXI could learn to

Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel
But, either you're just wrong or I don't understand your wording ... of course, AIXI **can** reason about uncomputable entities. If you showed AIXI the axioms of, say, ZF set theory (including the Axiom of Choice), and reinforced it for correctly proving theorems about uncomputable entities as def

Re: [agi] constructivist issues

2008-10-19 Thread Matt Mahoney
--- On Sat, 10/18/08, Abram Demski <[EMAIL PROTECTED]> wrote: > No, I do not claim that computer theorem-provers cannot > prove Gödel's Theorem. It has been done. The objection applies > specifically to AIXI-- AIXI cannot prove Gödel's theorem. Yes it can. It just can't understand its own proof

Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Ben, I don't know what sounded "almost confused", but anyway it is apparent that I didn't make my position clear. I am not saying we can manipulate these things directly via exotic (non)computing. First, I am very specifically saying that AIXI-style AI (meaning, any AI that approaches AIXI as res

Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel
Abram, I find it more useful to think in terms of Chaitin's reformulation of Gödel's Theorem: http://www.cs.auckland.ac.nz/~chaitin/sciamer.html Given any computer program with algorithmic information capacity less than K, it cannot prove theorems whose algorithmic information content is greater
