RE: [agi] On programming languages

2008-10-25 Thread John G. Rose
> From: Ben Goertzel [mailto:[EMAIL PROTECTED]] > Somewhat similarly, I've done coding on Windows before, but I dislike the operating system quite a lot, so in general I try to avoid any projects where I have to use it. > However, if I found some AGI project that I thought was more promis

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Abram Demski
Mark, "...and that the (actually explicit) assumption underlying the whole scientific method is that the same causes produce the same results. Comments?" It seems like a somewhat weaker assumption *could* work; namely, "the same causes produce the same probability distribution on effects". This

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser
>> -- truly general AI, even assuming the universe is computable, is impossible >> for any finite system Excellent. Unfortunately, I personally missed (or have forgotten) how AIXI shows or proves this (as opposed to invoking some other form of incompleteness) unless it is merely because of the

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser
I am arguing by induction, not deduction: If the universe is computable, then Occam's Razor holds. Occam's Razor holds. Therefore the universe is computable. Of course, I have proved no such thing. Yep. That's a better summation of what I was trying to say . . . . Except that I'd like to bri

Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
>> People seem to debate programming languages and OS's endlessly, and this list is no exception. Yes. And like all other debates there are good points and bad points. :-) >> To make progress on AGI, you just gotta make *some* reasonable choice and start building Strongly agree. Ot

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Ah. An excellent distinction . . . . Thank you. Very helpful. Would it then be accurate to say SCIENCE = LEARNING + TRANSMISSION? Or, how about, SCIENCE = GROUP LEARNING? - Original Message - From: "Russell Wallace" <[EMAIL PROTECTED]> To: Sent: Saturday, October 2

Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Ben Goertzel
AIXI shows a couple of interesting things... -- truly general AI, even assuming the universe is computable, is impossible for any finite system -- given any finite level L of general intelligence that one desires, there are some finite R, M so that you can create a computer with less than R processi

AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Matt Mahoney
--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote: > Ummm. It seems like you were/are saying then that because AIXI makes an assumption limiting its own applicability/proof (that it requires that the environment be computable) and because AIXI can make some valid conclusions

Re: [agi] On programming languages

2008-10-25 Thread Eric Burton
MW, mine was an editorial reply to what struck me as a superficial pronouncement on a subject not amenable to treatment so cursory. But I like it less now, and I apologize. Eric B

Re: [agi] On programming languages

2008-10-25 Thread Ben Goertzel
> Strong agreement with what you say but then effective rejection as a valid point because language issues frequently are a total barrier to entry for people who might have been able to do the algorithms and structures and cognitive architecture. > I'll even go so far as to use myself as

Re: [agi] On programming languages

2008-10-25 Thread Ben Goertzel
Agree, that was not a useful response ... On Sat, Oct 25, 2008 at 5:50 PM, Mark Waser <[EMAIL PROTECTED]> wrote: >> Surely a coherent reply to this assertion would involve the phrases "superstitious", "ignorant" and "FUD" > So why don't you try to generate one to prove your guess? > Are

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Russell Wallace
On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > Anyone else want to take up the issue of whether there is a distinction between competent scientific research and competent learning (whether or not both are being done by a machine) and, if so, what that distinction is?

Re: [agi] constructivist issues

2008-10-25 Thread Mark Waser
OK. A good explanation and I stand corrected and more educated. Thank you. - Original Message - From: "Abram Demski" <[EMAIL PROTECTED]> To: Sent: Saturday, October 25, 2008 6:06 PM Subject: Re: [agi] constructivist issues Mark, Yes. I wouldn't normally be so picky, but Godel's

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
So where is the difference? There is no difference. Cool. That's one vote. Anyone else want to take up the issue of whether there is a distinction between competent scientific research and competent learning (whether or not both are being done by a machine) and, if so, what that distinction

Re: [agi] constructivist issues

2008-10-25 Thread Abram Demski
Mark, Yes. I wouldn't normally be so picky, but Godel's theorem *really* gets misused. The way you used Godel's theorem made it sound (to me) as if you have a very fundamental confusion. You were using a theorem about the incompleteness of proof to talk about the incompleteness of truth, so it sound

Re: On architecture was Re: [agi] On programming languages

2008-10-25 Thread Steve Richfield
William, On 10/24/08, William Pearson <[EMAIL PROTECTED]> wrote: > I can't see a way to retrofit current systems to allow them to try out a new kernel and revert to the previous one if the new one is worse and malicious, without a human having to be involved. Digging into my grab bag of lo

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Matt Mahoney
--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > Scientists choose experiments to maximize information gain. There is no reason that machine learning algorithms couldn't do this, but often they don't. > Heh. I would say that scientists attempt to do this and machi

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Ummm. It seems like you were/are saying then that because AIXI makes an assumption limiting its own applicability/proof (that it requires that the environment be computable) and because AIXI can make some valid conclusions, that that "suggests" that AIXI's limiting assumptions are true of the

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Matt Mahoney
--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > The fact that Occam's Razor works in the real world suggests that the physics of the universe is computable. Otherwise AIXI would not apply. > Hmmm. I don't get this. Occam's razor simply says go with the simplest

Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
Surely a coherent reply to this assertion would involve the phrases "superstitious", "ignorant" and "FUD" So why don't you try to generate one to prove your guess? Are you claiming that I'm superstitious and ignorant? That I'm fearful and uncertain or trying to generate fearfulness and uncert

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Which "faulty" reasoning step are you talking about? You said that there is an alternative to ad hoc in optimal approximation. My request is that you show that the optimal approximation isn't going to just be determined in an ad hoc fashion. Your absurd strawman example of *using* a bad solut

Re: [agi] On programming languages

2008-10-25 Thread Eric Burton
> I'll even go so far as to use myself as an example. I can easily do C++ (since I've done so in the past) but all the baggage around it makes me consider it not worth my while. I certainly won't hesitate to use what is learned on that architecture but I'll be totally shocked if you aren't

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
On Sun, Oct 26, 2008 at 1:19 AM, Mark Waser <[EMAIL PROTECTED]> wrote: > You are now apparently declining to provide an algorithmic solution without arguing that not doing so is a disproof of your statement. > Or, in other words, you are declining to prove that Matt is incorrect in saying tha

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Vladimir said: > I pointed out only that it doesn't follow from AIXI that ad-hoc is justified. Matt used a chain of logic that went as follows: AIXI says that a perfect solution is not computable. However, a very general principle of both scientific research and machine learning is to favor si

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Scientists choose experiments to maximize information gain. There is no reason that machine learning algorithms couldn't do this, but often they don't. Heh. I would say that scientists attempt to do this and machine learning algorithms should do it. So where is the difference other than in
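The "maximize information gain" criterion both posters invoke has a standard formalization: choose the experiment with the largest expected reduction in entropy over the rival hypotheses. A minimal sketch, where the hypotheses, outcome probabilities, and experiment names are all invented for illustration:

```python
import math

# Two rival hypotheses with equal prior belief (invented for illustration).
prior = {"H1": 0.5, "H2": 0.5}

# Each experiment maps hypothesis -> P(outcome = "positive" | hypothesis).
experiments = {
    "decisive": {"H1": 0.95, "H2": 0.05},     # outcome strongly separates H1/H2
    "uninformative": {"H1": 0.5, "H2": 0.5},  # outcome says nothing
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(likelihoods):
    """Expected entropy reduction from running one experiment."""
    gain = 0.0
    for positive in (True, False):
        # Joint P(hypothesis, outcome), then P(outcome) and the posterior.
        joint = {h: prior[h] * (likelihoods[h] if positive else 1 - likelihoods[h])
                 for h in prior}
        p_outcome = sum(joint.values())
        posterior = {h: joint[h] / p_outcome for h in joint}
        gain += p_outcome * (entropy(prior) - entropy(posterior))
    return gain

best = max(experiments, key=lambda e: expected_info_gain(experiments[e]))
print(best)  # "decisive"
```

Under this criterion the "decisive" experiment wins because its outcome shifts the posterior sharply, while the "uninformative" one leaves the prior untouched and scores zero gain.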

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
The fact that Occam's Razor works in the real world suggests that the physics of the universe is computable. Otherwise AIXI would not apply. Hmmm. I don't get this. Occam's razor simply says go with the simplest explanation until forced to expand it and then only expand it as necessary. How

Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
>> Anyway language issues are just not the main problem in creating AGI. Getting the algorithms and structures and cognitive architecture right is dramatically more important. Strong agreement with what you say but then effective rejection as a valid point because language issues freque

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
On Sun, Oct 26, 2008 at 12:17 AM, Mark Waser <[EMAIL PROTECTED]> wrote: >> No, it doesn't justify ad-hoc; even when a perfect solution is impossible, you could still have an optimal approximation under given limitations. > So what is an optimal approximation under uncertainty? How do you kno

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Charles Hixson
Dr. Matthias Heger wrote: ... I think humans represent chess by a huge number of **visual** patterns. The chessboard is 8x8 squares. Probably, a human considers all 2x2, 3x3, 4x4 and even larger subsets of the chessboard at once besides the possible moves. We see if a pawn is alone or if a knight
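Dr. Heger's pattern-window idea is easy to make concrete: an 8x8 board offers (9-k)^2 distinct k-by-k windows. The counting sketch below is my illustration, not from his post:

```python
# How many k-by-k "visual windows" does an 8x8 chessboard offer
# a pattern-matching observer? There are (8 - k + 1)^2 of them.
BOARD = 8

def subsquares(k):
    """Top-left corners of every k x k window on the board."""
    return [(r, c) for r in range(BOARD - k + 1) for c in range(BOARD - k + 1)]

for k in (2, 3, 4):
    print(k, len(subsquares(k)))  # 49, 36 and 25 windows respectively
```

So even before considering piece placement, a human-style pattern matcher has on the order of a hundred overlapping local windows to attach patterns to.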

Re: [agi] constructivist issues

2008-10-25 Thread Mark Waser
So you're saying that if I switch to using Tarski's theory (which I believe is fundamentally just a very slightly different aspect of the same critical concept -- but unfortunately much less well-known and therefore less powerful as an explanation) that you'll agree with me? That seems akin to

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
No, it doesn't justify ad-hoc; even when a perfect solution is impossible, you could still have an optimal approximation under given limitations. So what is an optimal approximation under uncertainty? How do you know when you've gotten there? If you don't believe in ad-hoc then you must have a

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Charles Hixson
Dr. Matthias Heger wrote: The goal of chess is well defined: Avoid being checkmated and try to checkmate your opponent. What checkmate means can be specified formally. Humans mainly learn chess from playing chess. Obviously their knowledge about other domains is not sufficient for most beginner

Re: [agi] constructivist issues

2008-10-25 Thread Abram Demski
Eric, Nobody here is actually arguing that the brain is non-computational, though. (The quote you refer to was a misunderstanding). I was arguing that we have an understanding of noncomputational entities, and Ben was arguing (approximately) that any actual behavior could be explained equally wel

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Matt Mahoney
--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote: > > AIXI says that a perfect solution is not computable. However, a very general principle of both scientific research and machine learning is to favor simple hypotheses over complex ones. AIXI justifies these practices in

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
AIXI says that a perfect solution is not computable. However, a very general principle of both scientific research and machine learning is to favor simple hypotheses over complex ones. AIXI justifies these practices in a formal way. It also says we can stop looking for a universal solution, whi
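The formal justification being discussed is Solomonoff-style: weight every hypothesis (program) by 2^(-length), so that among hypotheses consistent with the data, the shortest dominates the posterior. A toy sketch, where the hypothesis strings and data are invented for illustration:

```python
# Occam-style hypothesis weighting a la Solomonoff: prior(h) ~ 2^(-len(h)),
# with len(h) the length of the program (here, a Python expression) encoding
# the hypothesis. Hypotheses and data below are invented for illustration.

data = [(1, 2), (2, 4), (3, 6)]  # observed (input, output) pairs

hypotheses = {
    "lambda x: x * 2": lambda x: x * 2,
    "lambda x: x + x + 0 * x**3": lambda x: x + x + 0 * x**3,
}

def prior(src):
    """Shorter (simpler) programs get exponentially more weight."""
    return 2.0 ** -len(src)

def likelihood(h):
    """1 if the hypothesis fits all observations exactly, else 0."""
    return 1.0 if all(h(x) == y for x, y in data) else 0.0

# Both programs compute the same function and fit the data,
# but the unnormalized posterior favors the shorter one.
post = {src: prior(src) * likelihood(h) for src, h in hypotheses.items()}
best = max(post, key=post.get)
print(best)  # "lambda x: x * 2"
```

This is only a cartoon of the universal prior (real description length is measured over a universal Turing machine, not Python source), but it shows the mechanism by which "favor simple hypotheses" falls out of the formalism rather than being ad hoc.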

[agi] OpenCog hands-on workshop Sunday

2008-10-25 Thread Ben Goertzel
A reminder to all in the San Fran Bay area ... a hands-on workshop on OpenCog hosted by Ben Goertzel, Joel Pitt and David Hart will be held Sunday. Details are here: http://opencog.org/wiki/CogDev2008 There is no cost except your sanity. -- Ben G -- Ben Goertzel, PhD CEO, Novamente LLC and

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 1:17 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> Note that people have been working on this specific technical problem for 30 years (see the scary amount of work by Cousot's lab, http://www.d

Re: [agi] On programming languages

2008-10-25 Thread Russell Wallace
On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > Note that people have been working on this specific technical problem for 30 years (see the scary amount of work by Cousot's lab, http://www.di.ens.fr/~cousot/COUSOTpapers/), and they are still tackling fixed invariants,

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 12:40 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: >> What I see as a potential way of AI in program analysis is cracking abstract interpretation, automatically inventing invariants and proving that they hold, using these invariants to interface between results of

Re: [agi] On programming languages

2008-10-25 Thread Russell Wallace
On Sat, Oct 25, 2008 at 9:29 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > There are systems that do just that, constructing models of a program and representing conditions of absence of a bug as huge formulas. They work with various limitations, theorem-prover based systems using counterex

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 3:17 AM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> This general sentiment doesn't help if I don't know what to do specifically. > Well, given a C/C++ program that does have buffer overrun or s