RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way
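Mahoney's point that prediction, not storage, is the underlying goal can be made concrete: under an ideal arithmetic coder, a model that assigns probability p to the next symbol pays -log2(p) bits for it, so a better predictor of "what the environment will do next" directly means fewer storage bits. A minimal Python sketch of that accounting (an adaptive order-0 byte model; not Mahoney's code, and the 256-symbol alphabet and Laplace smoothing are illustrative assumptions):

import math
from collections import Counter

def predicted_bits(text: str) -> float:
    # Adaptive order-0 model: probability of the next symbol from counts so far.
    counts = Counter()
    total = 0
    bits = 0.0
    for ch in text:
        p = (counts[ch] + 1) / (total + 256)   # Laplace-smoothed prediction
        bits += -math.log2(p)                  # ideal code length for this symbol
        counts[ch] += 1                        # update the model after coding it
        total += 1
    return bits

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog " * 20
    print(f"{len(sample) * 8} raw bits vs ~{predicted_bits(sample):.0f} predicted bits")

The repetitive sample codes to a small fraction of its raw size precisely because the model quickly learns to predict it.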

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: You start v. constructively thinking how to test the non-programmed nature of - or simply record - the actual writing of programs, and then IMO fail to keep going. You could trace their keyboard presses back to the cerebellum and motor

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, William Pearson wrote: 2008/9/5 Mike Tintner [EMAIL PROTECTED]: By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. If only such could be guaranteed! We would never have system hangs or deadlocks. Even

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote: Were your computer like a human mind, it would have been able to say (as you/we all do) - well, if that part of the problem is going to be difficult, I'll ignore it... or I'll just make up an answer... or by God I'll keep trying other ways until

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote: fundamental programming problem, right?) A creative free machine, like a human, really can follow any of what may be a vast range of routes - and you really can't predict what it will do or, at a basic level, be surprised by it. What do you say

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, William Pearson wrote: I'm very interested in computers that self-maintain, that is reduce (or eliminate) the need for a human to be in the loop or know much about the internal workings of the computer. However it doesn't need a vastly different computing

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, Mike Tintner wrote: Our unreliability is the negative flip-side of our positive ability to stop an activity at any point, incl. the beginning, and completely change tack/course or whole approach, incl. the task itself, and even completely contradict ourselves. But

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Terren Suydam wrote: So, Mike, is free will: 1) an illusion based on some kind of unpredictable, complex but *deterministic* interaction of physical components 2) the result of probabilistic physics - a *non-deterministic* interaction described by something like

Re: [agi] Recursive self-change: some definitions

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: Bryan, How do you know the brain has a code? Why can't it be entirely impressionistic - a system for literally forming, storing and associating sensory impressions (including abstracted, simplified, hierarchical impressions of other

Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: Yes you do. Every time you make a decision, you are assigning a higher probability of a good outcome to your choice than to the alternative. You'll have to prove to me that I make decisions, whatever that means. - Bryan

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote: From: John G. Rose [EMAIL PROTECTED] Subject: RE: Language modeling (was Re: [agi] draft for comment) To: agi@v2.listbox.com Date: Sunday, September 7, 2008, 9:15 AM From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sat,

Re: AI isn't cheap (was Re: Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.))

2008-09-07 Thread Steve Richfield
Matt, On 9/6/08, Matt Mahoney [EMAIL PROTECTED] wrote: Steve, where are you getting your cost estimate for AGI? 1. I believe that there is some VERY fertile but untilled ground, which if it is half as good as it looks, could yield AGI a LOT cheaper than other higher estimates. Of course if

Re: [agi] open models, closed models, priors

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote: On Thursday 04 September 2008, Matt Mahoney wrote: Yes you do. Every time you make a decision, you are assigning a higher probability of a good outcome to your choice than to the alternative. You'll have to prove to me that I

[agi] Re: AI isn't cheap

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, Steve Richfield [EMAIL PROTECTED] wrote: 1.  I believe that there is some VERY fertile but untilled ground, which if it is half as good as it looks, could yield AGI a LOT cheaper than other higher estimates. Of course if I am wrong, I would probably accept your numbers. 2.  I

Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Sunday 07 September 2008, Matt Mahoney wrote: Depends on what you mean by I. You started it - your first message had that dependency on identity. :-) - Bryan http://heybryan.org/ Engineers: http://heybryan.org/exp.html irc.freenode.net #hplusroadmap

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Terren Suydam
Hey Bryan, To me, this is indistinguishable from the 1st option I laid out. Deterministic but impossible to predict. Terren --- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote: From: Bryan Bishop [EMAIL PROTECTED] Subject: Re: [agi] A New Metaphor for Intelligence - the

Re: [agi] open models, closed models, priors

2008-09-07 Thread Matt Mahoney
--- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote: Depends on what you mean by I. You started it - your first message had that dependency on identity. :-) OK then. You decided to reply to my email, vs. not replying. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] draft for comment

2008-09-07 Thread Mike Tintner
Pei: As I said before, you give 'symbol' a very narrow meaning, and insist that it is the only way to use it. In the current discussion, symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'. BTW, what images do you associate with the latter two? Since you prefer to use a person as an example,

Re: [agi] draft for comment

2008-09-07 Thread Jiri Jelinek
Mike, If you think your AGI know-how is superior to the know-how of those who already built testable thinking machines, then why don't you try to build one yourself? Maybe you would learn more that way than by spending a significant amount of time trying to sort out great incompatibilities between

[agi] Philosophy of General Intelligence

2008-09-07 Thread Mike Tintner
Jiri: Mike, If you think your AGI know-how is superior to the know-how of those who already built testable thinking machines, then why don't you try to build one yourself? Jiri, I don't think I know much at all about machines or software, and never claim to. I think I know certain, only certain,

[agi] Bootris

2008-09-07 Thread Eric Burton
--- snip --- [1220390007] receive [EMAIL PROTECTED] bootris, invoke mathematica [1220390013] told #love cool hand luke is like a comic heroic jesus [1220390034] receive [EMAIL PROTECTED] bootris, solve russell's paradox [1220390035] told #love invoke mathematica [1220390066] receive

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Terren Suydam
Hi Mike, Good summary. I think your point of view is valuable in the sense of helping engineers in AGI to see what they may be missing. And your call for technical AI folks to take up the mantle of more artistic modes of intelligence is also important. But it's empty, for you've

[agi] Re: Bootris

2008-09-07 Thread Eric Burton
One thing I think is kind of notable is that the bot puts everything it says, including phrases that are invented or mutated, into a personality database, or list of possible favourite phrases, then takes six-axis mood assessments of follow-ups to its interjections and uses them to modify a mean score
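For readers curious what such a feedback loop might look like, here is a purely hypothetical Python sketch; the six mood axes, their weighting, and the running-mean update are assumptions, not Bootris internals:

from collections import defaultdict

MOOD_AXES = ("joy", "approval", "interest", "anger", "sadness", "surprise")

class PhraseBook:
    def __init__(self):
        # phrase -> (running mean score, number of follow-ups observed)
        self.scores = defaultdict(lambda: (0.0, 0))

    def record_followup(self, phrase, mood):
        # Collapse a six-axis mood assessment of the follow-up into one scalar
        # (a naive signed sum), then fold it into the phrase's mean score.
        signed = (mood.get("joy", 0) + mood.get("approval", 0) + mood.get("interest", 0)
                  - mood.get("anger", 0) - mood.get("sadness", 0)
                  + 0.5 * mood.get("surprise", 0))
        mean, n = self.scores[phrase]
        self.scores[phrase] = ((mean * n + signed) / (n + 1), n + 1)

    def favourites(self, k=5):
        # Phrases the bot currently rates highest, i.e. its favourite interjections.
        return sorted(self.scores, key=lambda p: self.scores[p][0], reverse=True)[:k]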

[agi] Re: Bootris

2008-09-07 Thread Eric Burton
Oh, thanks for helping me get this off my chest, everyone. If I ever finish the thing I'm definitely going to freshmeat it. I think this kind of bot, which is really quite trainable, and creative to boot -- it falls back to a Markov chainer -- could be a shoo-in for naturalistic NPC dialogue in
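As an aside, the word-level Markov chainer such a bot might fall back to can be very small; this generic Python sketch is not the bot's actual fallback, and the order-2 model and restart rule are illustrative assumptions:

import random
from collections import defaultdict

def train(corpus, order=2):
    # Map each tuple of `order` consecutive words to the words seen after it.
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def babble(model, order=2, length=20):
    state = random.choice(list(model))
    out = list(state)
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:                     # dead end: restart from a random state
            out.extend(random.choice(list(model)))
            continue
        out.append(random.choice(followers))
    return " ".join(out)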

[agi] Re: Bootris

2008-09-07 Thread Eric Burton
(see: irc.racrew.us) On 9/7/08, Eric Burton [EMAIL PROTECTED] wrote: Oh, thanks for helping me get this off my chest, everyone. If I ever finish the thing I'm definitely going to freshmeat it. I think this kind of bot, which is really quite trainable, and creative to boot -- it falls back to

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Mike Tintner
Terren, You may be right - in the sense that I would have to just butt out of certain conversations, to go away and educate myself. There's just one thing here though - and again this is a central philosophical difference, this time concerning the creative process. Can you tell me which kind

[agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-07 Thread Benjamin Johnston
Hi, I have a general question for those (such as Novamente) working on AGI systems that use genetic algorithms as part of their search strategy. A GA researcher recently explained to me some of his experiments in embedding prior knowledge into systems. For example, when attempting to

Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-07 Thread Eric Burton
I'd just keep a long list of high scorers for regression and occasionally reset the high score to zero. You can add random specimens to the population as well... On 9/7/08, Benjamin Johnston [EMAIL PROTECTED] wrote: Hi, I have a general question for those (such as Novamente) working on AGI
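A toy Python sketch of those two diversity tricks together - an archive of high scorers kept off to the side, plus random specimens injected each generation so the population cannot converge prematurely. The bit-string representation, rates, and one-max fitness are placeholder assumptions, not Novamente's setup:

import random

GENOME_LEN = 32

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    return sum(genome)                        # toy objective: count the ones

def next_generation(pop, archive, immigrant_rate=0.1, mutation_rate=0.02):
    pop = sorted(pop, key=fitness, reverse=True)
    archive.extend(pop[:2])                   # long list of high scorers for later regression checks
    n_immigrants = max(1, int(len(pop) * immigrant_rate))
    children = []
    while len(children) < len(pop) - n_immigrants:
        a, b = random.sample(pop[:len(pop) // 2], 2)    # truncation selection
        cut = random.randrange(1, GENOME_LEN)           # one-point crossover
        child = [g ^ 1 if random.random() < mutation_rate else g
                 for g in a[:cut] + b[cut:]]
        children.append(child)
    children.extend(random_genome() for _ in range(n_immigrants))   # random specimens
    return children

if __name__ == "__main__":
    population, hall_of_fame = [random_genome() for _ in range(40)], []
    for _ in range(50):
        population = next_generation(population, hall_of_fame)
    print("best archived fitness:", max(fitness(g) for g in hall_of_fame))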

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Terren Suydam
Hi Mike, It's not so much the *kind* of programming that I or anyone else could recommend; it's just the general skill of programming - getting used to thinking in terms of: how exactly do I solve this problem, what model or procedure do I create? How do you specify something so completely

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Jiri Jelinek
Mike, every kind of representation, not just mathematical and logical and linguistic, but everything - visual, aural, solid, models, embodied etc etc. There is a vast range. That means also every subject domain - artistic, historical, scientific, philosophical, technological, politics,