Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Word of advice. You're creating your own artificial world here with its own artificial rules. AGI is about real vision of real objects in the real world. The two do not relate - or compute. It's a pity - it's good that you keep testing yourself, it's bad that they aren't realistic tests.

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Hi, I certainly agree with this method, but of course it's not original at all, it's pretty much the basis of algorithmic learning theory, right? Hutter's AIXI for instance works [very roughly speaking] by choosing the most compact program that, based on historical data, would have yielded
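As a toy illustration of the idea sketched above (a hedged example of my own, not Hutter's actual construction): among a hand-made set of candidate "programs", prefer the shortest one that reproduces the observed history, and let that one predict the next observation.

    # Minimal sketch of "choose the most compact program consistent with the
    # data" over a tiny, hand-picked hypothesis set (illustrative only).
    candidates = [
        # (description length in bits, name, generator of the first n symbols)
        (3, "all zeros",      lambda n: [0] * n),
        (5, "alternating 01", lambda n: [i % 2 for i in range(n)]),
        (9, "repeat 0,0,1",   lambda n: [(0, 0, 1)[i % 3] for i in range(n)]),
    ]

    def predict_next(history):
        """Prediction of the shortest candidate that reproduces the history."""
        consistent = [c for c in candidates if c[2](len(history)) == list(history)]
        if not consistent:
            return None  # no candidate explains the data
        bits, name, gen = min(consistent, key=lambda c: c[0])  # most compact
        return gen(len(history) + 1)[-1], name

    print(predict_next([0, 1, 0, 1]))  # -> (0, 'alternating 01')

Real algorithmic learning theory searches over all programs and weights them by 2^-length rather than picking a single winner; this sketch only shows the selection principle.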

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Thanks Ben, Right, explanatory reasoning is not new at all (also called abduction, or inference to the best explanation). But what seems to be elusive is a precise algorithmic method for implementing explanatory reasoning and solving real problems, such as sensory perception. This is what I'm

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
For visual perception, there are many reasons to think that a hierarchical architecture can be effective... this is one of the things you may find in dealing with real visual data but not with these toy examples... E.g. in a spatiotemporal predictive hierarchy, the idea would be to create a

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.com wrote: A method for comparing hypotheses in explanatory-based reasoning: Here is a simplified version of how we solve case study 1. The important hypotheses to consider are: 1) the square from frame 1 of the video that has a

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Jim: This illustrates one of the things wrong with the dreary instantiations of the prevailing mind set of a group. It is only a matter of time until you discover (through experiment) how absurd it is to celebrate the triumph of an overly simplistic solution to a problem that is, by its very

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
To put it more succinctly, Dave, Ben and Hutter are doing the wrong subject - narrow AI. Looking for the one right prediction/explanation is narrow AI. Being able to generate more and more possible explanations, which could all be valid, is AGI. The former is rational, uniform thinking. The

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
lol. Mike, What I was trying to express by the word *expect* is NOT predict [some exact outcome]. Expect means that the algorithm has a way of comparing observations to what the algorithm considers to be consistent with an explanation. This is something I struggled to solve for a long time

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
The fact that you are using experiment and the fact that you recognized that AGI needs to provide both explanation and expectations (differentiated from the false precision of 'prediction') shows that you have a grasp of some of the philosophical problems, but the fact that you would rely on a

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Jim, I am using oversimplification to identify the core problems involved. As you can see, the oversimplification is revealing how to resolve certain types of dilemmas and uncertainty. That is exactly why I did this. If you can't solve a simple environment, you certainly can't solve the full

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner tint...@blueyonder.co.uk wrote: Jim: This illustrates one of the things wrong with the dreary instantiations of the prevailing mind set of a group. It is only a matter of time until you discover (through experiment) how absurd it is to celebrate

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world, you study the problem and come up with a safe design. -- Matt Mahoney, matmaho...@yahoo.com From: rob levy r.p.l...@gmail.com To: agi

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Try ping-pong - as per the computer game. Just a line (/bat) and a square (/ball) representing your opponent - and you have a line (/bat) to play against them. Now you've got a relatively simple true AGI visual problem - because if the opponent returns the ball somewhat as a real human AGI does,

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow AI system could be constructed to beat humans at Pong ;p ... without teaching us much of anything about intelligence... Very likely a narrow-AI machine learning system could *learn* by experience to beat humans at Pong ...
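For what it's worth, the kind of trivial controller that makes classic Pong easy for a machine fits in a few lines (a hedged toy of my own, not a claim about any particular system): the paddle simply chases the ball's vertical position at a capped speed.

    # Toy Pong controller: move the paddle toward the ball's y coordinate,
    # limited by a maximum paddle speed (illustrative sketch only).
    def paddle_move(paddle_y, ball_y, max_speed=4.0):
        error = ball_y - paddle_y
        return max(-max_speed, min(max_speed, error))

    paddle_y = 50.0
    for ball_y in [52.0, 60.0, 41.0, 30.0]:  # sample ball positions per frame
        paddle_y += paddle_move(paddle_y, ball_y)
        print(round(paddle_y, 1))  # 52.0, 56.0, 52.0, 48.0

A human-like opponent, as Mike describes, would additionally call for predicting the opponent's returns, which is where the learned narrow-AI component Ben mentions would come in.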

Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
I definitely agree, however we lack a convincing model or plan of any sort for the construction of systems demonstrating subjectivity, and it seems plausible that subjectivity is functionally necessary for general intelligence. Therefore it is reasonable to consider symbiosis as both a safe design

Re: [agi] Questions for an AGI

2010-06-27 Thread The Wizard
This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world, you study the problem and come up with a safe design. Agreed on this dangerous thought! On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote: This is

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
I am working on logical satisfiability again. If what I am working on right now works, it will become a pivotal moment in AGI, and what's more, the method that I am developing will (probably) become a core method for AGI. However, if the idea I am working on does not -itself- lead to a major

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Well, Ben, I'm glad you're quite sure because you haven't given a single reason why. Clearly you should be Number One advisor on every Olympic team, because you've cracked the AGI problem of how to deal with opponents that can move (whether themselves or balls) in multiple, unpredictable

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
Ben: I'm quite sure a simple narrow AI system could be constructed to beat humans at Pong ;p Mike: Well, Ben, I'm glad you're quite sure because you haven't given a single reason why. Although Ben would have to give us an actual example (of a Pong program that could beat humans at Pong) just to

[agi] Reward function vs utility

2010-06-27 Thread Joshua Fox
This has probably been discussed at length, so I will appreciate a reference on this: Why does Legg's definition of intelligence (following on Hutter's AIXI and related work) involve a reward function rather than a utility function? For this purpose, reward is a function of the world state/history
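For reference (my recollection of the published definition, so treat it as a gloss): Legg and Hutter's universal intelligence measure scores an agent \pi by its expected reward across all computable environments, weighted by their simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}

where E is the class of computable reward-summable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^{\pi} is the expected total reward \pi obtains in \mu. Reward enters the definition directly; there is no separate utility function over world states.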

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
It's just that something like world hunger is so complex that AGI would have to master simpler problems. Also, there are many people and institutions that have solutions to world hunger already and they get ignored. So an AGI would have to get established over a period of time for anyone to really care

Re: [agi] The problem with AGI per Sloman

2010-06-27 Thread Ian Parker
On 27 June 2010 21:25, John G. Rose johnr...@polyplexic.com wrote: It's just that something like world hunger is so complex AGI would have to master simpler problems. I am not sure that that follows necessarily. Computing is full of situations where a seemingly simple problem is not solved

Re: [agi] Reward function vs utility

2010-06-27 Thread Matt Mahoney
The definition of universal intelligence being over all utility functions implies that the utility function is unknown. Otherwise there is a fixed solution. -- Matt Mahoney, matmaho...@yahoo.com From: Joshua Fox joshuat...@gmail.com To: agi

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
rob levy wrote: This is wishful thinking. I definitely agree, however we lack a convincing model or plan of any sort for the construction of systems demonstrating subjectivity, Define subjectivity. An objective decision might appear subjective to you only because you aren't intelligent

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
I don't like the idea of enhancing human intelligence before the singularity. I think crime has to be made impossible even for enhanced humans first. I think life is too adept at abusing opportunities if possible. I would like to see the singularity enabling AI to be as little like a

[agi] Theory of Hardcoded Intelligence

2010-06-27 Thread M E
I sketched a graph the other day which represented my thoughts on the usefulness of hardcoding knowledge into an AI. (Graph attached) Basically, the more hardcoded knowledge you include in an AI, or AGI, the lower the overall intelligence it will have, but the faster you will reach

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting wrote: I don't like the idea of enhancing human intelligence before the singularity. The singularity is a point of infinite collective knowledge, and therefore infinite unpredictability. Everything has to happen before the singularity because there is no after. I think crime

Re: [agi] Theory of Hardcoded Intelligence

2010-06-27 Thread Matt Mahoney
Correct. Intelligence = log(knowledge) + log(computing power). At the extreme left of your graph is AIXI, which has no knowledge but infinite computing power. At the extreme right you have a giant lookup table. -- Matt Mahoney, matmaho...@yahoo.com From: M

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Hi Steve, A few comments... 1) Nobody is trying to implement Hutter's AIXI design; it's a mathematical design intended as a proof of principle. 2) Within Hutter's framework, one calculates the shortest program that explains the data, where shortest is measured on Turing machine M. Given a
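For concreteness, the quantity referred to in (2) is, in standard notation (my gloss, not a quote from Hutter):

    K_M(x) = \min \{ \ell(p) : M(p) = x \}

i.e. the length \ell(p) of the shortest program p that makes the reference Turing machine M output the observed data x.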

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
I think you're thinking of a plodding, limited-movement classic Pong line. I'm thinking of a line that can, like a human player, move with varying speed and pauses to more or less any part of its court to hit the ball, and then hit it with varying speed to more or less any part of the opposite

Re: [agi] Reward function vs utility

2010-06-27 Thread Ben Goertzel
You can always build the utility function into the assumed universal Turing machine underlying the definition of algorithmic information... I guess this will improve learning rate by some additive constant, in the long run ;) ben On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox joshuat...@gmail.com
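Ben's "additive constant" remark matches the invariance theorem of algorithmic information theory (a standard result, stated here as a gloss): for any two universal machines U and V there is a constant c_{UV}, independent of the data x, such that

    K_U(x) \le K_V(x) + c_{UV}

so building the utility function into the reference machine can only shift description lengths, and hence learning behavior in this framework, by a constant.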

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben, What I saw as my central thesis is that propagating carefully conceived dimensionality information along with classical information could greatly improve the cognitive process, by FORCING reasonable physics WITHOUT having to understand (by present concepts of what understanding means)

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Steve, I know what dimensional analysis is, but it would be great if you could give an example of how it's useful for everyday commonsense reasoning such as, say, a service robot might need to do to figure out how to clean a house... thx ben On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
Everything has to happen before the singularity because there is no after. I meant when machines take over technological evolution. That is easy. Eliminate all laws. I would prefer a surveillance state. I should say impossible to get away with if conducted in public. Is there a difference

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Even with the variations you mention, I remain highly confident this is not a difficult problem for narrow-AI machine learning methods -- Ben G On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner tint...@blueyonder.co.uk wrote: I think you're thinking of a plodding limited-movement classic Pong

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben, On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote: I know what dimensional analysis is, but it would be great if you could give an example of how it's useful for everyday commonsense reasoning such as, say, a service robot might need to do to figure out how to clean a

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] So an AGI would have to get established over a period of time for anyone to really care what it has to say about these types of issues. It could simulate things and come up with solutions but they would not get

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield steve.richfi...@gmail.com wrote: Ben, On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote: I know what dimensional analysis is, but it would be great if you could give an example of how it's useful for everyday commonsense

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting wrote: Is there a difference between enhancing our intelligence by uploading and creating killer robots? Think about it. Well yes, we're not all bad, but I think you read me wrong because that's basically my worry. What I mean is that one way to look at uploading is to create

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Mike, you are mixing multiple issues. Just like my analogy of the Rubik's Cube, full AGI problems involve many problems at the same time. The problem I wrote this email about was not about how to solve them all at the same time. It was about how to solve one of those problems. After solving the

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Michael Swan
On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote: Humans may use sophisticated tactics to play Pong, but that doesn't mean it's the only way to win. Humans use subtle and sophisticated methods to play chess also, right? But Deep Blue still kicks their ass... If the rules of chess