RE: [agi] AI is a trivial problem

2003-01-12 Thread Ben Goertzel
Hi, > > I think that the hard problem of AGI is actually the other part: > > > > BUILDING A SYSTEM CAPABLE OF SUPPORTING SPECIALIZED INTELLIGENCES THAT > > COMBINE NARROW-AI HEURISTICS WITH SMALL-N GENERAL INTELLIGENCE > > Yes this is part of the problem, the other thing you don't mention > is

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Shane Legg
Pei Wang wrote: Again, I'm not saying that NARS is not a TM in any sense, but that it is not a TM at the question-answering level. As I said in the paper, if you consider the life-long history of input and output of the whole system, NARS is a TM. Also, if you check each individual inference st

Re: [agi] AI is a trivial problem

2003-01-12 Thread Shane Legg
Ben, We seem to be thinking along similar lines in most aspects here... The way the human brain seems to work is: * some of its architecture is oriented towards increasing the sum over n of int(S,n,rS,rT), where rS is given by brain capacity (enhanced by tools like writing?) and rT is differe

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
Shane wrote: > Two systems are fundamentally equivalent if it's possible for them to > simulate each other given any finite amount of resources. They are > fundamentally different if this is impossible no matter how much > resource is made available. Clearly this is a very deep and > fundamenta

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Shane Legg
Hi Ben, If two computational models can solve radically different problems *under realistic space and time constraints*, then are they "fundamentally different" or not?? You seem to want to call two models "fundamentally the same" if they can solve the same problems under infinite time and space c

RE: [agi] AI is a trivial problem

2003-01-12 Thread Ben Goertzel
Risto Miikkulainen [!! love that name !!] did some fun stuff having a GA-evolved neural net learn to play Othello. It learned the rules of the game, not just strategy (actually it learned them both simultaneously)... But to use ID3 or GA/NN for this kind of problem, one uses a different tree or

Re: [agi] AI is a trivial problem

2003-01-12 Thread Joseph S Rubenfeld
This has been tried for chess with very limited success. Quinlan, a PhD student of Donald Michie in England, and now I believe a professor in Sydney, Australia, developed the ID3 algorithm and tested it on learning chess. On Sat, 11 Jan 2003 22:34:01 -0500 "Ben
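[Editor's aside] The snippet above refers to Quinlan's ID3 algorithm, whose core step is to split the training data on whichever attribute yields the highest information gain. A minimal Python sketch of that step, using a made-up two-attribute toy dataset (the attribute names and labels are illustrative, not from Quinlan's chess experiments):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Entropy reduction from partitioning the rows on attribute index attr."""
    n = len(labels)
    split = {}
    for row, label in zip(rows, labels):
        split.setdefault(row[attr], []).append(label)
    remainder = sum(len(sub) / n * entropy(sub) for sub in split.values())
    return entropy(labels) - remainder

# Hypothetical "is this position lost?" data:
# attribute 0 = king exposed?, attribute 1 = material down?
rows   = [("yes", "yes"), ("yes", "no"), ("no", "yes"), ("no", "no")]
labels = ["lost", "lost", "safe", "safe"]

gains = {attr: info_gain(rows, labels, attr) for attr in (0, 1)}
best = max(gains, key=gains.get)
print(best, gains[best])   # attribute 0 perfectly predicts the label: 0 1.0
```

ID3 then recurses on each partition, which is why its fit to chess depended so heavily on how positions were encoded as attributes.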

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
Well, computational complexity theory and formal computer science DO have practical applications, of course. Loads of applications in compiler design and in OS design too (in the theory underlying scheduling, load balancing, etc.) Algorithmic information theory doesn't have practical applicat

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Damien Sullivan
On Sun, Jan 12, 2003 at 03:25:08PM -0500, Ben Goertzel wrote: > It is clear that traditional formal computing theory, though in principle > *applicable* to AGI programs, is pretty much useless to the AGI theorist... Is this anything unique to AGI? Does computing theory have much relevance for Li

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
> And just to be clear, it's obviously a Turing machine since it > runs on Turing > machines and you can run any Turing machine on it. It is just very > different from normal conceptions of universal computers. But > then, I could > say the exact same thing about the human brain. > > Cheers, > >

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread James Rogers
On 1/12/03 9:43 AM, "Damien Sullivan" <[EMAIL PROTECTED]> wrote: > > I'll take the risk of replying to other messages here without reading the 2 > dozen other replies first. It sounds like James is using primitive recursive > functions, equivalent to Hofstadter's BLoop in _GEB_. Those are a subs
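[Editor's aside] The snippet above compares James's model to primitive recursive functions (Hofstadter's BLooP in _GEB_): programs built only from loops whose bounds are fixed before the loop begins, so every program is guaranteed to halt. A small illustrative sketch of that restriction:

```python
def add(a, b):
    """Primitive-recursive addition: apply the successor b times.
    The loop bound b is fixed before entry, so the loop always terminates."""
    result = a
    for _ in range(b):
        result += 1
    return result

def mul(a, b):
    """Primitive-recursive multiplication: add a to the total, b times."""
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

print(mul(6, 7))   # 42
```

Because every loop bound is computed in advance, there is no unbounded `while` and hence no halting problem inside the formalism; the price, as the snippet notes, is that such functions are a proper subset of the computable functions (the Ackermann function being the classic example that escapes them).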

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Damien Sullivan
On Sun, Jan 12, 2003 at 11:05:15AM -0500, Pei Wang wrote: > another related topic: the final state. In my paper I said that my system is > not a TM, also because it doesn't have a set of predetermined final states A system using S and K combinators isn't a TM at all; totally different mechanism o
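[Editor's aside] The S and K combinators mentioned above compute by term rewriting rather than by a tape and head: `K x y -> x` and `S x y z -> x z (y z)`, applied until no rule fires. A minimal sketch (terms encoded as nested Python tuples; this is an illustrative toy, not any poster's actual system):

```python
# A term is 'S', 'K', a variable string, or a pair (f, x) meaning "apply f to x".

def reduce_once(t):
    """Perform one normal-order rewriting step; return (term, changed?)."""
    if not isinstance(t, tuple):
        return t, False
    # K x y -> x      (term shape: ((K, x), y))
    if isinstance(t[0], tuple) and t[0][0] == 'K':
        return t[0][1], True
    # S x y z -> x z (y z)      (term shape: (((S, x), y), z))
    if (isinstance(t[0], tuple) and isinstance(t[0][0], tuple)
            and t[0][0][0] == 'S'):
        x, y, z = t[0][0][1], t[0][1], t[1]
        return ((x, z), (y, z)), True
    f, changed = reduce_once(t[0])
    if changed:
        return (f, t[1]), True
    a, changed = reduce_once(t[1])
    return (t[0], a), changed

def normalize(t, limit=100):
    """Rewrite to normal form; reduction can diverge, hence the step limit."""
    for _ in range(limit):
        t, changed = reduce_once(t)
        if not changed:
            return t
    return t

# S K K behaves as the identity: S K K x -> K x (K x) -> x
I = (('S', 'K'), 'K')
print(normalize((I, 'x')))   # 'x'
```

No predetermined final states, no tape: a term is "done" simply when no rewrite rule applies, which is exactly why the mechanism looks so unlike a TM while remaining Turing-complete.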

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Damien Sullivan
On Sun, Jan 12, 2003 at 10:37:13AM -0500, Pei Wang wrote: > See my replies to Ben. As soon as the final answer (not intermediate > answer) depends on internal state, we are not talking about the same Turing > Machine anymore. Of course you can build a theory in this way, but it is > already not

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Damien Sullivan
On Sun, Jan 12, 2003 at 09:38:26AM -0500, Ben Goertzel wrote: > To me, the question of what a computational model can do with moderately > small space and time resource constraints is at least equally "fundamental" Computability theory: can this be computed? Complexity theory: does it take po

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
> > I'm sorry but I still don't understand exactly what you mean by "Is > computer > > program-instance X a TM with respect to problem P" > > Each TM (or algorithm) is defined with respect to a "problem", > which is a set > of valid input strings. Each string in the set is a problem instance, ans >

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
NARS does have a finite set of possible final states, though, because it's implemented on a finite-state machine... And, given a NARS codebase and its state in RAM & on disk at a given point in time, you CAN predict its behavior from its inputs alone... ben > -Original Message- > From:
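[Editor's aside] Ben's point above — that a system whose answers depend on accumulated internal state is still fully deterministic once you include that state — can be illustrated with a toy sketch (purely illustrative; this is not NARS or its logic):

```python
class ToyReasoner:
    """The same question can receive different answers over time, yet the
    step function is a deterministic map (state, input) -> (state, output)."""

    def __init__(self):
        self.evidence = {}   # statement -> confirmation count (the "state")

    def step(self, message):
        kind, statement = message
        if kind == "observe":
            self.evidence[statement] = self.evidence.get(statement, 0) + 1
            return "noted"
        # kind == "ask": the answer depends on the accumulated evidence
        return "yes" if self.evidence.get(statement, 0) >= 2 else "unknown"

r = ToyReasoner()
print(r.step(("ask", "ravens are black")))      # unknown
r.step(("observe", "ravens are black"))
r.step(("observe", "ravens are black"))
print(r.step(("ask", "ravens are black")))      # yes
```

Viewed per question, the input alone does not determine the output; viewed with the state (or over the life-long input history), the behavior is a perfectly ordinary computable function — which is the crux of the NARS-as-TM disagreement in this thread.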

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Pei Wang
- Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Sunday, January 12, 2003 10:50 AM Subject: RE: [agi] AI and computation (was: The Next Wave) > > Once again, the interesting question is not "Is NARS a TM?", but > > "Is NARS a > > TM with respect

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Pei Wang
James, I basically agree with your points. In my replies to Shane and Ben, I argued that the TM definition is too limited by assuming a unique initial state. Now your posting addressed another related topic: the final state. In my paper I said that my system is not a TM, also because it doesn't

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
> Once again, the interesting question is not "Is NARS a TM?", but > "Is NARS a > TM with respect to problem P?" If the problem is "To answer > Ben's email on > `AI and computation'", then the system is not a TM (though it may > be a TM in > many other senses). For this reason, to discuss the comp

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Pei Wang
- Original Message - From: "Shane Legg" <[EMAIL PROTECTED]> > I think my reply to this is similar in nature to Ben's (however I think > Ben's example of a push down automaton is a bit misleading as it's > really the ability to carry state between system cycles not during a > cycle of that

RE: [agi] Music and AI's Friendliness

2003-01-12 Thread Ben Goertzel
Cosmodelia wrote: * b) more importantly, I think that making the program musically interact with humans would be a big help in helping the system *emotionally* understand humans ... which is important for the system's long-term Friendliness toward humans. IMHO this is the

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Pei Wang
- Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> > So let's look at NARS over the time-interval [s,t] corresponding to the > answering of an individual question... > > Over this time-interval > > "The NARS program plus its internal state at the time point s" > > is still mode

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
Pei Wang wrote: > Again, I'm not saying that NARS is not a TM in any sense, but > that it is not > a TM at the question-answering level. As I said in the paper, if you > consider the life-long history of input and output of the whole > system, NARS > is a TM. Also, if you check each individual

RE: [agi] Ethics for AGIs

2003-01-12 Thread Ben Goertzel
Well, there is no doubt that hardwiring can have a significant effect on a system's propensity to take various kinds of actions. However, the human urge to reproduce is a case in point of the ability of a complex self-organizing AI system to act against its innate propensities. We

[agi] Music and AI's Friendliness

2003-01-12 Thread cosmodelia
Music and AI's Friendliness Hi Ben: The project you describe -- AGI applied to music -- is one very dear to my heart... It is great to know that a person working on one of the few AGI projects loves music, and I guess art as well. It is something I miss in the transhumanist community at large. It is

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
Shane wrote (responding to James) > > We implement a virtual machine on top of a standard > > computer architecture that is designed around a fundamentally different > > model of a universal computer. > > I doubt that your model is really all that fundamentally different. > Either your model isn't

RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
James Rogers wrote: > Obviously this is something of an oversimplification; I think I see your > point. There is still a halting problem, just not a practical halting > problem for most intents and purposes. Or at least it has been > pushed to a > level of abstraction where we don't really wor

RE: [agi] Ethics for AGIs: response to www.goertzel.org/dynapsyc/2002/AIMorality.htm

2003-01-12 Thread Ben Goertzel
Phil Sutton wrote: *** I think if an ethical goal is general and highly important then we should make sure we find ways to hard wire it - i.e. we shouldn't launch AGIs into the world until we have worked out how to hardwire the really critical ethical goals/constraints. ... I think i

RE: [agi] AI is a trivial problem

2003-01-12 Thread Ben Goertzel
Shane Legg wrote: > Why consider just one test problem? > > Doing so you will always be in danger of having a system that > isn't truly general and is built for one specific kind of a task. > > A better idea, I think, would be to test the system on *all* problems > that can be described in n bits
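[Editor's aside] Shane's proposal — test on *all* problems describable in n bits or less, or a large random sample from that space — can be sketched directly. A hedged illustration (the encoding of "problem descriptions" as raw bit strings is an assumption for concreteness; the thread leaves it open):

```python
import itertools
import random

def all_problems(n):
    """Enumerate every problem description of length <= n bits."""
    for length in range(1, n + 1):
        for bits in itertools.product("01", repeat=length):
            yield "".join(bits)

def sample_problems(n, k, seed=0):
    """A reproducible random sample of k descriptions of length <= n bits.
    For large n one would sample without materializing the full space;
    this brute-force version is just to show the idea."""
    rng = random.Random(seed)
    return rng.sample(list(all_problems(n)), k)

print(sum(1 for _ in all_problems(4)))   # 2 + 4 + 8 + 16 = 30
print(sample_problems(4, 3))
```

The space doubles with each extra bit (2^(n+1) - 2 descriptions up to n bits), which is why the exhaustive version is only feasible for small n and the random-sample variant is the practical form of the test.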

[agi] Test Space

2003-01-12 Thread Kevin Copple
Phillip Sutton said: >I think it is possible to lodge very abstract concepts into an entity, and use hard wiring to assist the AGI to rapidly and easily recognise examples of the 'hard' abstract concepts - thus giving some life to each abstract concept. Another class of test for th

[agi] Ethics for AGIs: response to www.goertzel.org/dynapsyc/2002/AIMorality.htm

2003-01-12 Thread Philip . Sutton
I've just read Ben's article Thoughts on AI Morality, May, 2002 www.goertzel.org/dynapsyc/2002/AIMorality.htm First a summary and then my comments. Ben argues that AGIs desirably should have a set of ethics that among other things motivate their compassionate treatment of life including h

RE: [agi] AI is a trivial problem

2003-01-12 Thread Kevin Copple
Shane Legg wrote: >A better idea, I think, would be to test the system on *all* problems >that can be described in n bits or less (or use a large random sample >from this space). Then your system is guaranteed to be completely >general in a computational sense. Sounds good to me. Perhaps my mot

Re: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Shane Legg
Hi James, Interesting, as I make a similar argument. I'm using an unconventional model of universal computation on finite state machinery that, among other things ... And from your other email: There is still a halting problem, just not a practical halting problem for most intents and purposes. As t