[agi] JOIN: Will Pearson and adaptive systems

2005-08-16 Thread William Pearson
Some of you may remember me from other places, and once before on this list. But I thought now is the correct time for some criticism of my ideas, as they are slightly refined. Now, first off, I have recently renounced my status as an AI researcher, as studying intelligence is not what I wish to do.

Re: [agi] Growth of computer power

2005-08-17 Thread William Pearson
Eugen Leitl Thu, 23 Jun 2005 02:18:14 -0700 Do any of you here use MPI, and assume 10^3..10^5 node parallelism? I assume 2^14 node parallelism with only a small fraction computing at any time. But then my nodes are really smart memory rather than full-blown processors and not async yet. At

[agi] Re: Representing Thoughts

2005-09-09 Thread William Pearson
On 9/9/05, Ben Goertzel [EMAIL PROTECTED] wrote: Leitl wrote: In the language of Gregory Bateson (see his book Mind and Nature), you're suggesting to do away with learning how to learn --- which is not at all a workable idea for AGI. Learning to evolve by evolution is sure a

[agi] Re: Representing Thoughts

2005-09-09 Thread William Pearson
On 9/9/05, Yan King Yin [EMAIL PROTECTED] wrote: learning to learn which I interpret as applying the current knowledge rules to the knowledge base itself. Your idea is to build an AGI that can modify its own ways of learning. This is a very fanciful idea but is not the most direct way to

[agi] Re: Representing Thoughts

2005-09-12 Thread William Pearson
On 9/12/05, Yan King Yin [EMAIL PROTECTED] wrote: Will Pearson wrote: Define what you mean by an AGI. Learning to learn is vital if you wish to try and ameliorate the No Free Lunch theorems of learning. I suspect that No Free Lunch is not very relevant in practice. Any learning
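
For context on the No Free Lunch reference: in the Wolpert and Macready form, performance averaged uniformly over all target functions is identical for every algorithm, so any advantage must come from a learning bias matched to the actual environment; learning to learn is then one way of acquiring such a bias. The standard statement, sketched in their notation rather than the thread's:

    \sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)

for any two algorithms a_1 and a_2, where f ranges over all target functions, m is the sample size, and d_m^y is the observed sequence of values.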

[agi] Re: Representing Thoughts

2005-09-23 Thread William Pearson
On 9/20/05, Yan King Yin [EMAIL PROTECTED] wrote: William wrote: I suspect that it will be quite important in competition between agents. If one agent has a constant method of learning it will be more easily predicted by an agent that can figure out its constant method (if it is simple).

Re: [agi] AGI bottlenecks

2006-06-02 Thread William Pearson
On 01/06/06, Richard Loosemore [EMAIL PROTECTED] wrote: I had similar feelings about William Pearson's recent message about systems that use reinforcement learning: A reinforcement scenario, from wikipedia is defined as Formally, the basic reinforcement learning model consists of: 1. a
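
The quoted definition truncates; per the same Wikipedia article (and the fuller quote under "Motivational system" below), the model consists of a set of environment states S, a set of actions A, and a set of scalar rewards. A minimal self-contained sketch of that perceive-act loop, with a toy bandit environment and random agent that are illustrative rather than from the thread:

    import random

    class BanditEnv:
        # Toy environment: two actions; action 1 pays off more often.
        def reset(self):
            return 0                        # single dummy state in S
        def step(self, action):
            p = 0.8 if action == 1 else 0.2
            reward = 1.0 if random.random() < p else 0.0
            return 0, reward, False         # (next state, scalar reward, done)

    class RandomAgent:
        def act(self, state):
            return random.choice([0, 1])    # policy: S -> A
        def learn(self, state, reward):
            pass                            # a real learner would update here

    def run_episode(env, agent, max_steps=100):
        state = env.reset()
        total = 0.0
        for _ in range(max_steps):
            action = agent.act(state)
            state, reward, done = env.step(action)
            agent.learn(state, reward)      # update from the scalar reward signal
            total += reward
            if done:
                break
        return total

    print(run_episode(BanditEnv(), RandomAgent()))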

[agi] The weak option

2006-06-07 Thread William Pearson
I don't think this has been raised before; the only similar suggestion is that we should start by understanding systems that might be weak and then convert them to a strong system, rather than aiming for weakness that is hard to convert to a strong system. Caveats: 1) I don't believe strong

Re: [agi] The weak option

2006-06-08 Thread William Pearson
On 08/06/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: William Pearson wrote: I tried posting this to SL4 and it got sucked into some vacuum. As far as I can tell, it went normally through SL4. I got it. It is harder to tell on Gmail than on other email systems what gets through; my

Re: [agi] The weak option

2006-06-08 Thread William Pearson
On 08/06/06, William Pearson [EMAIL PROTECTED] wrote: With regards to how careful I am being with the system: one of the central design guidances for the system is to assume the programs in the hardware are selfish and may do things I don't want. The failure mode I envisage more than

Motivational system was Re: [agi] AGI bottlenecks

2006-06-09 Thread William Pearson
On 09/06/06, Richard Loosemore [EMAIL PROTECTED] wrote: Likewise, an artificial general intelligence is not a set of environment states S, a set of actions A, and a set of scalar rewards in the Reals.) Watching history repeat itself is pretty damned annoying. While I would agree with you

Re: [agi] Motivational system

2006-06-09 Thread William Pearson
On 09/06/06, Dennis Gorelik [EMAIL PROTECTED] wrote: William, It is very simple and I wouldn't apply it to everything that behaviourists would (we don't get direct rewards for solving crossword puzzles). How do you know that we don't get direct rewards on solving crossword puzzles (or any

Re: [agi] Reward versus Punishment? .... Motivational system

2006-06-10 Thread William Pearson
On Fri, 09 Jun 2006 19:13:19 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: What about punishment? Currently I see it as the programs in control of outputting (and hence the ones to get reward) losing that control and the chance to get reinforcement. However, experiment or better theory

Re: [agi] Bayes in the brain

2006-06-11 Thread William Pearson
On 11/06/06, Philip Goetz [EMAIL PROTECTED] wrote: An article with an opposing point of view than the one I mentioned yesterday... http://www.bcs.rochester.edu/people/alex/pub/articles/KnillPougetTINS04.pdf Why do you find whether there are bayesian estimators in the brain an interesting

Re: [agi] Reward versus Punishment? .... Motivational system

2006-06-12 Thread William Pearson
On 12/06/06, James Ratcliff [EMAIL PROTECTED] wrote: Will, Right now I would think that a negative reward would be usable for this aspect. I agree it is usable. But I am not sure it is necessary; you can just normalise the reward value. Let's say for most states you normally give 0 for a
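
The normalisation point can be made concrete: if punishment is only reward shifted down the scale, subtracting a baseline turns it into a relatively low reward without changing what an expected-reward maximiser prefers. A toy sketch (the baseline scheme is illustrative, not code from the thread):

    # Shift raw rewards by a baseline so "punishment" becomes just a low reward.
    def normalise(raw_reward, baseline):
        return raw_reward - baseline

    rewards = [-5.0, 0.0, 3.0]        # raw signal: punishment, neutral, reward
    baseline = min(rewards)           # map the worst case to zero
    print([normalise(r, baseline) for r in rewards])   # [0.0, 5.0, 8.0]

The ordering of outcomes, and hence the agent's preferences, is unchanged; only the zero point moves.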

Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread William Pearson
On 13/06/06, sanjay padmane [EMAIL PROTECTED] wrote: On the suggestion of creating a wiki, we already have it here http://en.wikipedia.org/wiki/Artificial_general_intelligence , as you know, and its exposure is much ... I wouldn't want to pollute the wiki proper with our unverified claims.

Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread William Pearson
On 13/06/06, Yan King Yin [EMAIL PROTECTED] wrote: Will, I've been thinking of hosting a wiki for some time, but not sure if we have reached critical mass here. Possibly not. I may just collate my own list of questions and answers until the time does come. When we get down to the details,

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-15 Thread William Pearson
On 15/06/06, arnoud [EMAIL PROTECTED] wrote: On Thursday 15 June 2006 21:35, Ben Goertzel wrote: If this doesn't seem to be the case, this is because some concepts are so abstract that they don't seem to be tied to perception anymore. It is obvious that they are (directly) tied to

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-18 Thread William Pearson
On 17/06/06, arnoud [EMAIL PROTECTED] wrote: As long as some of those things are learnt by watching humans doing them, in practice I agree with you. In theory, though, a sufficiently powerful giant look-up table could also seem to learn these things, so I am also going to look at the

Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-06 Thread William Pearson
On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote: On 7/6/06, William Pearson [EMAIL PROTECTED] wrote: How would you define the sorts of tasks humans are designed to carry out? I can't see an easy way of categorising all the problems individual humans have shown their worth

Re: [agi] General problem solving

2006-07-08 Thread William Pearson
On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote: On 7/6/06, William Pearson [EMAIL PROTECTED] wrote: A generic PC almost fulfils the description: programmable, generic, and, if given the right software to start with, able to solve problems. But I am guessing it is missing something. As someone

Re: [agi] General problem solving

2006-07-08 Thread William Pearson
On 08/07/06, Russell Wallace [EMAIL PROTECTED] wrote: On 7/8/06, William Pearson [EMAIL PROTECTED] wrote: Agreed, but I think looking at it in terms of a single language is a mistake. Humans use body language and mimicry to acquire spoken language and spoken/body to acquire written

Re: [agi] AGI open source license

2006-08-28 Thread William Pearson
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote: On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote: Google wouldn't work at all well under the GPL. Why? Because if everyone had their own little Google, it would be quite useless [1]. The system's usefulness comes from the fact that there is

Re: [agi] AGI open source license

2006-08-28 Thread William Pearson
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote: On 8/28/06, William Pearson [EMAIL PROTECTED] wrote: If the macro AGI can't translate between differences in language or representation that the micro AGIs have acquired from being open source, then we probably haven't done our job

Re: [agi] AGI open source license

2006-08-28 Thread William Pearson
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote: On 8/28/06, William Pearson [EMAIL PROTECTED] wrote: We may well not have enough computing resources available to do it on the cheap using local resources. But that is the approach I am inclined to take; I'll just wait until we do

Re: [agi] AGI open source license

2006-08-28 Thread William Pearson
On 28/08/06, Russell Wallace [EMAIL PROTECTED] wrote: On 8/28/06, William Pearson [EMAIL PROTECTED] wrote: Things like hooking it up to low-quality sound/video feeds and having it judge by posture/expression/time of day what the most useful piece of information in the RSS feeds/email etc

[agi] Voodoo meta-learning and knowledge representations

2006-09-27 Thread William Pearson
I am interested in meta-learning voodoo, so I thought I would add my view on KR in this type of system. If you are interested in meta-learning the KR you have to ditch thinking about knowledge as the lowest level of changeable information in your system, and just think about changing state.

Re: [agi] Voodoo meta-learning and knowledge representations

2006-09-27 Thread William Pearson
On 27/09/06, Richard Loosemore [EMAIL PROTECTED] wrote: William Pearson wrote: I am interested in meta-learning voodoo, so I thought I would add my view on KR in this type of system. If you are interested in meta-learning the KR you have to ditch thinking about knowledge as the lowest level

Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread William Pearson
Richard Loosemore: As for your suggestion about the problem being centered on the use of model-theoretic semantics, I have a couple of remarks. One is that YES! this is a crucial issue, and I am so glad to see you mention it. I am going to have to read your paper and discuss with you

Re: [agi] Information extraction from inputs and an experimental path forward

2006-11-22 Thread William Pearson
On 21/11/06, Pei Wang [EMAIL PROTECTED] wrote: That sounds better to me. In general, I'm against attempts to get complete, consistent, certain, and absolute descriptions (of either internal or external state), and prefer partial, not-necessarily-consistent, uncertain, and relative ones --- not

Re: Re: [agi] Language acquisition in humans: How bound up is it with tonal pattern recognition...?

2006-12-02 Thread William Pearson
On 02/12/06, Ben Goertzel [EMAIL PROTECTED] wrote: I think that our propensity for music is pretty damn simple: it's a side-effect of the general skill-learning machinery that makes us memetic substrates. Tunes are trajectories in n-space as are the series of motor signals involved in

Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread William Pearson
On 14/02/07, Ben Goertzel [EMAIL PROTECTED] wrote: Does anyone know of a well-thought-out list of this sort? Of course I could make one by surveying the cognitive psych literature, but why reinvent the wheel? None that I have come across. Biases that I have come across are things like paying

Re: [agi] dopamine and reward prediction error

2007-04-13 Thread William Pearson
On 13/04/07, Richard Loosemore [EMAIL PROTECTED] wrote: To convey this subtlety as simply as I can, I would suggest that you ask yourself how much intelligence is being assumed in the preprocessing system that does the work of (a) picking out patterns to be considered by the system, and (b)

Re: [agi] Circular definitions of intelligence

2007-04-26 Thread William Pearson
On 26/04/07, Richard Loosemore [EMAIL PROTECTED] wrote: Consider that, folks, to be a challenge: to those who think there is such a definition, I await your reply. While I don't think it is the sum of all intelligence, I'm studying something I think is a precondition of being intelligent. That

[agi] What would motivate you to put work into an AGI project?

2007-05-02 Thread William Pearson
My current thinking is that it will take lots of effort by multiple people to take a concept or prototype AGI and turn it into something that is useful in the real world. And even if one or two people worked on the correct concept for their whole lives, it may not produce the full thing; they may hit

Re: [agi] Tommy

2007-05-11 Thread William Pearson
On 11/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: Tommy, the scientific experiment and engineering project, is almost all about concept formation. He gets a voluminous input stream but is required to parse it into coherent concepts (e.g. objects, positions, velocities, etc). None of

Re: [agi] Beyond AI chapters up on Kurzweil

2007-06-01 Thread William Pearson
On 01/06/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: Ray Kurzweil has arranged to put a couple of sample chapters up on his site: Kinds of Minds http://www.kurzweilai.net/meme/frame.html?main=/articles/art0707.html The Age of Virtuous Machines

[agi] New AI related charity?

2007-06-04 Thread William Pearson
Is there space within the charity world for another one related to intelligence but with a different focus from SIAI? Rather than specifically funding an AGI effort, or creating one with a specific goal state of humanity in mind, it would be dedicated to funding a search for the

Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread William Pearson
On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote: Suppose you build a human level AGI, and argue that it is not autonomous no matter what it does, because it is deterministically executing a program. I suspect an AGI that executes one fixed unchangeable program is not physically possible.

Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread William Pearson
On 05/06/07, Ricardo Barreira [EMAIL PROTECTED] wrote: On 6/5/07, William Pearson [EMAIL PROTECTED] wrote: On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote: Suppose you build a human level AGI, and argue that it is not autonomous no matter what it does, because it is deterministically

Re: [agi] about AGI designers

2007-06-06 Thread William Pearson
On 06/06/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: There're several reasons why AGI teams are fragmented and AGI designers don't want to join a consortium: A. believe that one's own AGI design is superior B. want to ensure that the global outcome of AGI is friendly C. want to get

Re: [agi] AGI introduction

2007-06-23 Thread William Pearson
On 22/06/07, Pei Wang [EMAIL PROTECTED] wrote: Hi, I put a brief introduction to AGI at http://nars.wang.googlepages.com/AGI-Intro.htm , including an AGI Overview followed by Representative AGI Projects. It is basically a bunch of links and quotations organized according to my opinion.

Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson
On 23/06/07, Mike Tintner [EMAIL PROTECTED] wrote: - Will Pearson: My theory is that the computer architecture has to be more brain-like than a simple stored-program architecture in order to allow resource-constrained AI to be implemented efficiently. The way that I am investigating is an

Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson
On 24/06/07, Bo Morgan [EMAIL PROTECTED] wrote: On Sun, 24 Jun 2007, William Pearson wrote: ) I think the brain's programs have the ability to protect their own ) storage from interference from other programs. The architecture will ) only allow programs that have proven themselves better

Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread William Pearson
Sorry, sent accidentally while half finished. Bo wrote: This is only partially true, and mainly only for the neocortex, right? For example, removing small parts of the brainstem results in coma. I'm talking about control of memory access, and by memory access I am referring to synaptic changes

[agi] Re: An amazing blind (?!!) boy (and a super mom!)

2007-09-27 Thread William Pearson
On 27/09/2007, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: This is why the word impossible has no place outside of math departments. Original Message Subject: [bafuture] An amazing blind (?!!) boy (and a super mom!) Date: Thu, 27 Sep 2007 11:49:42 -0700 From: Kennita

Re: [agi] Religion-free technical content

2007-09-30 Thread William Pearson
On 29/09/2007, Vladimir Nesov [EMAIL PROTECTED] wrote: Although it indeed seems off-topic for this list, calling it a religion is ungrounded and in this case insulting, unless you have specific arguments. Killing huge amounts of people is a pretty much possible venture for regular humans, so

Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread William Pearson
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote: --- William Pearson [EMAIL PROTECTED] wrote: On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote: The real danger is this: a program intelligent enough to understand software would be intelligent enough to modify itself. Well

Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-02 Thread William Pearson
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote: A quick question: do people agree with the scenario where, once a non-super-strong RSI AI becomes mainstream, it will replace the OS as the lowest level of software? For the system that it is running itself on? Yes, eventually. For

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread William Pearson
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: We have good reason to believe, after studying systems like GoL, that even if there exists a compact theory that would let us predict the patterns from the rules (equivalent to predicting planetary dynamics given the inverse square law
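
For readers without the rules to hand: Life updates every cell from its eight neighbours (birth on exactly three live neighbours, survival on two or three), and that compact rule is all there is, which is what makes the predict-patterns-from-rules question bite. A minimal step function over a set-of-live-cells representation:

    from collections import Counter

    def step(live):
        # live: set of (x, y) coordinates of live cells.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(step(glider))    # the glider after one generation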

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread William Pearson
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: William Pearson wrote: On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: We have good reason to believe, after studying systems like GoL, that even if there exists a compact theory that would let us predict the patterns

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-06 Thread William Pearson
On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: I have a question for you, Will. Without loss of generality, I can change my use of Game of Life to a new system called GoL(-T) which is all of the possible GoL instantiations EXCEPT the tiny subset that contain Turing Machine

Re: Turing Completeness of a Lump of Dirt [WAS Re: [agi] Conway's Game of Life and Turing machine equivalence]

2007-10-08 Thread William Pearson
On 08/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: William Pearson wrote: On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: William Pearson wrote: On 07/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: The TM implementation not only has no relevance to the behavior

Re: Turing Completeness of a Lump of Dirt [WAS Re: [agi] Conway's Game of Life and Turing machine equivalence]

2007-10-08 Thread William Pearson
On 08/10/2007, Mark Waser [EMAIL PROTECTED] wrote: From: William Pearson [EMAIL PROTECTED] Laptops aren't TMs. Please read the wiki entry to see that my laptop isn't a TM. But your laptop can certainly implement/simulate a Turing Machine (which was the obvious point of the post(s) that you

Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread William Pearson
On 12/10/2007, Edward W. Porter [EMAIL PROTECTED] wrote: (2) With regard to BookWorld -- if all the world's books were in electronic form and you had a massive amount of AGI hardware to read them all, I think you would be able to gain a tremendous amount of world knowledge from them, and

Re: [agi] Poll

2007-10-18 Thread William Pearson
On 18/10/2007, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware,

[agi] Computational formalisms appropriate to adaptive and intelligent systems?

2007-10-30 Thread William Pearson
I have recently been trying to find better formalisms than TMs for different classes of adaptive systems (including the human brain), and have come across the Persistent Turing Machines [1], which seem to be a good first step in that direction. Their expressiveness is claimed to be greater than
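
As a sketch of what distinguishes the PTM model: computation is an unbounded sequence of macrosteps, each mapping (persistent worktape contents, new input) to (updated worktape, output), so earlier interactions condition later ones in a way a single classical TM run does not capture. A toy rendering of that interface (class and method names are mine, not from the paper):

    class ToyPTM:
        # Each macrostep reads an input, consults and rewrites a worktape
        # that persists between interactions, and emits an output.
        def __init__(self):
            self.worktape = []            # persists across macrosteps

        def macrostep(self, token):
            self.worktape.append(token)   # state depends on the whole history
            return len(self.worktape)     # output conditioned on past inputs

    m = ToyPTM()
    print([m.macrostep(t) for t in "aaa"])   # [1, 2, 3]: same input, different outputs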

Re: [agi] Computational formalisms appropriate to adaptive and intelligent systems?

2007-10-31 Thread William Pearson
On 30/10/2007, Pei Wang [EMAIL PROTECTED] wrote: Thanks for the link. I agree that this work is moving in an interesting direction, though I'm afraid that for AGI (and adaptive systems in general), TM may be too low as a level of description --- the conclusions obtained in this kind of work

Re: [agi] NLP + reasoning?

2007-11-06 Thread William Pearson
On 06/11/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Will Pearson asked I'm also wondering what you consider success in this case. For example do you want the system to be able to maintain conversational state such as would be needed to deal with the following. For all following

Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread William Pearson
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: My impression is that most machine learning theories assume a search space of hypotheses as a given, so it is out of their scope to compare *between* learning structures (eg, between logic and neural networks). Algorithmic learning

Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread William Pearson
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Thanks for the input. There's one perplexing theorem, in the paper about the algorithmic complexity of programming, that the language doesn't matter that much, i.e., the algorithmic complexity of a program in different languages only
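
The theorem being gestured at is presumably the invariance theorem of algorithmic information theory: for any two universal languages L_1 and L_2 there is a constant c (roughly, the length of an interpreter for one written in the other) bounding the difference in complexity of every object:

    |K_{L_1}(x) - K_{L_2}(x)| \le c_{L_1, L_2} \quad \text{for all } x

The constant is independent of x, which is why the choice of language "doesn't matter that much" asymptotically, though c can be large for any particular x of interest.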

Re: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread William Pearson
On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote: I'm sorry I'm not going to be able to provide much illumination for you at this time. Just the few sentences of yours quoted above, while of a level of comprehension equal to or better than the average on this list, demonstrate epistemological

Re: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread William Pearson
On 09/11/2007, Jef Allbright [EMAIL PROTECTED] wrote: On 11/8/07, William Pearson [EMAIL PROTECTED] wrote: On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote: This discussion reminds me of hot rod enthusiasts arguing passionately about how to build the best racing car, while

Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread William Pearson
On 21/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote: Benjamin, That's a massive amount of work, but most AGI research and development can be shared with narrow AI research and development. There is plenty of overlap between AGI and narrow AI, but not as much as you suggest... That's only

[agi] Flexibility of AI vs. a PC

2007-12-05 Thread William Pearson
One thing that has been puzzling me for a while is why some people expect an intelligence to be less flexible than a PC. What do I mean by this? A PC can have any learning algorithm, bias or representation of data we care to create. This raises another question: how are we creating a

Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread William Pearson
On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote: Matt, So if it is perceived as something that increases a machine's vulnerability, it seems to me that would be one more reason for people to avoid using it. Ed Porter Why are you having this discussion on an AGI list? Will Pearson

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread William Pearson
On 07/01/2008, Robert Wensman [EMAIL PROTECTED] wrote: I think what you really want to use is the concept of adaptability, or maybe you could say you want an AGI system that is programmed in an indirect way (meaning that the program instructions are very far away from what the system actually

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread William Pearson
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote: Processing a dictionary in a useful way requires quite sophisticated language understanding ability, though. Once you can do that, the hard part of the problem is already solved ;-) While this kind of system requires sophisticated

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread William Pearson
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote: I'll be a lot more interested when people start creating NLP systems that are syntactically and semantically processing statements *about* words, sentences and other linguistic structures and adding syntactic and semantic rules

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread William Pearson
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote: On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote: On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote: I'll be a lot more interested when people start creating NLP systems that are syntactically

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-11 Thread William Pearson
Vladimir, What do you mean by difference in processing here? I said the difference was after the initial processing. By processing I meant syntactic and semantic processing. After processing the syntax-related sentence, the realm of action is changing the system itself, rather than knowledge of

Re: [agi] Definitions of Intelligence and the problem problem was Ben's Definition

2008-01-12 Thread William Pearson
My problem with both these definitions (and the one underpinning AIXI) is that they either don't define the word problem well or define it in a limited way. For example, AIXI defines the solution of a problem as finding a function that transforms an input to an output. No mention of having

[agi] Legg and Hutter on resource efficiency was Re: Yawn.

2008-01-14 Thread William Pearson
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote: On Jan 13, 2008 7:40 PM, Richard Loosemore [EMAIL PROTECTED] wrote: And, as I indicated, my particular beef was with Shane Legg's paper, which I found singularly content-free. Shane Legg and Marcus Hutter have a recent publication on this
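
For reference, the definition at the centre of that Legg and Hutter publication scores an agent by its expected performance across all computable reward environments, weighted by each environment's simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K is Kolmogorov complexity, and V_\mu^\pi is the expected total reward of agent \pi in environment \mu. The definition deliberately ignores resource efficiency, which is the point at issue in this thread.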

Re: [agi] Comments on Pei Wang 's What Do You Mean by “AI”?

2008-01-14 Thread William Pearson
Something I noticed while trying to fit my definition of AI into the categories given. There is another way that definitions can be principled. This similarity would not be on the function of percepts to action. Instead it would require a similarity on the function of percepts to internal state

Re: [agi] Comments on Pei Wang 's What Do You Mean by “AI”?

2008-01-14 Thread William Pearson
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote: Will, The situation you mentioned is possible, but I'd assume, given the similar functions from percepts to states, there must also be similar functions from states to actions, that is, A_C = G_C(S_C), A_H = G_H(S_H), G_C ≈ G_H. Pei, Sorry I

Re: [agi] CEMI Field

2008-01-23 Thread William Pearson
On 23/01/2008, Günther Greindl [EMAIL PROTECTED] wrote: I find the theory very compelling, as I always found the functionalistic AI approach a bit lacking whereas I am a full endorser of a materialistic/monist approach (and I believe strong AI is feasible). EM fields arising through the

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread William Pearson
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically

[agi] Types of automatic programming? was Re: Singularity Outcomes

2008-01-28 Thread William Pearson
On 28/01/2008, Bob Mottram [EMAIL PROTECTED] wrote: On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote: When your computer can write and debug software faster and more accurately than you can, then you should worry. A tool that could generate computer code from formal specifications

Re: [agi] The Test

2008-02-04 Thread William Pearson
On 04/02/2008, Mike Tintner [EMAIL PROTECTED] wrote: (And it's a fairly safe bet, Joseph, that no one will now do the obvious thing and say.. well, one idea I have had is..., but many will say, the reason why we can't do that is...) And maybe they would have a reason for doing so. I would like

Re: [agi] The Test

2008-02-05 Thread William Pearson
On 05/02/2008, Mike Tintner [EMAIL PROTECTED] wrote: William P: I can't think of any external test that can't be fooled by a giant look-up table (Ned Block thought of this argument first). A by-definition requirement of a general test is that the system builder doesn't set it, and can't
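
Block's construction can be put in one toy structure: an agent that maps every possible interaction history to a canned reply will pass any finite external test its table anticipates, while computing nothing anyone would call understanding. A sketch (table contents illustrative):

    # Ned Block's "Blockhead": a giant look-up table keyed on the entire
    # conversation history. Behaviourally adequate on any history it covers.
    table = {
        (): "Hello.",
        ("Hello.", "How are you?"): "Fine, thanks.",
    }

    def blockhead(history):
        return table.get(tuple(history), "I don't follow.")

    print(blockhead([]))                           # Hello.
    print(blockhead(["Hello.", "How are you?"]))   # Fine, thanks.

The table is astronomically large but finite for any bounded test, which is why purely external tests underdetermine what is inside.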

Re: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread William Pearson
On 15/02/2008, Ed Porter [EMAIL PROTECTED] wrote: Mike, You have been pushing this anti-symbol/pro-image dichotomy for a long time. I don't understand it. Images are sets, or nets, of symbols. So, if, as you say, all symbols provide an extremely limited *inventory of

[agi] Thought experiment on informationally limited systems

2008-02-28 Thread William Pearson
I'm going to try and elucidate my approach to building an intelligent system, in a roundabout fashion. This is the problem I am trying to solve. Imagine you are designing a computer system to solve an unknown problem, and you have these constraints: A) Limited space to put general information

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: I'm going to try and elucidate my approach to building an intelligent system, in a round about fashion. This is the problem I am trying to solve. Imagine you are designing

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 29/02/2008, Abram Demski [EMAIL PROTECTED] wrote: I'm an undergrad who's been lurking here for about a year. It seems to me that many people on this list take Solomonoff Induction to be the ideal learning technique (for unrestricted computational resources). I'm wondering what justification
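
The usual justification starts from the Solomonoff prior: every program that reproduces the observations contributes weight exponentially decaying in its length, on a fixed universal prefix machine U:

    M(x) = \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-|p|}

Prediction then weighs possible continuations of x by this semimeasure. The contested part, as the question implies, is why "generated by a short program" should be the right class of assumptions about the world.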

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 01/03/2008, Jey Kottalam [EMAIL PROTECTED] wrote: On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote: Keeping the same general shape of the system (trying to account for all the detail) means we are likely to overfit, due to trying to model systems

[agi] AGI 08 blogging?

2008-03-02 Thread William Pearson
Anyone blogging what they are finding interesting in AGI 08? Will Pearson

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: Note I want something different than computational universality. E.g. Von Neumann architectures are generally programmable, Harvard architectures aren't. As they can't

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote: You must first define its existing skills, then define the new challenge with some degree of precision - then explain the principles by which it will extend its skills. It's those principles of extension/generalization that are the

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote: Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar situation and domain BY ITSELF. And your FIRST and only response to the problem you set was to say: I'll get someone to tell it what

Re: [agi] Flies Neural Networks

2008-03-16 Thread William Pearson
On 16/03/2008, Ed Porter [EMAIL PROTECTED] wrote: I am not an expert on neural nets, but from my limited understanding it is far from clear exactly what the new insight into neural nets referred to in this article is, other than that the timing of neuron firings is important in the brain, which

Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread William Pearson
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote: To try to understand what I am talking about, start by imagining a simulation of some physical operation, like a part of a complex factory in a Sim City kind of game. In this kind of high-level model no one would ever imagine all of the

Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: Simple systems can be computationally universal, so it's not an issue in itself. On the other hand, no learning algorithm is universal; there are always distributions that given algorithms will learn miserably. The problem is to find a

Re: I know, I KNOW :-) WAS Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote: First a riddle: What can be all learning algorithms, but is none? A human being! Well, my answer was a common PC, which I hope is more illuminating because we know it well. But a human being works, as does any future AI design, as far as I am

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote: Although I sympathize with some of Hawkins's general ideas about unsupervised learning, his current HTM framework is unimpressive in comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional nets and the

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote: Intelligence is not *only* about the modalities of the data you get, but modalities are certainly important. A deafblind person can still learn a lot about the world with taste, smell, and touch, but the senses one has access to define

Re: [agi] Instead of an AGI Textbook

2008-03-31 Thread William Pearson
On 26/03/2008, Ben Goertzel [EMAIL PROTECTED] wrote: Hi all, A lot of students email me asking me what to read to get up to speed on AGI. So I started a wiki page called Instead of an AGI Textbook, http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics I've

[agi] The resource allocation problem

2008-04-01 Thread William Pearson
The resource allocation problem, and why it needs to be solved first. How much memory and processing power should you apply to the following things: visual processing, reasoning, sound processing, seeing past experiences and how they apply to the current one, searching for new ways of doing things
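
One simple baseline for comparison (an illustration, not a claim about the design under discussion): split a fixed budget among modules in proportion to a running estimate of each module's usefulness, and let the estimates, and hence the allocation, drift with experience.

    # Proportional resource allocation over usefulness estimates (illustrative).
    def allocate(budget, usefulness):
        total = sum(usefulness.values())
        return {module: budget * u / total for module, u in usefulness.items()}

    usefulness = {"vision": 4.0, "reasoning": 3.0, "sound": 1.0, "search": 2.0}
    print(allocate(100.0, usefulness))   # e.g. vision gets 40.0 units

The hard part such a baseline dodges is where the usefulness numbers come from when modules' contributions interact.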

Re: [agi] The resource allocation problem

2008-04-04 Thread William Pearson
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote: The resource allocation problem and why it needs to be solved first How much memory and processing power should you apply to the following things

Re: [agi] The resource allocation problem

2008-04-05 Thread William Pearson
On 05/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote: On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: This question supposes a specific kind of architecture, where these things are in some sense

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-20 Thread William Pearson
On 19/04/2008, Ed Porter [EMAIL PROTECTED] wrote: WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? I'm not quite sure how to describe it, but this brief sketch will have to do until I get some more time. These may be in some new AI material, but I haven't had the chance to read up much recently.
