Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/9 Ben Goertzel b...@goertzel.org: This is an attempt to articulate a virtual world infrastructure that will be adequate for the development of human-level AGI http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf goertzel.org seems to be down. So I can't refresh my memory of the paper.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/13 Ben Goertzel b...@goertzel.org: Yes, I'm expecting the AI to make tools from blocks and beads No, I'm not attempting to make a detailed simulation of the human brain/body, just trying to use vaguely humanlike embodiment and high-level mind-architecture together with computer science

Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/29 Ben Goertzel b...@goertzel.org: Hi, I expanded a previous blog entry of mine on hypercomputation and AGI into a conference paper on the topic ... here is a rough draft, on which I'd appreciate commentary from anyone who's knowledgeable on the subject:

Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/30 Ben Goertzel b...@goertzel.org: It seems to come down to the simplicity measure... if you can have simplicity(Turing program P that generates lookup table T) < simplicity(compressed lookup table T) then the Turing program P can be considered part of a scientific explanation...
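A rough, self-contained illustration of that condition (my own toy example, not from the paper; the generating program and the numbers are invented): compare the length of a short program that generates a large lookup table against the size of the table even after compression.

    import zlib

    # Hypothetical example: a one-line program that generates a large lookup table.
    program_source = "table = [n * n % 7919 for n in range(100000)]"

    # Materialise the table the program describes and compress it.
    table = [n * n % 7919 for n in range(100000)]
    table_bytes = ",".join(map(str, table)).encode()
    compressed_table = zlib.compress(table_bytes, 9)

    # The condition quoted above: the generating program is far shorter than
    # even the compressed table, so citing the program is the better explanation.
    print(len(program_source), len(compressed_table))
    print(len(program_source) < len(compressed_table))  # True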

Re: [agi] Taleb on Probability

2008-11-08 Thread William Pearson
You can read the full essay online here http://www.edge.org/3rd_culture/taleb08/taleb08.1_index.html Will 2008/11/8 Mike Tintner [EMAIL PROTECTED]: REAL LIFE IS NOT A CASINO By Nassim Nicholas Taleb On New Year's Day I received a prescient essay from Nassim Taleb, author of The Black

Re: [agi] Occam's Razor and its abuse

2008-10-28 Thread William Pearson
2008/10/28 Ben Goertzel [EMAIL PROTECTED]: On the other hand, I just want to point out that to get around Hume's complaint you do need to make *some* kind of assumption about the regularity of the world. What kind of assumption of this nature underlies your work on NARS (if any)? Not

On architecture was Re: [agi] On programming languages

2008-10-24 Thread William Pearson
2008/10/24 Mark Waser [EMAIL PROTECTED]: But I thought I'd mention that for OpenCog we are planning on a cross-language approach. The core system is C++, for scalability and efficiency reasons, but the MindAgent objects that do the actual AI algorithms should be creatable in various

Re: [agi] Re: Value of philosophy

2008-10-20 Thread William Pearson
2008/10/20 Mike Tintner [EMAIL PROTECTED]: (There is a separate, philosophical discussion, about feasibility in a different sense - the lack of a culture of feasibility, which is perhaps, subconsciously what Ben was also referring to - no one, but no one, in AGI, including Ben, seems

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread William Pearson
2008/10/19 Dr. Matthias Heger [EMAIL PROTECTED]: The process of outwardly expressing meaning may be fundamental to any social intelligence but the process itself needs not much intelligence. Every email program can receive meaning, store meaning and it can express it outwardly in order to

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread William Pearson
2008/10/18 Ben Goertzel [EMAIL PROTECTED]: 1) There definitely IS such a thing as a better algorithm for intelligence in general. For instance, compare AIXI with an algorithm called AIXI_frog, that works exactly like AIXI, but in between each two of AIXI's computational operations, it

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-17 Thread William Pearson
2008/10/17 Ben Goertzel [EMAIL PROTECTED]: The difficulty of rigorously defining practical intelligence doesn't tell you ANYTHING about the possibility of RSI ... it just tells you something about the possibility of rigorously proving useful theorems about RSI ... More importantly, you

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
2008/10/14 Terren Suydam [EMAIL PROTECTED]: --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can outproduce 20 ordinary

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
Hi Terren, I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with

Re: [agi] COMP = false

2008-10-05 Thread William Pearson
2008/10/4 Colin Hales [EMAIL PROTECTED]: Hi Will, It's not an easy thing to fully internalise the implications of quantum degeneracy. I find physicists and chemists have no trouble accepting it, but in the disciplines above that, various levels of mental brick walls are in place. Unfortunately

Re: [agi] COMP = false

2008-10-04 Thread William Pearson
Hi Colin, I'm not entirely sure that computers can implement consciousness. But I don't find your arguments sway me one way or the other. A brief reply follows. 2008/10/4 Colin Hales [EMAIL PROTECTED]: Next empirical fact: (v) When you create a turing-COMP substrate the interface with space

[agi] Waiting to gain information before acting

2008-09-21 Thread William Pearson
I've started to wander away from my normal sub-cognitive level of AI, and have been thinking about reasoning systems. One scenario I have come up with is the 'foresight of extra knowledge' scenario. Suppose Alice and Bob have decided to bet $10 on the weather in Alaska in 10 days' time, whether
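A minimal expected-value sketch of why waiting can pay (the message is truncated here, so the forecast reliability and prior below are invented purely for illustration):

    stake = 10.0
    p_snow = 0.5               # assumed prior that it will snow on the day

    # Bet now: with a 50/50 prior there is no edge on either side of the bet.
    ev_bet_now = max(p_snow, 1 - p_snow) * stake - min(p_snow, 1 - p_snow) * stake

    # Wait for a forecast (assumed to be right 80% of the time) that arrives
    # before the bet must be placed, then bet on the side it favours.
    p_forecast_correct = 0.8
    ev_after_waiting = p_forecast_correct * stake - (1 - p_forecast_correct) * stake

    print(ev_bet_now)        # 0.0
    print(ev_after_waiting)  # 6.0 -- the extra knowledge is worth waiting for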

Re: [agi] self organization

2008-09-17 Thread William Pearson
2008/9/16 Terren Suydam [EMAIL PROTECTED]: Hi Will, Such an interesting example in light of a recent paper, which deals with measuring the difference between activation of the visual cortex and blood flow to the area, depending on whether the stimulus was subjectively invisible. If the

Re: [agi] self organization

2008-09-16 Thread William Pearson
2008/9/15 Vladimir Nesov [EMAIL PROTECTED]: I guess that intuitively, the argument goes like this: 1) the economy is more powerful than individual agents; it allows the power of intelligence in individual agents to be increased; 2) therefore, the economy has an intelligence-increasing potency; 3) so, we can

Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-09 Thread William Pearson
2008/9/8 Benjamin Johnston [EMAIL PROTECTED]: Does this issue actually crop up in GA-based AGI work? If so, how did you get around it? If not, would you have any comments about what makes AGI special so that this doesn't happen? Does it also happen in humans? I'd say yes, therefore it might

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]: MT: By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. Will: If only that could be guaranteed! We would never have system hangs or deadlocks. Even if it could be made so, computer systems

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/6 Mike Tintner [EMAIL PROTECTED]: Will, Yes, humans are manifestly a RADICALLY different machine paradigm- if you care to stand back and look at the big picture. Employ a machine of any kind and in general, you know what you're getting - some glitches (esp. with complex programs) etc

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]: By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. If only that could be guaranteed! We would never have system hangs or deadlocks. Even if it could be made so, computer systems would not

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread William Pearson
2008/9/4 Mike Tintner [EMAIL PROTECTED]: Terren, If you think it's all been said, please point me to the philosophy of AI that includes it. A programmed machine is an organized structure. A keyboard (and indeed a computer with keyboard) are something very different - there is no

Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread William Pearson
2008/9/2 Ben Goertzel [EMAIL PROTECTED]: Yes, I agree that your Turing machine approach can model the same situations, but the different formalisms seem to lend themselves to different kinds of analysis more naturally... I guess it all depends on what kinds of theorems you want to

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread William Pearson
2008/8/28 Valentina Poletti [EMAIL PROTECTED]: Got ya, thanks for the clarification. That brings up another question. Why do we want to make an AGI? To understand ourselves as intelligent agents better? It might enable us to have decent education policy, rehabilitation of criminals. Even if

[agi] Recursive self-change: some definitions

2008-09-02 Thread William Pearson
I've put up a short, fairly dense, unreferenced paper (basically an email, but in a pdf to allow for maths) here: http://codesoup.sourceforge.net/RSC.pdf Any thoughts/feedback welcome. I'll try and make it more accessible at some point, but I don't want to spend too much time on it at the

Re: [agi] Recursive self-change: some definitions

2008-09-02 Thread William Pearson
2008/9/2 Ben Goertzel [EMAIL PROTECTED]: Hmmm.. Rather, I would prefer to model a self-modifying AGI system as something like F(t+1) = (F(t))( F(t), E(t) ) where E(t) is the environment at time t and F(t) is the system at time t Are you assuming the system knows the environment totally?
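A toy rendering of that recursion (my own sketch, not OpenCog's actual model): the system at each step is a function that is handed both itself and the environment, and returns the system to use at the next step.

    def initial_system(current_self, environment):
        # F(t): given itself and E(t), return F(t+1).
        if environment == "harsh":
            def cautious_system(current_self, environment):
                return current_self   # once cautious, stop changing
            return cautious_system
        return current_self           # otherwise keep the same system

    # F(t+1) = (F(t))( F(t), E(t) )
    system = initial_system
    for environment in ["benign", "benign", "harsh", "benign"]:
        system = system(system, environment)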

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/29 Ben Goertzel [EMAIL PROTECTED]: About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here One point is that if you have a system with N interconnected modules, you can approach RSI by having the system

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]: On Sat, Aug 30, 2008 at 10:06 AM, William Pearson [EMAIL PROTECTED] wrote: 2008/8/29 Ben Goertzel [EMAIL PROTECTED]: About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]: Isn't it an evolutionarily stable strategy for the modification system module to change to a state where it does not change itself? Not if the top-level goals are weighted toward long-term growth Let me give you a just so story and you can tell

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread William Pearson
2008/8/30 Ben Goertzel [EMAIL PROTECTED]: Don't they have to specify a specific state? Or am I reading http://opencog.org/wiki/OpenCogPrime:GoalAtom wrong? They don't have to specify a specific state. A goal could be some PredicateNode P expressing an abstract evaluation of state,

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]: On 08/28/2008 04:47 PM, Matt Mahoney wrote: The premise is that if humans can create agents with above human intelligence, then so can they. What I am questioning is whether agents at any intelligence level can do this. I don't believe that agents at any

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]: On 08/29/2008 01:29 PM, William Pearson wrote: 2008/8/29 j.k.[EMAIL PROTECTED]: An AGI with an intelligence the equivalent of a 99.-percentile human might be creatable, recognizable and testable by a human (or group of humans) of comparable

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread William Pearson
2008/8/25 Terren Suydam [EMAIL PROTECTED]: --- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam wrong. This ability might be an end in itself, the whole point of building an AI, when considered as applying to the dynamics of the world

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-23 Thread William Pearson
2008/8/23 Matt Mahoney [EMAIL PROTECTED]: Valentina Poletti [EMAIL PROTECTED] wrote: I was wondering why no-one had brought up the information-theoretic aspect of this yet. It has been studied. For example, Hutter proved that the optimal strategy of a rational goal seeking agent in an

Re: [agi] The Necessity of Embodiment

2008-08-11 Thread William Pearson
2008/8/11 Mike Tintner [EMAIL PROTECTED]: Will: thought you meant rational as applied to the system builder :P Consistency of systems is overrated, as far as I am concerned. Consistency is only important if the lack of it ever becomes exploited. A system that alters itself to be consistent after

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread William Pearson
2008/8/10 Mike Tintner [EMAIL PROTECTED]: Just as you are in a rational, specialist way picking off isolated features, so, similarly, rational, totalitarian thinkers used to object to the crazy, contradictory complications of the democratic, conflict system of decisionmaking by contrast with

[agi] Definition of Pattern?

2008-08-03 Thread William Pearson
Is there a mathematical wikipedia-sized definition of a Goertzelian pattern out there? It would make assessing the underpinnings of OpenCog Prime easier. Will Pearson

Re: [agi] The exact locus of the supposed 'complexity'

2008-08-03 Thread William Pearson
2008/8/3 Richard Loosemore [EMAIL PROTECTED]: I probably don't need to labor the rest of the story, because you have heard it before. If there is a brick wall between the overall behavior of the system and the design choices that go into it - if it is impossible to go from 'I want the

What does it do? useful in AGI? Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-23 Thread William Pearson
2008/7/22 Mike Archbold [EMAIL PROTECTED]: It looks to me to be borrowed from Aristotle's ethics. Back in my college days, I was trying to explain my project and the professor kept interrupting me to ask: What does it do? Tell me what it does. I don't understand what your system does.

Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-15 Thread William Pearson
2008/7/14 Terren Suydam [EMAIL PROTECTED]: Will, --- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote: Purpose and goal are not intrinsic to systems. I agree this is true with designed systems. And I would also say of evolved systems. My finger's purpose could equally well be said

Re: [agi] Is clustering fundamental?

2008-07-09 Thread William Pearson
2008/7/6 Abram Demski [EMAIL PROTECTED]: In fact, adding hidden predicates and entities in the case of Markov logic makes the space of models Turing-complete (and even bigger than that if higher-order logic is used). But if I am not mistaken the clustering used in the paper I refer to is not

Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-07 Thread William Pearson
2008/7/3 Steve Richfield [EMAIL PROTECTED]: William and Vladimir, IMHO this discussion is based entirely on the absence of any sort of interface spec. Such a spec is absolutely necessary for a large AGI project to ever succeed, and such a spec could (hopefully) be wrung out to at least avoid

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-04 Thread William Pearson
Terren, Remember when I said that a purpose is not the same thing as a goal? The purpose that the system might be said to have embedded is attempting to maximise a certain signal. This purpose presupposes no ontology. The fact that this signal is attached to a human means the system as a

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Terren Suydam [EMAIL PROTECTED]: --- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote: Evolution! I'm not saying your way can't work, just saying why I short cut where I do. Note a thing has a purpose if it is useful to apply the design stance* to it. There are two things

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]: On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote: 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]: On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote: They would get less credit from the human supervisor. Let me

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]: On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote: Nope. I don't include B in A because if A' is faulty it can cause problems to whatever is in the same vmprogram as it, by overwriting memory locations. A' being a separate

Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
Sorry about the long thread jack 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]: On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote: Because it is dealing with powerful stuff, when it gets it wrong it goes wrong powerfully. You could lock the experimental code away in a sand

Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]: On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote: Sorry about the long thread jack 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]: On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote: Because it is dealing

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
Sorry about the late reply. snip some stuff sorted out 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]: On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote: 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]: If internals are programmed by humans, why do you need automatic system

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Terren Suydam [EMAIL PROTECTED]: Mike, This is going too far. We can reconstruct to a considerable extent how humans think about problems - their conscious thoughts. Why is it going too far? I agree with you that we can reconstruct thinking, to a point. I notice you didn't say

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]: On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote: Okay let us clear things up. There are two things that need to be designed, a computer architecture or virtual machine and programs that form the initial set of programs within

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Abram Demski [EMAIL PROTECTED]: How do you assign credit to programs that are good at generating good children? I never directly assign credit, apart from the first stage. The rest of the credit assignment is handled by the vmprograms, er, programming. Particularly, could a program

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]: On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote: They would get less credit from the human supervisor. Let me expand on what I meant about the economic competition. Let us say vmprogram A makes a copy of itself, called

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]: Hi Will, --- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote: The only way to talk coherently about purpose within the computation is to simulate self-organized, embodied systems. I don't think you are quite getting my system. If you

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]: Ben, I agree, an evolved design has limits too, but the key difference between a contrived design and one that is allowed to evolve is that the evolved critter's intelligence is grounded in the context of its own 'experience', whereas the

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
Hello Terren, A Von Neumann computer is just a machine. Its only purpose is to compute. When you get into higher-level purpose, you have to go up a level to the stuff being computed. Even then, the purpose is in the mind of the programmer. What I don't see is why your simulation gets away

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]: On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote: I'm seeking to do something half way between what you suggest (from bacterial systems to human alife) and AI. I'd be curious to know whether you think it would suffer from

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]: On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote: 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]: It is a wrong level of organization: computing hardware is the physics of computation, it isn't meant to implement specific

Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread William Pearson
2008/6/27 Steve Richfield [EMAIL PROTECTED]: Russell and William, OK, I think that I am finally beginning to get it. No one here is really planning to do wonderful things that people can't reasonably do, though Russell has pointed out some improvements which I will comment on separately. I

Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-27 Thread William Pearson
I'm going to ignore the oversimplifications of a variety of people's positions. But no one in AGI knows how to design or instruct a machine to work without algorithms - or, to be more precise, *complete* algorithms. It's unthinkable - it seems like asking someone not to breathe... until, like

Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread William Pearson
2008/6/26 Steve Richfield [EMAIL PROTECTED]: Jiri previously noted that perhaps AGIs would best be used to manage the affairs of humans so that we can do as we please without bothering with the complex details of life. Of course, people and some (communist) governments now already perform

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Bob Mottram [EMAIL PROTECTED]: 2008/6/22 William Pearson [EMAIL PROTECTED]: 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]: Well since intelligence explosions haven't happened previously in our light cone, it can't be a simple physical pattern Probably the last intelligence explosion

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread William Pearson
2008/6/23 Vladimir Nesov [EMAIL PROTECTED]: On Mon, Jun 23, 2008 at 12:50 AM, William Pearson [EMAIL PROTECTED] wrote: 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]: Two questions: 1) Do you know enough to estimate which scenario is more likely? Well since intelligence explosions haven't

[agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
While SIAI fills that niche somewhat, it concentrates on the intelligence-explosion scenario. Is there a sufficient group of researchers/thinkers with a shared vision of the future of AI coherent enough to form an organisation? This organisation would discuss, explore and disseminate what can be

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
2008/6/22 Vladimir Nesov [EMAIL PROTECTED]: Two questions: 1) Do you know enough to estimate which scenario is more likely? Well since intelligence explosions haven't happened previously in our light cone, it can't be a simple physical pattern, so I think non-exploding intelligences have the

Re: [agi] Breaking Solomonoff induction (really)

2008-06-21 Thread William Pearson
2008/6/21 Wei Dai [EMAIL PROTECTED]: A different way to break Solomonoff Induction takes advantage of the fact that it restricts Bayesian reasoning to computable models. I wrote about this in is induction unformalizable? [2] on the everything mailing list. Abram Demski also made similar points

Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]: I'm getting several replies to this that indicate that people don't understand what a utility function is. If you are an AI (or a person) there will be occasions where you have to make choices. In fact, pretty much everything you do involves
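A tiny illustration of the point (the options and numbers are invented): a utility function just assigns a value to each available choice, and the agent takes whichever choice scores highest.

    # Invented example: an agent deciding how to spend the next hour.
    utility = {
        "read_paper": 7.0,
        "run_experiment": 9.0,
        "answer_email": 3.0,
    }

    # Having a utility function means choices are resolved by maximising it.
    best_choice = max(utility, key=utility.get)   # "run_experiment"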

Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]: On Thursday 12 June 2008 02:48:19 am, William Pearson wrote: The kinds of choices I am interested in designing for at the moment are should program X or program Y get control of this bit of memory or IRQ for the next time period. X and Y can

Re: [agi] Nirvana

2008-06-11 Thread William Pearson
2008/6/11 J Storrs Hall, PhD [EMAIL PROTECTED]: Vladimir, You seem to be assuming that there is some objective utility for which the AI's internal utility function is merely the indicator, and that if the indicator is changed it is thus objectively wrong and irrational. There are two

[Humour of some sort] Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread William Pearson
2008/6/4 Bob Mottram [EMAIL PROTECTED]: 2008/6/4 J Storrs Hall, PhD [EMAIL PROTECTED]: What is the rock thinking? T h i s i s w a a a y o f f t o p i c . . . Rocks are obviously superintelligences. By behaving like inert matter and letting us build monuments and gravel pathways

Re: [agi] Re: Merging - or: Multiplicity

2008-05-28 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]: Will: And you are part of the problem, insisting that an AGI should be tested by its ability to learn on its own and not get instruction/help from other agents, be they human or other artificial intelligences. I insist[ed] that an AGI should be tested on

Re: [agi] Design Phase Announce - VRRM project

2008-05-28 Thread William Pearson
2008/5/27 Steve Richfield [EMAIL PROTECTED]: William, This sounds like you should be announcing the analysis phase! Detailed comments follow... Design/research/analysis, call it what you will. On 5/26/08, William Pearson [EMAIL PROTECTED] wrote: VRRM - Virtual Reinforcement Resource

Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]: Actually, that's an absurdity. The whole story of evolution tells us that the problems of living in this world for any species of creature/intelligence at any level can only be solved by a SOCIETY of individuals. This whole dimension seems to be

[agi] Design Phase Announce - VRRM project

2008-05-26 Thread William Pearson
VRRM - Virtual Reinforcement Resource Managing Machine Overview This is a virtual machine designed to allow non-catastrophic unconstrained experimentation of programs in a system as close to the hardware as possible. This should allow the system to change as much as is possible and needed for
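A very loose sketch of the kind of loop such a machine might run (the names and the reward mechanism here are guesses from the description above, not the actual VRRM design): sandboxed programs compete for time slices, and their share is adjusted by a reinforcement signal.

    import random

    # Toy stand-ins for sandboxed vmprograms: each is just a callable here.
    programs = {"prog_a": lambda: random.random(),
                "prog_b": lambda: random.random()}
    credit = {name: 1.0 for name in programs}     # resource-buying credit

    for timestep in range(100):
        # Allocate the next time slice in proportion to accumulated credit.
        name = random.choices(list(programs),
                              weights=[credit[n] for n in programs])[0]
        output = programs[name]()                 # run inside the "sandbox"

        # Reinforcement signal; in the real design this would ultimately be
        # traceable to a human supervisor rather than the dummy score here.
        reward = output
        credit[name] = max(0.1, credit[name] + 0.1 * (reward - 0.5))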

Different problem types was Re: [agi] AGI and Wiki, was Understanding a sick puppy

2008-05-16 Thread William Pearson
2008/5/16 Steve Richfield [EMAIL PROTECTED]: Does anyone else here share my dream of a worldwide AI with all of the knowledge of the human race to support it - built with EXISTING Wikipedia and Dr. Eliza software and a little glue to hold it all together? I'm taking this as a jumping off

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread William Pearson
Matt Mahoney: I am not sure what you mean by AGI. I consider a measure of intelligence to be the degree to which goals are satisfied in a range of environments. It does not matter what the goals are. They may seem irrational to you. The goal of a smart bomb is to blow itself up at a
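That informal definition closely matches Legg and Hutter's universal intelligence measure, which weights the expected value an agent's policy \pi achieves in each computable environment \mu by the environment's simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of \mu, and V^{\pi}_{\mu} is the expected total reward the policy \pi obtains in \mu.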

Re: [agi] Self-maintaining Architecture first for AI

2008-05-11 Thread William Pearson
2008/5/11 Russell Wallace [EMAIL PROTECTED]: On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote: It depends on the system you are designing on. I think you can easily create as many types of sandbox as you want in programming language E (1) for example. If the principle

Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread William Pearson
2008/5/10 Richard Loosemore [EMAIL PROTECTED]: This is still quite ambiguous on a number of levels, so would it be possible for you to give us a road map of where the argument is going? At the moment I am not sure what the theme is. That is because I am still unsure as to what the later

[agi] Self-maintaining Architecture first for AI

2008-05-09 Thread William Pearson
After getting off completely on the wrong foot the last time I posted something, and not having had time to read the papers I should have, I have decided to start afresh and outline where I am coming from. I'll get around to doing a proper paper later. There are two possible modes for designing a

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread William Pearson
2008/4/27 Dr. Matthias Heger [EMAIL PROTECTED]: Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54 Yes, truly general AI is only possible in the case of infinite processing power, which is likely not physically realizable. How much generality can be achieved with

Re: [agi] How general can be and should be AGI?

2008-04-26 Thread William Pearson
2008/4/26 Dr. Matthias Heger [EMAIL PROTECTED]: How general should AGI be? My answer: as *potentially* general as possible, in a similar fashion to the way a UTM is potentially as general as possible, but with more purpose. There are plenty of problems you can define that don't need the halting

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread William Pearson
On 21/04/2008, Ed Porter [EMAIL PROTECTED] wrote: So when people are given a sentence such as the one you quoted about verbs, pronouns, and nouns, presuming they have some knowledge of most of the words in the sentence, they will understand the concept that verbs are doing words. This is

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-20 Thread William Pearson
On 19/04/2008, Ed Porter [EMAIL PROTECTED] wrote: WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? I'm not quite sure how to describe it, but this brief sketch will have to do until I get some more time. These may be in some new AI material, but I haven't had the chance to read up much recently.

Re: [agi] The resource allocation problem

2008-04-05 Thread William Pearson
On 05/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote: On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: This question supposes a specific kind of architecture, where these things are in some sense

Re: [agi] The resource allocation problem

2008-04-04 Thread William Pearson
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote: The resource allocation problem and why it needs to be solved first How much memory and processing power should you apply to the following things

[agi] The resource allocation problem

2008-04-01 Thread William Pearson
The resource allocation problem and why it needs to be solved first. How much memory and processing power should you apply to the following things: visual processing, reasoning, sound processing, seeing past experiences and how they apply to the current one, searching for new ways of doing things
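One naive way to make the question concrete (purely illustrative; the utility estimates are invented): allocate a fixed budget in proportion to each area's estimated usefulness. The real difficulty, of course, is that these estimates are unknown in advance and change as the system learns.

    total_memory_mb = 4096

    # Invented estimates of how useful extra resources are to each area.
    estimated_utility = {
        "visual_processing": 5.0,
        "reasoning": 3.0,
        "sound_processing": 2.0,
        "recalling_past_experiences": 4.0,
        "searching_for_new_methods": 1.0,
    }

    total = sum(estimated_utility.values())
    allocation = {area: total_memory_mb * u / total
                  for area, u in estimated_utility.items()}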

Re: [agi] Instead of an AGI Textbook

2008-03-31 Thread William Pearson
On 26/03/2008, Ben Goertzel [EMAIL PROTECTED] wrote: Hi all, A lot of students email me asking me what to read to get up to speed on AGI. So I started a wiki page called Instead of an AGI Textbook, http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics I've

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote: Although I sympathize with some of Hawkins' general ideas about unsupervised learning, his current HTM framework is unimpressive in comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional nets and the

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote: Intelligence is not *only* about the modalities of the data you get, but modalities are certainly important. A deafblind person can still learn a lot about the world with taste, smell, and touch, but the senses one has access to defines

Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: Simple systems can be computationally universal, so it's not an issue in itself. On the other hand, no learning algorithm is universal; there are always distributions that given algorithms will learn miserably. The problem is to find a

Re: I know, I KNOW :-) WAS Re: [agi] The Effect of Application of an Idea

2008-03-26 Thread William Pearson
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote: First a riddle: What can be all learning algorithms, but is none? A human being! Well, my answer was a common PC, which I hope is more illuminating because we know it well. But a human being works, as does any future AI design, as far as I am

Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread William Pearson
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote: To try to understand what I am talking about, start by imagining a simulation of some physical operation, like a part of a complex factory in a Sim City kind of game. In this kind of high-level model no one would ever imagine all of the

Re: [agi] Flies Neural Networks

2008-03-16 Thread William Pearson
On 16/03/2008, Ed Porter [EMAIL PROTECTED] wrote: I am not an expert on neural nets, but from my limited understanding it is far from clear exactly what the new insight into neural nets referred to in this article is, other than that the timing of neuron firings is important in the brain, which

[agi] AGI 08 blogging?

2008-03-02 Thread William Pearson
Anyone blogging what they are finding interesting in AGI 08? Will Pearson

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: Note I want something different than computational universality. E.g. Von Neumann architectures are generally programmable, Harvard architectures aren't. As they can't

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote: You must first define its existing skills, then define the new challenge with some degree of precision - then explain the principles by which it will extend its skills. It's those principles of extension/generalization that are the

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote: Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar situation and domain BY ITSELF. And your FIRST and only response to the problem you set was to say: I'll get someone to tell it what

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 29/02/2008, Abram Demski [EMAIL PROTECTED] wrote: I'm an undergrad who's been lurking here for about a year. It seems to me that many people on this list take Solomonoff Induction to be the ideal learning technique (for unrestricted computational resources). I'm wondering what justification
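For reference, the quantity usually being appealed to in these discussions is Solomonoff's universal prior, which weights every program that reproduces the observed data by its length:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where U is a universal prefix machine, p ranges over programs whose output begins with the observed string x, and |p| is the length of p in bits, so shorter hypotheses dominate the prediction.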

Re: [agi] Solomonoff Induction Question

2008-03-01 Thread William Pearson
On 01/03/2008, Jey Kottalam [EMAIL PROTECTED] wrote: On Sat, Mar 1, 2008 at 3:10 AM, William Pearson [EMAIL PROTECTED] wrote: Keeping the same general shape of the system (trying to account for all the detail) means we are likely to overfit, due to trying to model systems
