Re: [agi] Is computation a good concept for describing AI software?

2005-05-23 Thread Russell Wallace
On 5/21/05, Ben Goertzel [EMAIL PROTECTED] wrote: Yeah. In practice this is the biggest constraint on Novamente's performance. We have a big AtomTable data structure in memory and getting information in and out of it happens constantly and takes the bulk of processor time. This of course is
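
A minimal sketch of the access pattern being described — hypothetical Python, not Novamente's actual C++ AtomTable: a large in-memory table of atoms with a secondary index, where nearly all inference work reduces to lookups and updates against the table.

    from collections import defaultdict

    class Atom:
        """One node/link in the knowledge store (toy version)."""
        def __init__(self, atom_type, name, strength=0.5):
            self.atom_type = atom_type
            self.name = name
            self.strength = strength

    class AtomTable:
        """Big in-memory table; nearly all work is gets/puts against it."""
        def __init__(self):
            self.atoms = {}                  # id -> Atom
            self.by_type = defaultdict(set)  # secondary index: type -> ids
            self.next_id = 0

        def add(self, atom):
            atom_id = self.next_id
            self.next_id += 1
            self.atoms[atom_id] = atom
            self.by_type[atom.atom_type].add(atom_id)
            return atom_id

        def of_type(self, atom_type):
            # Inference loops issue queries like this constantly, which is
            # why table access dominates total processor time.
            return [self.atoms[i] for i in self.by_type[atom_type]]

    table = AtomTable()
    table.add(Atom("ConceptNode", "cat"))
    print(table.of_type("ConceptNode"))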

Re: [agi] My strategy. A fun look at the FAI problem

2005-06-23 Thread Russell Wallace
On 6/23/05, Marc Geddes [EMAIL PROTECTED] wrote: But if I'm right... Who will turn the golden key and set us free? Be careful what you wish for. - Russell

Re: [agi] /. [Unleashing the Power of the Cell Broadband Engine]

2005-11-27 Thread Russell Wallace
On 11/27/05, Lukasz Kaiser [EMAIL PROTECTED] wrote: I think you are wrong here, Ben, or at least you are missing one serious problem. As far as I know the GP fitness evaluation code is largely branching stuff that can hardly be vectorized, and the SPEs have weak or no branch prediction at all and

Re: [agi] Hardware and software continues to evolve...

2006-03-24 Thread Russell Wallace
On 3/24/06, Eugen Leitl [EMAIL PROTECTED] wrote: Arguably even RAM is useless for realtime, since you need ~a second to stream through a node's memory. And that's just one iteration. And it's not random-access, so it can get some 10-20 times as slow. Now if you could process the entire ~GByte chunk in
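
The back-of-envelope arithmetic behind the ~second figure (illustrative numbers, not Leitl's exact ones): a node with, say, 4 GB of RAM and ~4 GB/s sustained memory bandwidth needs about 4 GB / (4 GB/s) = 1 s just to stream through its memory once, so any algorithm that must touch all state per iteration is capped at ~1 iteration per second regardless of CPU speed.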

Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Russell Wallace
On 5/10/06, Bill Hibbard [EMAIL PROTECTED] wrote: The Singularity Summit should include all points of view, including advocates for regulation of intelligent machines. It will weaken the Summit to exclude this point of view. Then it would be better if the Summit were not held at all. Nanotech, AGI

Re: [agi] Robotic Turtles and The Future of AI

2006-05-13 Thread Russell Wallace
I like the disco music! ^.^ Good video, very funny; Zeb obviously has talent. On 5/13/06, Ben Goertzel [EMAIL PROTECTED] wrote: Hi, If any of you have 14 minutes to spare for some silliness, my son Zeb (age 12) has made a brief animated movie about how two of my colleagues and I create a superhuman

Re: [agi] procedural vs declarative knowledge

2006-06-05 Thread Russell Wallace
On 6/3/06, Ben Goertzel [EMAIL PROTECTED] wrote: It's a little more than that (more than just speed optimization), because the declarative knowledge may be uncertain, but the procedure derived from it will often be more determinate... I'm curious - how can a procedure be more certain than the

Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Russell Wallace
On 6/7/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: But - as I said in the chapter conclusion - imagine how careful you would have to be if you wanted to survive as an *individual*; and that is how careful humanity must be to survive existential risks. *nods* I know where you're coming from

Re: [agi] list vs. forum

2006-06-10 Thread Russell Wallace
On 6/10/06, sanjay padmane [EMAIL PROTECTED] wrote: I feel you should discontinue the list. That will force people to post there. I'm not using the forum only because no one else is using it (or very few), and everyone is perhaps doing the same. And I feel the forum should be discontinued, so as to

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, Ben Goertzel [EMAIL PROTECTED] wrote: The issue is: how might NNs effectively represent abstract knowledge? With difficulty! Okay, to put it in a less facetious-sounding way: It is worth bearing in mind that biological neural nets are _very bad_ at syntactic symbol manipulation;

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, Eugen Leitl [EMAIL PROTECTED] wrote: Representing and manipulating formal systems is a very recent component in the fitness function, and hence not well-optimized. True; but I will claim that no matter how much you optimize a biological neural net, it will always have characteristics

[agi] Architecture

2006-06-13 Thread Russell Wallace
On 6/13/06, Eugen Leitl [EMAIL PROTECTED] wrote: You can't actually rewire the circuit, so you have to switch state which represents the circuit. It's easier if you embrace the model of dynamically traced-out circuitry in a computational substrate. Very few things are instant in a current

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, [EMAIL PROTECTED] wrote: This thread has completely missed Ben's original point, surely. It diverged certainly, which is why I changed the subject heading for my latest reply. It has nothing to do with whether neurons are faster/better/whatever than digital circuits,

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, [EMAIL PROTECTED] wrote: What I said in my previous reply was that something very like neural nets (with all the beneficial features for which people got interested in NNs in the first place) *can* do syntax, and all forms of abstract representation. Clearly they can -

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/14/06, [EMAIL PROTECTED] wrote: Russell Wallace wrote: Has anyone yet made an artificial NN or anything like one handle syntax? Uhhh: did you read my first post on this thread? Yes; you appear to be saying that as far as you know nobody

[agi] General problem solving

2006-07-06 Thread Russell Wallace
On 7/6/06, William Pearson [EMAIL PROTECTED] wrote: This is an interesting term. If we could define what it means precisely we would be a long way toward building a useful system. What do you think the closest system humanity has created to a pgpps is? The Internet. A generic PC almost fulfils the

Re: [agi] General problem solving

2006-07-09 Thread Russell Wallace
On 7/8/06, William Pearson [EMAIL PROTECTED] wrote: To resolve this conflict requires us to have some form of knowledge of how well the processes have been performing. If X has been performing better than Y we can hope that X is functioning correctly and allow X to correct Y. How can we know whether

Re: [agi] [META] Is there anything we can do to keep junk out of the AGI Forum?

2006-07-26 Thread Russell Wallace
On 7/26/06, Richard Loosemore [EMAIL PROTECTED] wrote: I am beginning to wonder if this forum would be better off with a restricted membership policy. Well, we only get stuff like that once in a blue moon, so I don't see a major problem. If it started being every day then yeah, I'd agree there was a

Re: [agi] fuzzy logic necessary?

2006-08-03 Thread Russell Wallace
On 8/3/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: When you think something is more likely or less likely, you're translating a feeling into English. The English translation doesn't involve verbal probabilities like 0.6 or 0.8 - the syllables "probability zero point eight" don't flow through your

Re: [agi] fuzzy logic necessary?

2006-08-04 Thread Russell Wallace
On 8/4/06, Yan King Yin [EMAIL PROTECTED] wrote: Now, figuring out all the heuristical NTV / symbolic qualifier's update rules, such that an AGI will always be internally consistent, and provably increasing in accuracy, is a very non-trivial task. Well indeed it is of course impossible, no matter

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Russell Wallace
On 8/12/06, Matt Mahoney [EMAIL PROTECTED] wrote: In order to compress text well, the compressor must be able to estimate probabilities over text strings, i.e. predict text. Um no, the compressor doesn't need to predict anything - it has the entire file already at hand. The _de_compressor would

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Russell Wallace
On 8/12/06, Matt Mahoney [EMAIL PROTECTED] wrote: First, the compression problem is not in NP. The general problem of encoding strings as the smallest programs to output them is undecidable. But as I said, it becomes NP when there's an upper limit to decompression time. Second, given a model,
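
To spell out the complexity claims being traded here (a gloss, not either poster's exact formalization): finding the shortest program that outputs a given string x is the Kolmogorov-complexity problem, which is uncomputable. Add a decompression-time bound t and the decision version - is there a program of length <= k that outputs x within t steps? - becomes decidable, since a candidate program can be verified by simply running it for at most t steps; that bounded verification is what places the time-limited variant in NP (with t given in unary).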

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Russell Wallace
On 8/13/06, Matt Mahoney [EMAIL PROTECTED] wrote: Whether or not a compressor implements a model as a predictor or not is irrelevant. Modeling the entire input at once is mathematically equivalent to predicting successive symbols. Even if you think you are not modeling, you are. If you design a
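
The equivalence Mahoney asserts is the chain rule of probability: any model that assigns a probability to the whole string thereby defines successive-symbol predictions, and vice versa,

    p(x_1, \dots, x_n) = \prod_{i=1}^{n} p(x_i \mid x_1, \dots, x_{i-1})

and an arithmetic coder driven by either side of the identity emits the same code length, about -log_2 p(x_1, ..., x_n) bits.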

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-12 Thread Russell Wallace
On 8/13/06, Matt Mahoney [EMAIL PROTECTED] wrote: There is no knowledge that you can demonstrate verbally that cannot also be learned verbally. An unusual claim... do you mean all knowledge can be learned verbally, or do you think there are some kinds of knowledge that cannot be demonstrated

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-20 Thread Russell Wallace
On 8/20/06, Eugen Leitl [EMAIL PROTECTED] wrote: Language can be used to serialize and transfer state of cloned objects. This doesn't mean human experts know their inner state, or can freeze and serialize it, and that other instances can instantiate such serialized state. Exactly.

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote: An assumption of mine that can be debated perhaps in a separate message thread, is that there should be effectively only one AGI, allowing for a federation of AGIs contrived to prevent war between them. I've explained my opinion of the various AI

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote: Yes, but suppose the government of China decides to download an open source AGI and install it on one or more of their Top 500 supercomputer facilities? Suppose the government of China decide to get hold of CAD, simulation software etc and install it

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote: I assume that you fully understand the benefits and business case of an open source project, and that your point is made even with the former fully considered. Yes. For that matter, my answer would be the same if you proposed a closed source project

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, William Pearson [EMAIL PROTECTED] wrote: I was thinking more long term than you. I agree in the first phase we can't rely on it being able to translate different information from different AGIs. But to start with I wouldn't attempt the Google killer, merely the Outlook killer. Okay, but... We

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, Bill Hibbard [EMAIL PROTECTED] wrote: By open source distribution you are expressing optimism about human nature, and your developer community will mostly justify that optimism. The best approach for the few who disappoint you is to simply ignore them. I agree. When I suggested a no

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, William Pearson [EMAIL PROTECTED] wrote: Things like hooking it up to low quality sound and video feeds and have it judge by posture/expression/time of day what the most useful piece of information in the RSS feeds/email etc to provide to the user is. We would have to program a large

Re: [agi] AGI open source license

2006-08-28 Thread Russell Wallace
On 8/28/06, William Pearson [EMAIL PROTECTED] wrote: Possibly I am not explaining things clearly enough. One of my motivations for developing AI, apart from the challenge, is to enable me to get the information I need, when I need it. As a lot of the power I have in this world is through what I buy,

Re: [agi] Why so few AGI projects?

2006-09-13 Thread Russell Wallace
On 9/13/06, Joshua Fox [EMAIL PROTECTED] wrote: I'd like to raise a FAQ: Why is so little AGI research and development being done? Time and money. AGI takes too long. When people spend several years on something for no result whatsoever, they quite reasonably find something more productive to do

Re: [agi] Why so few AGI projects?

2006-09-13 Thread Russell Wallace
On 9/13/06, Stephen Reed [EMAIL PROTECTED] wrote: I would add that previous more-or-less general AI projects have not greatly exceeded their modest expectations. So given this experience perhaps there is a tendency among potential sponsors to classify new AGI projects as crackpot schemes. And let's be

Re: [agi] Failure scenarios

2006-09-25 Thread Russell Wallace
On 9/26/06, Ben Goertzel [EMAIL PROTECTED] wrote: But, what I would say in response to you is: If you presume a **bad** KR format, you can't match it with a learning mechanism that reliably fills one's knowledge repository with knowledge... If you presume a sufficiently and appropriately flexible KR

Re: [agi] Voodoo meta-learning and knowledge representations

2006-09-27 Thread Russell Wallace
On 9/27/06, William Pearson [EMAIL PROTECTED] wrote: You could test this by looking at the brain regions that people use when solving problems. If this changes on a person by person basis, then it is likely that some form of meta-learning of the sort I am interested in occurs in humans. I vaguely

Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Russell Wallace
On 10/31/06, John Scanlon [EMAIL PROTECTED] wrote: One of the major obstacles to real AI is the belief that knowledge of a natural language is necessary for intelligence. A human-level intelligent system should be expected to have the ability to learn a natural language, but it is not

Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Russell Wallace
On 11/4/06, Ben Goertzel [EMAIL PROTECTED] wrote: I of course don't think that SHRDLU vs. AGISim is a fair comparison. Agreed. SHRDLU didn't even try to solve the real problems - for the simple and sufficient reason that it was impossible to make a credible attempt at such on the hardware of the

Re: [agi] Design Complexity

2006-11-11 Thread Russell Wallace
On 11/12/06, Michael Wilson [EMAIL PROTECTED] wrote: Naturally, no one is prepared to admit that they personally might be in this category - at most, one assigns it a trivial probability. :) Though some are prepared to admit they personally have been in this category in the past :) I know this sort

Re: [agi] RSI - What is it and how fast?

2006-11-16 Thread Russell Wallace
On 11/16/06, Hank Conn [EMAIL PROTECTED] wrote: How fast could RSI plausibly happen? Is RSI inevitable / how soon will it be? How do we truly maximize the benefit to humanity? The concept is unfortunately based on a category error: intelligence (in the operational sense of ability to get

Re: [agi] Understanding Natural Language

2006-11-29 Thread Russell Wallace
On 11/28/06, Philip Goetz [EMAIL PROTECTED] wrote: I see evidence of dimensionality reduction by humans in the fact that adopting a viewpoint has such a strong effect on the kind of information a person is able to absorb. In conversations about politics or religion, I often find ideas that to

Re: [agi] Project proposal: MindPixel 2

2007-01-28 Thread Russell Wallace
On 1/28/07, Eric Baum [EMAIL PROTECTED] wrote: How do you respond to the 20-question argument that there are only of order 2^20 knowledge items? The granularity of knowledge items for 20 Questions and the number 20 are specifically chosen to match each other, to make the game fair. While

Re: [agi] Project proposal: MindPixel 2

2007-01-28 Thread Russell Wallace
On 1/28/07, Eric Baum [EMAIL PROTECTED] wrote: Have you ever played 20 questions? Yep. In the games I've played, Alice in Wonderland would be a fine topic. I admit it's surprising that one plays as well as one does. Interesting, and surprising, but I don't draw the same conclusion as you

Re: [agi] Optimality of using probability

2007-02-03 Thread Russell Wallace
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote: My approach was to formulate a notion of general intelligence as achieving a complex goal, and then ask something like: Given what resource levels R and goals G, is approximating probability theory the best way to approximately achieve G using

Re: [agi] Optimality of using probability

2007-02-03 Thread Russell Wallace
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote: I do mean A, but I don't think it is so trivial to prove, even though it is conceptually obvious... Well, it's about the perspective one is taking. If your criterion for defining whether a result is correct is whether it agrees with

Re: [agi] Optimality of using probability

2007-02-03 Thread Russell Wallace
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote: See, my definition of obeying probability theory had to do with the consistency-with-probability-theory of the system's **local actions**. Aren't all actions local, unless you're Neo? Maybe I'm misunderstanding you, can you give a concrete

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Russell Wallace
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote: Hi Russell, OK, I'll try to specify my ideas in this regard more clearly. Bear in mind though that there are many ways to formalize an intuition, and the style of formalization I'm suggesting here may or may not be the right one. With this

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Russell Wallace
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote: However, I'm not sure it helps with the quite hard task of coming up with a proof of the hypothesis, or a fully rigorous formulation ;-) I guess I'd better leave that to the professionals :)

Re: [agi] Betting and multiple-component truth values

2007-02-05 Thread Russell Wallace
On 2/5/07, Pei Wang [EMAIL PROTECTED] wrote: Sorry that now I don't have the time for long discussions, but a brief scan of your message reminds me of the Ellsberg paradox (see http://en.wikipedia.org/wiki/Ellsberg_paradox). He used betting examples to show that a probability is not enough, and a
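
For readers who don't follow the link, the standard form of the paradox: an urn holds 30 red balls and 60 balls that are black or yellow in unknown proportion. Most people prefer betting on red over black, implying P(red) > P(black), yet also prefer betting on black-or-yellow over red-or-yellow, implying P(black) + P(yellow) > P(red) + P(yellow), i.e. P(black) > P(red). No single probability assignment satisfies both preferences, which is Ellsberg's evidence that a point probability alone cannot capture ambiguity.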

Re: [agi] Betting and multiple-component truth values

2007-02-05 Thread Russell Wallace
On 2/5/07, gts [EMAIL PROTECTED] wrote: I wonder how a logically-omniscient player might be defined. Will you please explain your meaning? An entity capable of proving or disproving the truth of any statement which has a logical proof or disproof. (Requires infinite computing power.) I

Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Russell Wallace
On 2/18/07, Chuck Esterbrook [EMAIL PROTECTED] wrote: You are absolutely... correct. I think the utility of existing database servers is very underappreciated in academia, and many AI researchers are from academia or working on academia-style projects (gov't research grants or work to support

Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Russell Wallace
On 2/20/07, Mark Waser [EMAIL PROTECTED] wrote: Realistically, you'll have an AGI before the environment is completed . . . . I think you slightly underestimate the difficulty of creating AGI ;) Personally, I'd start with a commercial extensible development environment and a

Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Russell Wallace
On 2/20/07, Mark Waser [EMAIL PROTECTED] wrote: I think that you grossly underestimate the magnitude of what is being proposed because the tag "development environment" has been attached to it. :-) *grin* No, I think it's a big project, at least the version I have in mind (on my to-do list

Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-20 Thread Russell Wallace
On 2/20/07, Ben Goertzel [EMAIL PROTECTED] wrote: Novamente works fine on 64-bit machines -- but it took nearly a man-month of work to 64-bit-ify the code, which was done back in 2004... I guess I stand corrected on that one!

Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-21 Thread Russell Wallace
On 2/21/07, Mark Waser [EMAIL PROTECTED] wrote: and I think that there are a whole bunch more of similar cases that add up and add up and add up. Did you have to write code to load and save from memory to disk (both for swapping and semi-permanent purposes)? Are you confident that you know

Re: [agi] Using object oriented databases for mapping structures

2007-02-25 Thread Russell Wallace
On 2/25/07, Aki Iskandar [EMAIL PROTECTED] wrote: For those leveraging existing datastores, has anyone had any success with using object oriented databases, as opposed to relational databases, for mapping the data structures in your AI / AGI programs - and if so, what textbook algorithms have

Re: [agi] Using object oriented databases for mapping structures

2007-02-25 Thread Russell Wallace
On 2/26/07, Aki Iskandar [EMAIL PROTECTED] wrote: Since OODBs promise to do the same thing as ORMs, from the developer's point of reference, I thought I'd try to leverage them. Sure, you can use either. My point is simply that relational is better than OO, so using an OODB or ORM, while

Re: [agi] Using object oriented databases for mapping structures

2007-02-26 Thread Russell Wallace
On 2/26/07, Aki Iskandar [EMAIL PROTECTED] wrote: Stemming from software design patterns, with respect to OOP, the need exists more often than not to persist, or serialize, object state. Well, yes and no. There's an important difference between internal machinery and domain knowledge. OO

Re: [agi] general weak ai

2007-03-07 Thread Russell Wallace
On 3/7/07, Eugen Leitl [EMAIL PROTECTED] wrote: Anything vaguely physical, and doing long-range interactions by iteration of overlapping local neighbourhoods. It's not much of a constraint. Of course, you have to add more data to the volume element, depending on what you want to do. I'm

Re: [agi] general weak ai

2007-03-07 Thread Russell Wallace
On 3/7/07, Ben Goertzel [EMAIL PROTECTED] wrote: A more interesting question to think about, rather than how to represent a story in a formal language, is: How would you convince yourself that your AGI actually understood a story? What kind of question-answers or behaviors would convince you

Re: [agi] general weak ai

2007-03-09 Thread Russell Wallace
On 3/9/07, Charles D Hixson [EMAIL PROTECTED] wrote: Russell Wallace wrote: To test whether a program understands a story, start by having it generate an animated movie of the story. Nearly every person I know would

Re: [agi] general weak ai

2007-03-09 Thread Russell Wallace
On 3/9/07, Charles D Hixson [EMAIL PROTECTED] wrote: You aren't requesting it of the person, you're requesting it of the AI. In other words, you are insisting that the AI demonstrate more capabilities (in a restricted domain, admittedly) than an average person before you will admit that it is

Re: [agi] The Missing Piece

2007-03-10 Thread Russell Wallace
On 3/10/07, Ben Goertzel [EMAIL PROTECTED] wrote: In a sense we do, but it's not implemented in the brain as an actual sim world with a physics engine and so forth. Yes it is, or at least a reasonable facsimile thereof. ... our internal sim world is a lot less physically accurate (more

Re: [agi] My proposal for an AGI agenda

2007-03-11 Thread Russell Wallace
On 3/11/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: The main problem with this is that it seems to assume that there is One True Knowledge Representation in the system. In the automatic microprocessor design stuff I was doing in the 90s, there were up to 50 different levels of

[agi] Logical representation

2007-03-11 Thread Russell Wallace
On 3/11/07, Ben Goertzel [EMAIL PROTECTED] wrote: YES -- anything can be represented in logic. The question is whether this is a useful representational style, in the sense that it matches up with effective learning algorithms!!! In some domains it is, in others not. Represented in logic

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace
On 3/12/07, Richard Loosemore [EMAIL PROTECTED] wrote: I'm not sure if you're just summarizing what someone would mean if they were talking about 'logical representation,' or advocating it. I'm saying there are 5 different things someone might mean, and going on to advocate 3.5 of them while

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace
Ah! That makes your position much clearer, thanks. To paraphrase to make sure I understand you, the reason you don't regard human readability as a critical feature is that you're of the seed AI school of thought that says we don't need to do large-scale engineering, we just need to solve the

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace
On 3/12/07, Eugen Leitl [EMAIL PROTECTED] wrote: You don't need the entire four billion years since you don't have to start from scratch (animals, ahem), and you can put things on fast-forward, and select the fitness function for a heavy bias towards intelligence. You're also a couple dozen

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace
On 3/12/07, Richard Loosemore [EMAIL PROTECTED] wrote: I'm still not quite sure if what I said came across clearly, because some of what you just said is so far away from what I intended that I have to make some kind of response. Indeed it seems I'm still not understanding you... I thought

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace
On 3/12/07, Eugen Leitl [EMAIL PROTECTED] wrote: The first and biggest step is to get your system to learn how to evolve. I understand many do not yet see this as a problem at all. Indeed! I don't understand why you moved away from it (it's the only game in town), but if you have a

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace
On 3/13/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: 1. re #4: As an example, the logical term "chair" is defined, as a logical rule, by other logical terms like edges, planes, blocks, etc. Sensory perception is a process of *applying* such rules; algorithmically this is known as *pattern

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
On 3/13/07, Richard Loosemore [EMAIL PROTECTED] wrote: Good god no. It *is* the program. It is the architecture of an AI. So it is part of the AI then, like I said. Regarding the use of readable names. The atomic units of knowledge in the resulting system (the symbols, concepts, logical

Re: [agi] My proposal for an AGI agenda

2007-03-13 Thread Russell Wallace
On 3/13/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: But the bottom line problem for using FOPC (or whatever) to represent the world is not that it's computationally incapable of it -- it's Turing complete, after all -- but that it's seductively easy to write propositions with symbols that

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
Richard: I'm not sure why it's been so extraordinarily difficult to communicate, but from what you're saying here it seems to be back to square one again; continuing to try to communicate in abstract English about this topic doesn't appear to be a productive use of either of our time at this

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
On 3/13/07, Mark Waser [EMAIL PROTECTED] wrote: Russell is conflating concept names (a.k.a. symbols) and variables. And a longer list of other things than I care to enumerate. The distinction I've been making all along is between human-readable formats like predicate calculus, SQL and XML,

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
On 3/13/07, Mark Waser [EMAIL PROTECTED] wrote: Human-readable is an interesting term . . . . Is a picture human-readable? I think that you would argue not (in this context, obviously). Well, a picture is (in some domains) human-readable - and I think tools that display certain kinds of

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
On 3/13/07, Richard Loosemore [EMAIL PROTECTED] wrote: I would cautiously (and with due respect) suggest that IF you have been tempted to categorize this discussion as [Loosemore talking vague nonsense again], you might want to resist that temptation. The more concrete stuff, when it arrives,

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
On 3/13/07, Mark Waser [EMAIL PROTECTED] wrote: Do the many modules have to have one canonical format for representing content -- or do they have to have one canonical format for *communicating* content? I think that you need to resign yourself to the fact that many of the modules are going to

Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace
On 3/13/07, Mark Waser [EMAIL PROTECTED] wrote: Hmmm, the dictionary definition of "semantic" is "of, pertaining to, or arising from the different meanings of words or other symbols" -- which I take to be the *meaning* or *communication* level, which certainly can be different from the *working*

Re: [agi] Logical representation

2007-03-14 Thread Russell Wallace
On 3/14/07, David Clark [EMAIL PROTECTED] wrote: an AI system consisting of many modules has to have one canonical format for representing content WHY? Because for A to talk to B, they have to use a language/format/representation that both of them understand. By far the most efficient way
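
A toy sketch of the argument (hypothetical module names, nobody's actual design): with one canonical message format, N modules need N encoders/decoders; with pairwise private formats they need on the order of N^2 translators.

    from dataclasses import dataclass

    @dataclass
    class Message:                 # the single canonical format
        sender: str
        content: dict              # e.g. {"predicate": "sees", "args": ["cat", "mat"]}

    class Module:
        def __init__(self, name):
            self.name = name
            self.inbox = []

        def send(self, other, content):
            # any module can talk to any other with no custom translator
            other.inbox.append(Message(self.name, content))

    vision, planner = Module("vision"), Module("planner")
    vision.send(planner, {"predicate": "sees", "args": ["cat", "mat"]})
    print(planner.inbox[0].content)   # planner reads vision's output directly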

Re: [agi] Logical representation

2007-03-14 Thread Russell Wallace
On 3/14/07, David Clark [EMAIL PROTECTED] wrote: I think that our minds have many systems that, at least at the higher levels, have different data representations. These systems in our minds seem to communicate with each other in words. I don't think it's as simple as that, but in any case

Re: [agi] My proposal for an AGI agenda

2007-03-18 Thread Russell Wallace
On 3/18/07, Charles D Hixson [EMAIL PROTECTED] wrote: Perhaps it would be best to have, say, four different formats for different classes of problems (with the understanding that most problems are mixed). E.g., some classes of problems are best represented via a priority queue, others via a

Re: [agi] My proposal for an AGI agenda

2007-03-18 Thread Russell Wallace
On 3/19/07, Charles D Hixson [EMAIL PROTECTED] wrote: Yes, datawise a priority queue is just a set of things with priority numbers attached and the alpha-beta algorithm is, well, an algorithm, but neither of those is propositional logic. Yes, you CAN represent them as logic (you can represent

[agi] Emergence

2007-03-19 Thread Russell Wallace
On 3/19/07, Ben Goertzel [EMAIL PROTECTED] wrote: Minsky is not big on emergence This is an interesting point. I'm not big on emergence, not in artificial systems anyway. It produced us, sure, but that's one planet with intelligence out of a zillion universes without it. Emergence is what

Re: [agi] Emergence

2007-03-19 Thread Russell Wallace
On 3/19/07, Ben Goertzel [EMAIL PROTECTED] wrote: According to the above definition, it is quite possible to engineer systems with emergent properties, and to prove things about the constraints on emergent system properties as well. Sure. I'm not claiming it's impossible (see the

Re: [agi] structure of the mind

2007-03-20 Thread Russell Wallace
On 3/20/07, Eric Baum [EMAIL PROTECTED] wrote: This is the problem with Wallace's complaints. You actually want the machine [to do] something unpredicted, namely the right thing in unpredicted circumstances. It's true that it's hard and expensive to engineer/find an underlying compact

Re: [agi] My proposal for an AGI agenda

2007-03-24 Thread Russell Wallace
On 3/24/07, John Rose [EMAIL PROTECTED] wrote: If you could imagine a really, really super advanced language created by super-intelligent giant brained aliens (seriously) or created by their alien supercomputer, what would that language be like? Would it be a mishmash of lowest common

Re: [agi] My proposal for an AGI agenda

2007-03-25 Thread Russell Wallace
On 3/25/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Some form of unifying framework, whatever that is, is of course desirable. But the problem is how to get people to *agree* to work within your framework (or any particular one). Consider the problem of getting everyone to agree to use

Re: [agi] Why C++ ?

2007-03-26 Thread Russell Wallace
On 3/26/07, rooftop8000 [EMAIL PROTECTED] wrote: - very hard to write code that writes code compared to LISP, Ruby etc - very hard to safely run code, I think. In Java you have security mechanisms to execute code in safe sandboxes; in C++ any array access can just run outside its bounds. But for AGI
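
To make rooftop8000's first point concrete, in a dynamic language code that writes code is a few lines (a toy illustration, with Python standing in for LISP/Ruby):

    # Build a new function as a string at runtime, then compile and run it.
    source = "def learned_rule(x):\n    return x * 2 + 1\n"
    namespace = {}
    exec(source, namespace)                # Python's exec, akin to Lisp's eval
    print(namespace["learned_rule"](20))   # -> 41

The C++ equivalent means generating source, invoking a compiler, and dynamically loading the result (or embedding an interpreter), and nothing bounds the generated code's memory accesses.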

Re: [agi] AGI interests

2007-03-26 Thread Russell Wallace
I don't believe AI in the sense of a self-willed mind is going to happen; fortunately, it doesn't need to. The two problems I want to help solve are the global loss of fifty million lives a year, and the difficulty in living in the 99.999...999% of the universe that isn't Earth. Each of these is

Re: [agi] AGI interests

2007-03-28 Thread Russell Wallace
On 3/27/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: One thing that I really don't understand is why so many people I've talked to about AGI insist on working for free. Do you have a source of finance? This is not a rhetorical question; if you have, I'd be very interested in working for

Re: [agi] small code small hardware

2007-03-28 Thread Russell Wallace
On 3/28/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote: Kevin, you're most probably right there. But remember that us small code people *have* to have this belief in order to justify ourselves working as individuals / tiny teams often during spare time and snatched moments. A very good

Re: [agi] AGI interests

2007-03-29 Thread Russell Wallace
On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Yes, I think I have seed capital, that is enough to get a conventional startup started. Also I believe getting subsequent VC funding is not that difficult. That's more than most people have! I think the reason a lot of AGI people think

Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace
On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Let's take a poll? I believe that a minimal AGI core, *sans* KB content, may be around 100K lines of code. What are other people's estimates? Sounds right to me. I'd put the framework (sans content) as roughly comparable to a web

Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace
On 3/29/07, Pei Wang [EMAIL PROTECTED] wrote: *. Though high-level self-modification will give the system more flexibility, it does not necessarily make the system more intelligent. Self-modifying at the meta-level is often dangerous, and it should be used only when the same effect cannot be

Re: [agi] small code small hardware

2007-03-29 Thread Russell Wallace
On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: From what you say, Python sounds like a pretty good *procedural* language -- would you say it's the easiest way to build an AGI prototype? Remember this is for the framework (rather than content) we're talking about, so a procedural

Re: [agi] AGI and Web 2.0

2007-03-29 Thread Russell Wallace
On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Obviously, commonsense knowledge (ie KB contents) can be acquired from the internet community. But what about the core? Can we build it using web-collaboration too? I think the framework at least initially needs to be written by a

Re: [agi] AGI and Web 2.0

2007-03-29 Thread Russell Wallace
On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Yes, I've heard the same thing, but I'm wondering if we can do better than that by going open sooner. You know, very often the biggest mistakes are made at the very beginning. If we can solicit the collective intelligence of a wider

Re: [agi] AGI and Web 2.0

2007-03-29 Thread Russell Wallace
On 3/29/07, Bob Mottram [EMAIL PROTECTED] wrote: I've lost count of the number of times I've scrapped and re-written some of my own projects, but by now I think I've made most of the mistakes it's possible to make, and as they say, when you have eliminated the impossible, whatever

Re: [agi] A Course on Foundations of Theoretical Psychology...

2007-04-13 Thread Russell Wallace
On 4/13/07, Richard Loosemore [EMAIL PROTECTED] wrote: How many people on this list would actually go to the trouble, if they could, of signing up for a truly comprehensive course in the foundations of AI/CogPsy/Neuroscience, which would give them a grounding in all of these fields and put them
