Re: [agi] confirmation paradox

2006-08-18 Thread YKY (Yan King Yin)
Phil wrote: YKY is advocating the post-modern viewpoint that knowledge is context-dependent, and true-false assignments and numeric value judgements are both extremely problematic. Pei is pointing out the commonsense, classicist position, and also the refutation of the post-modern tradition,

[agi] Context dependent words/concepts

2006-08-18 Thread YKY (Yan King Yin)
On 8/19/06, Ben Goertzel [EMAIL PROTECTED] wrote: The problem of context may be avoided by using an unambiguous language (for internal representation). Context-dependent words are a feature of natural language (NL) only. It arises when an NL word maps to multiple concepts in the knowledge

Re: [agi] Context dependent words/concepts

2006-08-18 Thread YKY (Yan King Yin)
On 8/19/06, Ben Goertzel [EMAIL PROTECTED] wrote: Well, but I can generate a hypothetical grounding for mushroom pie on the fly even though I haven't seen one ;-) And I can form concepts of mathematical structures that I have never experienced nor exemplified and may in fact be inconsistent

Re: [agi] Context dependent words/concepts

2006-08-19 Thread YKY (Yan King Yin)
On 8/19/06, Ben Goertzel [EMAIL PROTECTED] wrote: In blackboard the NL word maps to either a board that is black in color or a board for writing that is usually black/green/white. The KR of those concepts is unambiguous; it's just that there are 2 alternatives. This is very naive...a

Re: [agi] AGI open source license

2006-09-01 Thread YKY (Yan King Yin)
I support open-source AGI for the following reasons: 1. It would be nearly impossible to enforce the single-AGI scenario; I think the best strategy is to start a project and try our best in it. 2. One possibility is to make the AGI software commercial, but at a very low cost, and with differential

[agi] G0: new AGI architecture

2006-09-03 Thread YKY (Yan King Yin)
I have worked out a more detailed AGI architecture: http://www.geocities.com/genericai/GI-Architecture.htm But I'm still working on the webpages to explain the modules. It seems very suitable for the MAGIC message-passing model. I think it's the simplest architecture for general intelligence.

Re: [agi] G0: new AGI architecture

2006-09-05 Thread YKY (Yan King Yin)
On 9/5/06, M. Riad [EMAIL PROTECTED] wrote: Sorry to barge into the conversation in this way, but YKY mentioned something I needed clarification with. You said: With logic I can write down a rule for recognizing this pretty easily, mainly due to the use of symbolic variables. So you see the

Re: [agi] G0: new AGI architecture

2006-09-06 Thread YKY (Yan King Yin)
I forgot to add that unsupervised learning is also needed, and desirable, in the G0 architecture. How to conduct unsupervised learning under logic would be an interesting research topic. YKY To unsubscribe, change your address, or temporarily deactivate your subscription, please go to

Re: [agi] G0: new AGI architecture

2006-09-06 Thread YKY (Yan King Yin)
On 9/6/06, Fredrik Heintz [EMAIL PROTECTED] wrote: And inductive approaches have problems with overfitting and thereby a lack of generality. They can find a pattern that very closely matches your examples, but if you give it a radically new example it will utterly fail to generalize. Therefore the
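Fredrik's point about generalization failure can be illustrated with the extreme case of overfitting, a learner that simply memorizes its training examples. This toy sketch and its data are invented for illustration, not drawn from any system discussed in the thread.

```python
# Extreme overfitting: a lookup-table "learner" that fits the training
# set perfectly but has no ability to generalize to new inputs.
train = {(0, 0): "even", (2, 2): "even", (1, 1): "odd"}

def memorizer(x):
    """Exact lookup; anything unseen gets no answer at all."""
    return train.get(x, "unknown")

print(memorizer((2, 2)))  # seen in training: answered correctly
print(memorizer((4, 4)))  # radically new example: utter failure
```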

Re: [agi] G0: new AGI architecture

2006-09-06 Thread YKY (Yan King Yin)
On 9/6/06, M Riad [EMAIL PROTECTED] wrote: Interesting. ILP is new to me. I did some basic reading and it's really a different form of supervised learning. But I still don't see how this can help build general knowledge. Using your bottle example, let's assume your ILP system recognizes bottles

Re: [agi] G0: new AGI architecture

2006-09-06 Thread YKY (Yan King Yin)
On 9/7/06, Fredrik Heintz [EMAIL PROTECTED] wrote: I haven't studied G0 in detail, but one of our current research problems is the execution and monitoring of plans. We have one of the world's fastest and most expressive planners, TALplanner, which is a forward-chaining, domain-dependent planner

Re: [agi] Symbol grounding (was Failure scenarios)

2006-09-26 Thread YKY (Yan King Yin)
My guess at a good basis for KR is simply the cleanest, most powerful, and most general programming language I can come up with. That's because to learn new concepts and really understand them, the AI will have to do the equivalent of writing recognizers, simulators, experiment

Re: [agi] G0 theory completed

2006-10-07 Thread YKY (Yan King Yin)
David Clark wrote: I agree that an AGI fundamentally will be created by a combination of data (databases) and procedures (programs) but how large and by who the programs will be created has yet to be determined. Why do you assume that all AGI programs will be created by humans? Why couldn't an

Re: [agi] G0 theory completed

2006-10-09 Thread YKY (Yan King Yin)
Matt: (Sorry about the delay... I was busy advertising in other groups..) But now that you have completed your theory on how to build AGI, what do you do next?Which parts will you write yourself and which parts will you contract out? Ideally, any part that can be out-sourced should be

[agi] method for joining efforts

2006-10-15 Thread YKY (Yan King Yin)
Hi Ben and others The way I see it, we are close to building a complete AGI, but there are gaps to be filled in the details. In my opinion one thing that Ben can do better to become a leader in AGI R&D is to delegate tasks to other people/groups, ie adopt a division-of-labor strategy. I think the

Re: [agi] method for joining efforts

2006-10-15 Thread YKY (Yan King Yin)
On 10/15/06, Ben Goertzel [EMAIL PROTECTED] wrote: [...] The main problem is not the commercial one (that once you've finished your AGI, if it's privately held, you can more easily use it to make money). While I like money as much as the next guy, $$ is not the reason to make an AGI. There are

Re: A Mind Ontology Project? [Re: [agi] method for joining efforts]

2006-10-16 Thread YKY (Yan King Yin)
Re the Mind Ontology page: I have written a glossary of terms pertinent to our discussions, including Ben's suggestion of the terms: -- perception -- emergence -- symbol grounding -- logic and I also added many of the terms in my architecture (which is not meant to be final, only as a proposal

Re: [agi] SOTA

2006-10-19 Thread YKY (Yan King Yin)
Hi Peter, I think in all of the categories you listed, there should be a lot of progress, but they will hit a ceiling because of the lack of an AGI architecture. It is very clear that vision requires AGI to be complete. So does NLP. In vision, many objects require reasoning to recognize. NLP also

Re: [agi] method for joining efforts

2006-10-22 Thread YKY (Yan King Yin)
On 10/21/06, Philip Goetz [EMAIL PROTECTED] wrote: Commercially, I'm not sure if OS or CS is better. Remember Steve Jobs' Apple lost the PC market to IBM because IBM provided a more open architecture (in addition to the fact that IBM was more resourceful). We need to be careful not to lose

[agi] Term logic added to G0

2006-10-22 Thread YKY (Yan King Yin)
I have now added term logic enhancements to the G0 knowledge representation. The new NL module can process NL in a way that mimics how babies learn language: http://www.geocities.com/genericai/GI-NLP.htm so it may be a genuine solution to the NL problem. The new KR can also solve the doing

Re: [agi] Language modeling

2006-10-23 Thread YKY (Yan King Yin)
On 10/23/06, Matt Mahoney [EMAIL PROTECTED] wrote: [...] One aspect of NARS and many other structured or semi-structured knowledge representations that concerns me is the direct representation of concepts such as is-a, equivalence, logic (if-then, and, or, not), quantifiers (all, some), time

Re: [agi] The concept of a KBMS

2006-11-06 Thread YKY (Yan King Yin)
On 11/7/06, John Scanlon [EMAIL PROTECTED] wrote: James Ratcliff wrote: In some form or another we are going to HAVE to have a natural language interface, either a translation program that can convert our English to the machine-understandable form, or a simplified form of English that is

Re: [agi] The concept of a KBMS

2006-11-06 Thread YKY (Yan King Yin)
Hi John, This is the specification of my logic: http://www.geocities.com/genericai/GI-Geniform.htm I conjecture that NL sentences can be easily translated to/from this form. The definition of Jinnteera looks interesting, do you have a demo of how it works? =) I'm now working on a

Re: [agi] The concept of a KBMS

2006-11-08 Thread YKY (Yan King Yin)
On 11/7/06, James Ratcliff [EMAIL PROTECTED] wrote: Yan, Do you have a version of the book layout that is all on one page, or PDF or anything? I would like to print the whole thing off and look over it in more detail. Also lots of broken links, run a link checker, the GO link on the front

Re: [agi] The crux of the problem

2006-11-09 Thread YKY (Yan King Yin)
This is an interesting thread, I'll add some comments: 1. For KR purposes, I think first order predicate logic is a good choice. Geniform 2.0 can be expressed in FOL entirely. ANN is simply not in a state advanced enough to represent complex knowledge (eg things that are close to NL). I

Re: Re: [agi] The crux of the problem

2006-11-09 Thread YKY (Yan King Yin)
On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: 2. Ben raised the issue of learning. I think we should divide learning into 3 parts: (1) linguistic, eg grammar (2) semantic / concepts (3) generic / factual. This leaves out a lot, for instance procedure learning and metalearning... and also

Re: Re: Re: Re: [agi] The crux of the problem

2006-11-11 Thread YKY (Yan King Yin)
On 11/10/06, Ben Goertzel [EMAIL PROTECTED] wrote: The word agent is famously polysemous in computer science. In my prior post, I used it in the sense of software agent, not autonomous mental agent. These Novamente MindAgents are just software objects with certain functionalities, that get

Re: [agi] Design Complexity

2006-11-11 Thread YKY (Yan King Yin)
On 11/11/06, Michael Wilson [EMAIL PROTECTED] wrote: Ben Goertzel wrote: It is indeed complex, but I have not found anything simpler that I think will work... The key problem with complex designs can probably be summed up with 'anyone can design a system so complex that they cannot

Re: [agi] Design Complexity

2006-11-11 Thread YKY (Yan King Yin)
I don't think anyone here is not focusing on how to /succeed/ =) And I don't like trial and error either, I prefer to plan everything ahead instead of tinkering. But I'm questioning whether emergence is really needed for AGI. ^^^ I meant emergence at the neural / subsymbolic level. YKY This

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread YKY (Yan King Yin)
On 11/12/06, John Scanlon [EMAIL PROTECTED] wrote: I get the impression that a lot of people interested in AI still believe that the mental manipulation of symbols is equivalent to thought. As many other people understand now, symbol-manipulation is not thought. Instead, symbols can be

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread YKY (Yan King Yin)
On 11/12/06, John Scanlon [EMAIL PROTECTED] wrote: The major missing piece in the AI puzzle goes between the bottom level of automatic learning systems like neural nets, genetic algorithms, and the like, and top-level symbol manipulation. This middle layer is the biggest, most important piece,

Re: [agi] Design Complexity

2006-11-11 Thread YKY (Yan King Yin)
On 11/12/06, Michael Wilson [EMAIL PROTECTED] wrote: 'Understanding exactly how the system will succeed' is a lot harder than (a) it sounds and (b) merely convincing yourself (or even a few of your peers) that the system will succeed. Of course it's pretty much impossible to plan everything out

Re: [agi] One grammar parser URL

2006-11-16 Thread YKY (Yan King Yin)
On 11/16/06, James Ratcliff [EMAIL PROTECTED] wrote: Correct. Using inference only works in toy or small, well-understood domains, as inevitably when it goes 2+ steps away from direct knowledge it will be making large assumptions and be wrong. My thoughts have been on an AISim as well, but

Re: [agi] One grammar parser URL

2006-11-16 Thread YKY (Yan King Yin)
On 11/17/06, Matt Mahoney [EMAIL PROTECTED] wrote: Learning logic is similar to learning grammar. A statistical model can classify words into syntactic categories by context, e.g. "the X is" tells you that X is a noun, and that it can be used in novel contexts where other nouns have been
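The context-based classification Matt describes can be sketched very simply: collect, for each word, the set of (left, right) neighbor pairs it occurs with; words that share contexts fall into the same category. The tiny corpus below is invented purely for illustration.

```python
from collections import defaultdict

# Toy corpus; "." marks sentence boundaries.
corpus = "the cat is here . the dog is here . the cat runs . the dog runs".split()

# Map each word to the set of (previous word, next word) contexts it appears in.
contexts = defaultdict(set)
for i in range(1, len(corpus) - 1):
    contexts[corpus[i]].add((corpus[i - 1], corpus[i + 1]))

# "cat" and "dog" occur in identical contexts ("the _ is", "the _ runs"),
# so a statistical model would assign them to the same noun-like category.
print(contexts["cat"] == contexts["dog"])  # → True
```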

[agi] Wiki AGI

2006-11-18 Thread YKY (Yan King Yin)
James: you said you're building a wiki-style AGI, and I've heard another guy with a similar idea. I want to know if you intend it to have free contributions or is there a way to pay the contributors? It sounds like a good idea, I'm interested to help, but if it's free I'd have to worry about

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
I'm not saying that the n-space approach wouldn't work, but I have used that approach before and faced a problem. It was because of that problem that I switched to a logic-based approach. Maybe you can solve it. To illustrate it with an example, let's say the AGI can recognize apples, bananas,

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
On 11/28/06, Mike Dougherty [EMAIL PROTECTED] wrote: perhaps my view of a hypersurface is wrong, but wouldn't a subset of the dimensions associated with an object be the physical dimensions? (ok, virtual physical dimensions) Is On determined by a point of contact between two objects? (A is

[agi] Project proposal: MindPixel 2

2007-01-13 Thread YKY (Yan King Yin)
I'm considering this idea: build a repository of facts/rules in FOL (or Prolog) format, similar to Cyc's. For example water is wet, oil is slippery, etc. The repository is structureless, in the sense that it is just a collection of simple statements. It can serve as raw material for other

Re: [agi] Project proposal: MindPixel 2

2007-01-13 Thread YKY (Yan King Yin)
On 1/14/07, Pei Wang [EMAIL PROTECTED] wrote: How do you plan to represent water is wet? Pei Well, we need to agree on some conventions. A pretty standard way is: Is(water,wet). But the one I use in my system is: R(is, water, wet) where R is a generic predicate representing a
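The generic-predicate convention R(is, water, wet) described here can be sketched as facts stored as triples plus a trivial matcher. The fact list and the `query` helper are illustrative assumptions, not YKY's actual system.

```python
# Facts as R(relation, subject, object) triples, using a generic predicate R.
FACTS = [
    ("is", "water", "wet"),
    ("is", "oil", "slippery"),
]

def query(relation, subject):
    """Return every o such that R(relation, subject, o) is a known fact."""
    return [o for (r, s, o) in FACTS if r == relation and s == subject]

print(query("is", "water"))  # → ['wet']
```

The point of the generic R is that "is" becomes an ordinary term the system can reason about, rather than a fixed predicate name baked into the representation.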

Re: [agi] Project proposal: MindPixel 2

2007-01-13 Thread YKY (Yan King Yin)
On 1/14/07, Pei Wang [EMAIL PROTECTED] wrote: Well, we need to agree on some conventions. A pretty standard way is: Is(water,wet). In the standard way of knowledge representation, a constant is either a predicate name or an individual name. A mass noun like water is neither. There is no

Re: [agi] Project proposal: MindPixel 2

2007-01-13 Thread YKY (Yan King Yin)
On 1/14/07, Bob Mottram [EMAIL PROTECTED] wrote: If Mindpixel does get revived I think it should be an open source project, with the results available to everyone. The idea of doing this on a commercial basis with the issuing of shares turned out not to be viable. This kind of effort is a

Re: [agi] Project proposal: MindPixel 2

2007-01-14 Thread YKY (Yan King Yin)
On 1/14/07, Chuck Esterbrook [EMAIL PROTECTED] wrote: * Would it support separate domains/modules? I didn't realize the importance of this point at first. Indeed, what we regard as common sense may be highly subjective as it involves matters such as human values, ideology or religion. So the

Re: [agi] Project proposal: MindPixel 2

2007-01-17 Thread YKY (Yan King Yin)
On 1/14/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: The choice of knowledge representation language makes a huge difference. IMO, Cyc committed themselves to an overcomplicated representation language that has rendered their DB far less useful than it would be otherwise. If you want to

Re: [agi] Project proposal: MindPixel 2

2007-01-17 Thread YKY (Yan King Yin)
On 1/18/07, Charles D Hixson [EMAIL PROTECTED] wrote: Joel Pitt wrote: * I think such a project should make the data public domain. Ignore silly ideas like giving be shares in the knowledge or whatever. It just complicates things. If the project is really strapped for cash later, then either

Re: [agi] Project proposal: MindPixel 2

2007-01-18 Thread YKY (Yan King Yin)
On 1/19/07, Matt Mahoney [EMAIL PROTECTED] wrote: I think if you want to make a business out of AI, you are in for a lot of work. First you need something that is truly innovative, that does something that nobody else can do. What will that be? A search engine better than Google? A new

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Bob Mottram [EMAIL PROTECTED] wrote: My feeling is that this probably isn't a great business idea. I think collecting common sense data and building that into a general reasoner should really be thought of as a long term effort, which is unlikely to appeal to business investors

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Pei Wang [EMAIL PROTECTED] wrote: For example, what you called rule in your postings have two different meanings: (1) A declarative implication statement, X == Y; (2) A procedure that produces conclusions from premises, {X} |- Y. These two are related, but not the same thing. Both
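Pei's distinction between a rule as a declarative implication (X == Y) and a rule as an inference procedure ({X} |- Y) can be made concrete: the implication is a data item in the knowledge base, while the procedure (here, repeated modus ponens) is code that consumes it. The example facts are invented for illustration.

```python
# (1) Declarative implications X ==> Y, stored as plain data.
implications = [("raining", "wet_ground"), ("wet_ground", "slippery_road")]

# (2) The inference procedure: code that derives Y whenever X is known.
def modus_ponens(known, implications):
    """Close the known set under the stored implications."""
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for x, y in implications:
            if x in derived and y not in derived:
                derived.add(y)
                changed = True
    return derived

print(modus_ponens({"raining"}, implications))
# derives both 'wet_ground' and 'slippery_road'
```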

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: You have not explained how you will overcome the issues that plagued GOFAI, such as -- the need for massive amounts of highly uncertain background knowledge to make real-world commonsense inferences Precisely, we need to amass millions of

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, David Clark [EMAIL PROTECTED] wrote: ... Do we divine the rules/laws/algorithms from a mass of data or do we generate the appropriate conclusions when we need them because we understand how it actually works? Just as chemistry is reducible to physics, in theory, while in reality

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Backward chaining is just as susceptible to combinatorial explosions as forward chaining... And, importance levels need to be context-dependent, so that assigning them requires sophisticated inference in itself... The problem may not be

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, Stephen Reed [EMAIL PROTECTED] wrote: I've been using OpenCyc as the standard ontology for my texai project. OpenCyc contains only the very few rules needed to enable the OpenCyc deductive inference engine to operate on its OpenCyc content. On the other hand ResearchCyc, whose

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: A) This is just not true, many commonsense inferences require significantly more than 5 applications of rules OK, I concur. Long inference chains are built upon short inference steps. We need a mechanism to recognize the interestingness

Re: [agi] Project proposal: MindPixel 2

2007-01-24 Thread YKY (Yan King Yin)
On 1/20/07, Pei Wang [EMAIL PROTECTED] wrote: The bottomline is that the knowledge acquisition project is *separable* from specific inference methods. What is your argument supporting this strong claim? I guess every book on knowledge representation includes a statement saying that whether

Re: [agi] Project proposal: MindPixel 2

2007-01-24 Thread YKY (Yan King Yin)
On 1/24/07, Bob Mottram [EMAIL PROTECTED] wrote: I think it would be better to design a system with probabilistic reasoning as a fundamental component from the outset, rather than trying to bolt this on as an afterthought. I know from doing a lot of stuff with machine vision that modelling

Re: [agi] Project proposal: MindPixel 2

2007-01-25 Thread YKY (Yan King Yin)
On 1/25/07, Bob Mottram [EMAIL PROTECTED] wrote: The trouble is that you can only really decide whether a statement is non-probabilistic if enough people have voted unanimously yes or no. Even then you can't be sure that the next person to vote won't go the opposite way. At the initial stage

Re: [agi] Project proposal: MindPixel 2

2007-01-25 Thread YKY (Yan King Yin)
On 1/25/07, Ben Goertzel [EMAIL PROTECTED] wrote: If there is a major problem with Cyc, it is not the choice of basic KR language. Predicate logic is precise and relatively simple. I agree mostly, though I think even Cyc's simple predicate logic language can be made even simpler and better.

Re: [agi] Project proposal: MindPixel 2

2007-01-27 Thread YKY (Yan King Yin)
for commonsense reasoning -- if you closely examine some of your own thoughts you'd see. On 1/19/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: For the type of common sense reasoner I described, we need a *massive* number of rules. You can either acquire these rule via machine learning or direct

Re: [agi] Project proposal: MindPixel 2

2007-01-27 Thread YKY (Yan King Yin)
On 1/25/07, Pei Wang [EMAIL PROTECTED] wrote: Suppose I have a set of *deductive* facts/rules in FOPL. You can actually use this data in your AGI to support other forms of inference such as induction and abduction. In this sense the facts/rules collection does not dictate the form of

Re: [agi] Project proposal: MindPixel 2

2007-01-27 Thread YKY (Yan King Yin)
On 1/27/07, Ben Goertzel [EMAIL PROTECTED] wrote: Yes, you can reduce nearly all commonsense inference to a few rules, but only if your rules and your knowledge base are not fully formalized... As I envision it, we would have a large number of rules. Some rules are very abstract (eg rules

Re: [agi] Project proposal: MindPixel 2

2007-01-27 Thread YKY (Yan King Yin)
On 1/27/07, David Hart [EMAIL PROTECTED] wrote: This license chooser may help: http://creativecommons.org/license/ Perhaps MindPixel2 discussion deserves its own list at this stage? Listbox, Google and many others offer list services (Google Code also offers a wiki, source version management,

Re: [agi] Project proposal: MindPixel 2

2007-01-28 Thread YKY (Yan King Yin)
On 1/29/07, Eric Baum [EMAIL PROTECTED] wrote: I haven't played 20 questions recently, but in response to your comment I just went to www.20q.net and played thinking of Alice in Wonderland, the book. The neural net guessed "is it a novel" on question 22, and then decided it had gone far enough and

Re: [agi] foundations of probability theory

2007-01-28 Thread YKY (Yan King Yin)
Ben, Is the probabilistic logic you use in Novamente the same as Pei Wang's version? If not, why do you use your version? YKY - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=303

Re: [agi] foundations of probability theory

2007-01-29 Thread YKY (Yan King Yin)
On 1/29/07, Ben Goertzel [EMAIL PROTECTED] wrote: Pei Wang's uncertain logic is **not** probabilistic, though it uses frequency calculations IMO Pei's logic has some strong points, especially that it unifies fuzzy and probabilistic truth values into one pair of values. I think in Pei's logic
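The "one pair of values" mentioned here can be sketched as a (frequency, confidence) truth value derived from evidence counts, roughly following Pei Wang's NARS definitions (frequency = positive evidence / total evidence, confidence = total / (total + k)). The constant k and the example numbers are illustrative; treat this as a simplified sketch, not NARS itself.

```python
K = 1.0  # evidential horizon constant (k = 1 is the common default in NARS)

def truth_value(positive, total):
    """Return (frequency, confidence) from evidence counts."""
    frequency = positive / total          # how often the statement held
    confidence = total / (total + K)      # how much evidence backs it
    return frequency, confidence

# 4 positive observations out of 5:
f, c = truth_value(4, 5)
print(round(f, 2), round(c, 2))  # → 0.8 0.83
```

The confidence component is what lets such a logic distinguish "true 80% of the time over much evidence" from "true 80% of the time over almost none", a distinction a single probability cannot express.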

Re: [agi] The Missing Piece

2007-03-07 Thread YKY (Yan King Yin)
On 2/19/07, John Scanlon [EMAIL PROTECTED] wrote: [...] Logical deduction or inference is not thought. It is mechanical symbol manipulation that can be programmed into any scientific pocket calculator. [...] Hi John, I admire your attitude for attacking the core AI issues =) One is

Re: [agi] The Missing Piece

2007-03-07 Thread YKY (Yan King Yin)
On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote: What about English? Irregular grammar is only a tiny part of the language modeling problem. Using an artificial language with a regular grammar to simplify the problem is a false path. If people actually used Lojban then it would be used in

Re: [agi] general weak ai

2007-03-07 Thread YKY (Yan King Yin)
I agree with Ben and Pei etc on this issue. Narrow AI is VERY different from general AI. It is not at all easy to integrate several narrow AI applications to a single, functioning system. I have never heard of something like this being done, even for two computer vision programs. IMO what we

Re: [agi] The Missing Piece

2007-03-11 Thread YKY (Yan King Yin)
Hi John, Re your idea that there should be an intermediate-level representation: 1. Obviously, we do not currently know how the brain stores that representation. Things get insanely complex as neuroscientists go higher up the visual pathways from the primary visual cortex. 2. I advocate

Re: [agi] The Missing Piece

2007-03-11 Thread YKY (Yan King Yin)
On 3/8/07, Matt Mahoney [EMAIL PROTECTED] wrote: [re: logical abduction for interpretation of natural language] One disadvantage of this approach is that you have to hand code lots of language knowledge. They don't seem to have solved the problem of acquiring such knowledge from training

[agi] My proposal for an AGI agenda

2007-03-11 Thread YKY (Yan King Yin)
This is the agenda for an *integrated* AGI system (vs Minsky's distributed one): 1. Fix a knowledge representation scheme (eg CycL, or Novamentese? etc) 2. Work out an uncertain logic (ie some form of logic + probability / fuzziness). 3. Develop an *efficient* deductive algorithm for said
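Steps 1-3 of the agenda (a fixed KR, an uncertain logic, a deductive algorithm) can be sketched together in miniature: facts and rules each carry a certainty in [0,1], and a forward-chaining loop propagates certainty by multiplication. Every name, number, and the choice of product as the combination rule are illustrative assumptions, not a worked-out uncertain logic.

```python
# Knowledge base: ground atoms with certainties, and (premise, conclusion,
# rule-certainty) implications.
facts = {"bird(tweety)": 1.0}
rules = [
    ("bird(tweety)", "flies(tweety)", 0.9),
    ("flies(tweety)", "has_wings(tweety)", 0.8),
]

def forward_chain(facts, rules):
    """Derive new facts until fixpoint, multiplying certainties along chains."""
    changed = True
    while changed:
        changed = False
        for premise, conclusion, cert in rules:
            if premise in facts:
                derived = facts[premise] * cert
                if derived > facts.get(conclusion, 0.0):
                    facts[conclusion] = derived
                    changed = True
    return facts

print(forward_chain(dict(facts), rules))
# flies(tweety) ends up with certainty 0.9, has_wings(tweety) with 0.72
```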

Re: [agi] My proposal for an AGI agenda

2007-03-11 Thread YKY (Yan King Yin)
On 3/11/07, Jey Kottalam [EMAIL PROTECTED] wrote: I'm stuck at steps 1 and 2. Even if we assume that the remaining steps can be implemented once a knowledge representation is chosen, how do I evaluate and judge a knowledge representation scheme's appropriateness for AGI? All the steps are

Re: [agi] My proposal for an AGI agenda

2007-03-11 Thread YKY (Yan King Yin)
On 3/11/07, Ben Goertzel [EMAIL PROTECTED] wrote: All this is perfectly useful stuff, but IMO is not in itself sufficient for an AGI design. The basic problem is that there are many tasks important for intelligence, for which it is apparently not possible to create adequately efficient

Re: [agi] My proposal for an AGI agenda

2007-03-11 Thread YKY (Yan King Yin)
On 3/12/07, Ben Goertzel [EMAIL PROTECTED] wrote: Natural concepts in the mind are ones for which inductively learned feature-combination-based classifiers and logical classifiers give roughly the same answers... 1. The feature-combination-based classifiers CAN be encoded in the probabilistic

Re: [agi] My proposal for an AGI agenda

2007-03-11 Thread YKY (Yan King Yin)
Hi Josh, You touched on a lot of deep issues; I'll give a *tentative* answer here. Let's see what happens... My main point is: a unified KR allows people to *work together*. On 3/12/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: 1. Fix a knowledge representation scheme (eg CycL, or

Re: [agi] My proposal for an AGI agenda

2007-03-11 Thread YKY (Yan King Yin)
On 3/12/07, Ben Goertzel [EMAIL PROTECTED] wrote: Yeah, and what you will find is that these more efficient algorithms are more efficient only if you let them work with non-logical knowledge representations ;-p In NM, we do in fact have a unified logical representation -- but we also have a

Re: [agi] Logical representation

2007-03-12 Thread YKY (Yan King Yin)
On 3/12/07, Russell Wallace [EMAIL PROTECTED] wrote: Represented in logic can mean a number of different things, just checking to see if everyone's talking about the same thing. Consider, say, a 5 megapixel image. A common programming language representation would be something like: struct
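One reading of "represented in logic" in Russell's image example is that every pixel becomes a ground assertion pixel(x, y, value); a 5-megapixel image would then yield ~5 million atoms, which is why raw sensor data is usually kept out of the logical KB. The 2x2 image below is an invented illustration.

```python
# A tiny 2x2 grayscale "image" as a programming-language struct/array...
image = [[0, 255],
         [128, 64]]

# ...flattened into ground logical assertions pixel(x, y, value).
assertions = [
    ("pixel", x, y, image[y][x])
    for y in range(len(image))
    for x in range(len(image[0]))
]

print(len(assertions))                      # → 4
print(("pixel", 1, 0, 255) in assertions)   # → True
```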

[agi] The Reading Helvetica Problem

2007-03-12 Thread YKY (Yan King Yin)
The problem: 1. To be able to read many fonts 2. even totally new and strange-looking ones 3. even for the FIRST time one encounters a new, strange font; and 4. To be able to improve proficiency for a familiar font. The NeoLego or modular approach is very vague and computationally

[agi] Re: The Reading Helvetica Problem

2007-03-12 Thread YKY (Yan King Yin)
So why do we need superfluous and bloated things like NeoLego or Helvetica modules?? ...or GA-based algorithms, for that matter! YKY

Re: [agi] My proposal for an AGI agenda

2007-03-12 Thread YKY (Yan King Yin)
On 3/12/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: Certainly. Yet I am convinced that that's how it actually works. Someone who came from a theoretical pure communist economy where there was only one organization that everybody worked for, would be aghast at the random madhouse of a

Re: [agi] The Reading Helvetica Problem

2007-03-13 Thread YKY (Yan King Yin)
On 3/13/07, Chuck Esterbrook [EMAIL PROTECTED] wrote: When your AGI sees A for the first time(s) in Helvetica and learns rules to recognize Helvetica A, then it only has rules for Helvetica A. As opposed to having rules for A in general and rules for A in Helvetica. When Times Roman Italic A

Re: [agi] The Reading Helvetica Problem

2007-03-13 Thread YKY (Yan King Yin)
On 3/13/07, Bob Mottram [EMAIL PROTECTED] wrote: Early stage vision involves the detection of primitive types of geometry - edges, lines of different orientation, blobs, corners, colours and motion in different directions. These seem to arise from simple self-organisation due to the physical

Re: [agi] Logical representation

2007-03-15 Thread YKY (Yan King Yin)
3 issues have been raised in this thread, by different people: 1. Richard Loosemore: Symbol names -- should they be system-generated or human-entered? This is a good question. In Cyc there are so-called pretty names (English terms that describe Cyc concepts) but they are not sophisticated

Re: [agi] Logical representation

2007-03-16 Thread YKY (Yan King Yin)
On 3/16/07, David Clark [EMAIL PROTECTED] wrote: Is very complicated a good reason to have 1 cognitive engine? Why not have many and even use many on the same problem and then accept the best answer? Best answer might change for a single problem depending on other issues outside the actual

Re: [agi] Competing AGI approaches

2007-03-17 Thread YKY (Yan King Yin)
On 3/17/07, Ben Goertzel [EMAIL PROTECTED] wrote: 4) So, the question is not whether DARPA, M$ or Google will enter the AI race -- they are there. The question is whether they will adopt a workable approach and put money behind it. History shows that large organizations often fail to do so,

Re: [agi] Competing AGI approaches

2007-03-17 Thread YKY (Yan King Yin)
On 3/18/07, Ben Goertzel [EMAIL PROTECTED] wrote: If we succeed at creating the first AGI, it will not be because anything fell into our hands. It will be because we a) put in the many years of hard thinking to create a working AGI design b) put in the many years of hard, often tedious, work

Re: [agi] structure of the mind

2007-03-20 Thread YKY (Yan King Yin)
On 3/20/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: There is one way you can form a coherent, working system from a congeries of random agents: put them in a marketplace. This has a fairly rigorous discipline of its own and most of them will not survive... and of course the system has

Re: [agi] My proposal for an AGI agenda

2007-03-24 Thread YKY (Yan King Yin)
On 3/23/07, rooftop8000 [EMAIL PROTECTED] wrote: Suppose there was an AGI framework that everyone could add their ideas to.. What properties should it have? I listed some points below. What would it take for you to use the framework? You can add points if you like. On 3/24/07, Jey

Re: [agi] My proposal for an AGI agenda

2007-03-24 Thread YKY (Yan King Yin)
On 3/25/07, rooftop8000 [EMAIL PROTECTED] wrote: The richer your set of algorithms and representations, the more likely the correct ones will emerge/pop out as you put it. I don't really like the idea of hoping for extra functionality to emerge. This particular version of emergence does not

Re: [agi] Glocal knowledge representation?

2007-03-26 Thread YKY (Yan King Yin)
I've never heard of it used for knowledge representation. Can you explain what's the deal? IMO we should first delineate the AGI problem in a conventional framework and then try to find out where is the computational bottleneck. And then focus our innovation on that particular area, rather

Re: [agi] AGI interests

2007-03-27 Thread YKY (Yan King Yin)
I have been working on AGI seriously since 2004. I also believe that the core algorithms needed for AGI could be very compact (though I won't make an estimation of its size), with the rest of the information encoded declaratively in the knowledgebase. In a nutshell my current approach is

Re: [agi] small code small hardware

2007-03-29 Thread YKY (Yan King Yin)
Let's take a poll? I believe that a minimal AGI core, *sans* KB content, may be around 100K lines of code. What are other people's estimates? YKY - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to:

Re: [agi] AGI interests

2007-03-29 Thread YKY (Yan King Yin)
On 3/28/07, Russell Wallace [EMAIL PROTECTED] wrote: Do you have a source of finance? This is not a rhetorical question; if you have, I'd be very interested in working for money. Yes, I think I have seed capital, that is enough to get a conventional startup started. Also I believe getting

Re: [agi] small code small hardware

2007-03-29 Thread YKY (Yan King Yin)
On 3/29/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote: I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x language factor with fudge factor = 2 to 4 and language factor = 1 for eg Python; 5 for eg C++ 50-100 modules? Sounds like you have a very unconventional architecture.
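Jean-Paul's back-of-envelope formula can be worked through directly. The ranges below are the ones he states (50-100 modules, 500-2500 LOC each, fudge factor 2-4, language factor 1 for Python):

```python
# Sketch of Jean-Paul's LOC estimate:
# modules x (LOC per module) x fudge factor x language factor
def loc_estimate(modules, locs_per_module, fudge, lang_factor):
    return modules * locs_per_module * fudge * lang_factor

low  = loc_estimate(50, 500, 2, 1)     # optimistic, Python
high = loc_estimate(100, 2500, 4, 1)   # pessimistic, Python
print(low, high)  # 50000 1000000
```

So the formula spans 50K to 1M lines for a Python implementation, which brackets YKY's 100K-line guess from the earlier message.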

[agi] AGI and Web 2.0

2007-03-29 Thread YKY (Yan King Yin)
How does the new phenomenon of web-based collaboration change the way we build an AGI? I feel that something is amiss in a business model if we don't make use of some form of Web 2.0. I think rooftop8000 is on the right track by thinking this way, but he may not have it figured out yet.

Re: [agi] AGI and Web 2.0

2007-03-29 Thread YKY (Yan King Yin)
On 3/30/07, Russell Wallace [EMAIL PROTECTED] wrote: I think there's at least one good practical reason to avoid doing that, or at least to do it at arm's length in a "potential users discussing potential features" mailing list rather than "here's our code as we write it". In the early stages of

[agi] knowledge representation, Cyc

2007-03-29 Thread YKY (Yan King Yin)
I just talked to some Cyc folks, and they assured me that CycL is adequate to represent entire stories like Little Red Riding Hood. The AGI framework has to operate on a knowledge representation language, and building that language is not a programming task, but rather an ontology engineering task,

Re: [agi] knowledge representation, Cyc

2007-03-29 Thread YKY (Yan King Yin)
On 3/30/07, Matt Mahoney [EMAIL PROTECTED] wrote: Wouldn't it save time in the long run to build a system that could translate English into your KR? Yes, that's the goal. I'm just doing a human translation of the first paragraph or so, to get the feel of CycL. It can also be compared with

Re: [agi] Little Red Ridinghood

2007-04-01 Thread YKY (Yan King Yin)
On 3/31/07, rooftop8000 [EMAIL PROTECTED] wrote: How do you write and verify in cycl? Download OpenCyc. Install. Open the Cyc browser. Read online tutorials. =) The IRC chatroom #OpenCyc on freenode.net may be helpful. It may take some time to learn Cyc. I'm not good at it either, but I

[agi] OWL/Semantic Web as knowledge representation

2007-04-13 Thread YKY (Yan King Yin)
Hi all, I'm not very familiar with OWL and the Semantic Web... I'm wondering if OWL has the potential to become a knowledge representation for AGI? In principle, it seems that OWL is expressive enough -- OWL Full is more expressive than DL (description logic) but I'm not sure how it compares
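The kind of expressiveness at issue can be illustrated with a toy, stdlib-only sketch (not a real OWL reasoner): OWL builds on RDF-style subject-predicate-object triples, and even the RDFS fragment below OWL DL already supports inferences like propagating `type` through `subClassOf`. The triples and names here are invented for illustration.

```python
# Toy sketch: RDF-style triples plus one RDFS-flavored inference rule
# (type propagation through subClassOf), run to fixpoint.
triples = {
    ("Wolf", "subClassOf", "Animal"),
    ("BigBadWolf", "type", "Wolf"),
}

def infer_types(triples):
    """If (x type C) and (C subClassOf D), add (x type D); repeat until stable."""
    facts = set(triples)
    while True:
        new = {(s, "type", sup)
               for (s, p, c) in facts if p == "type"
               for (c2, p2, sup) in facts
               if p2 == "subClassOf" and c2 == c}
        if new <= facts:
            return facts
        facts |= new

facts = infer_types(triples)
print(("BigBadWolf", "type", "Animal") in facts)  # True
```

OWL Full adds much more than this (property restrictions, cardinality, class expressions), which is exactly why its comparison with DL and with CycL's higher-order constructs matters for the question in the message.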

Re: [agi] AGI interests

2007-04-17 Thread YKY (Yan King Yin)
On 4/18/07, James Ratcliff [EMAIL PROTECTED] wrote: Mark, This is the closest I've seen so far to my work and what I believe in. Have you got some more specific information / code / algorithm / papers on gathering and processing world information and discovery of? I have been working with

Re: [agi] AGI interests

2007-04-17 Thread YKY (Yan King Yin)
On 4/18/07, James Ratcliff [EMAIL PROTECTED] wrote: It should be a combination of the two, even as Cyc is finding out now with their use of Google to search out new terms and facts. A really simple example of that is related objects... a book scraping can generate a list of objects it
