Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
On 5/7/08, Matt Mahoney [EMAIL PROTECTED] wrote: No. But it hasn't stopped people from trying. The meaning of sentences and even paragraphs depends on context that is not captured in logic. Consider the following examples, where a different word is emphasized in each case: - I didn't

Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
On 5/7/08, Stephen Reed [EMAIL PROTECTED] wrote: To my knowledge there is a standard style but there is of course no standard ontology. Roughly the standard style is First Order Predicate Calculus (FOPC) and within the linguistics community this is called logical form. For reference see
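
As a concrete illustration of what logical form looks like (my own example, not taken from any particular paper), the sentence "John gave Mary a book" might be rendered in event-style FOPC roughly as:

    exists e, x:  give(e) & agent(e, John) & recipient(e, Mary)
                  & book(x) & theme(e, x) & past(e)

Different systems vary in the predicate inventory and in how tense, quantifiers and discourse are handled, but the general shape is the same.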

Re: [agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
On 5/7/08, Stephen Reed [EMAIL PROTECTED] wrote: I have not heard about Rus form. Could you provide a link or reference? This is one of the papers: http://citeseer.ist.psu.edu/cache/papers/cs/22812/http:zSzzSzwww.seas.smu.eduzSz~vasilezSzictai2001.pdf/rus01high.pdf you can find some examples

Re: [agi] organising parallel processes

2008-05-06 Thread YKY (Yan King Yin)
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote: As perhaps you know, I want to organize Texai as a vast multitude of agents situated in a hierarchical control system, grouped as possibly redundant, load-sharing, agents within an agency sharing a specific mission. I have given some thought to

[agi] jamming with OpenCog / Novamente

2008-05-06 Thread YKY (Yan King Yin)
I'm wondering if it's possible to plug in my learning algorithm to OpenCog / Novamente? The main incompatibilities stem from: 1. predicate logic vs term logic 2. graphical KB vs sentential KB If there is a way to somehow bridge these gaps, it may be possible YKY

Re: [agi] organising parallel processes

2008-05-06 Thread YKY (Yan King Yin)
On 5/6/08, Stephen Reed [EMAIL PROTECTED] wrote: I believe the opposite of what you say. I hope that my following explanation will help converge our thinking. Let me first emphasize that I plan a vast multitude of specialized agencies, in which each agency has a particular

[agi] about Texai

2008-05-04 Thread YKY (Yan King Yin)
@Stephen Reed and others: I'm writing a prototype of my AGI in Lisp, with special emphasis on the inductive learning algorithm. I'm looking for collaborators. It seems that Texai is the closest to my AGI theory, so it would be easier for us to jam. I wonder if Texai has already developed

Re: [agi] about Texai

2008-05-04 Thread YKY (Yan King Yin)
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote: Interesting that you should ask about Texai and reasoning / learning algorithms. As you know, my initial approach to learning is learning by being taught. Therefore I do not have much yet to offer with regard to machine learning, learning

Re: [agi] Other AGI-like communities

2008-04-26 Thread YKY (Yan King Yin)
(I'm kind of busy with personal matters... so will be brief) I want to know where we can have an AGI project that allows collaboration and is also commercial. I think many of the other AI communities are strongly academic. This list is slightly different in that respect. YKY

Re: **SPAM** Re: [agi] Re: Language learning

2008-04-24 Thread YKY (Yan King Yin)
On Thu, Apr 24, 2008 at 6:22 AM, Mark Waser [EMAIL PROTECTED] wrote: I think a person thinks in his/her first language, and when talking in a second language there is some extra processing going on (though it may not be exactly a translation process), which slow things down, giving the

Re: [agi] Re: Language learning

2008-04-23 Thread YKY (Yan King Yin)
There is no doubt that learning new languages at an older age is much more difficult than at a younger age. I wonder if there are some hard computational constraints that we must observe in order for the learning algorithm to be tractable. Perhaps sensory / linguistic learning should be most intense

Re: [agi] Re: Language learning

2008-04-23 Thread YKY (Yan King Yin)
On Thu, Apr 24, 2008 at 2:20 AM, J. Andrew Rogers [EMAIL PROTECTED] wrote: On Apr 22, 2008, at 11:55 PM, YKY (Yan King Yin) wrote: There is no doubt that learning new languages at an older age is much more difficult than at a younger age. I seem to recall that recent research does not support

Re: [agi] database access fast enough?

2008-04-18 Thread YKY (Yan King Yin)
On 4/18/08, J. Andrew Rogers [EMAIL PROTECTED] wrote: On Apr 17, 2008, at 3:32 PM, YKY (Yan King Yin) wrote: Disk access rate is ~10 times faster than ethernet access rate. IMO, if RAM is not enough the next thing to turn to should be the hard disk. Eh? Ethernet latency is sub-millisecond

Re: [agi] database access fast enough?

2008-04-18 Thread YKY (Yan King Yin)
On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote: Um. Neither side is arguing that the whole KB fit into RAM. I'm arguing that the necessary *core* for intelligence plus enough cached chunks (as you phrase it) to support the current thought processes WILL fit into RAM. It's obviously ludicrous

Re: [agi] database access fast enough?

2008-04-18 Thread YKY (Yan King Yin)
On 4/18/08, Matt Mahoney [EMAIL PROTECTED] wrote: What is your estimate of the quantity of all the world's knowledge? (Or the amount needed to achieve AGI or some specific goal?) Matt, The world's knowledge is irrelevant to the goal of AGI. What we need is to build a commonsense AGI and then

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Richard Loosemore [EMAIL PROTECTED] wrote: PREMISES: (1) AGI is one of the most complicated problems in the history of science, and therefore requires substantial funding for it to happen. Potentially, though, massively distributed, collaborative open-source

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Ben Goertzel [EMAIL PROTECTED] wrote: Though it is unlikely to do so, because collaborative open-source projects are best suited to situations in which the fundamental ideas behind the design have been solved. I believe I've solved the fundamental issues behind the

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: we lack such a consensus. So the theorists are not working together. Let me correct that: theorists do not need to work together; theories can be applied anywhere. It's the *designers* who are not working together. YKY

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Pei Wang [EMAIL PROTECTED] wrote: Not all theoretical problems can or need to be solved by practical testing. Also, in this field, no infrastructure is really theoretically neutral --- OpenCog is clearly not suitable to test all kinds of AGI theories, though I like the project, and

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread YKY (Yan King Yin)
On 4/19/08, Ben Goertzel [EMAIL PROTECTED] wrote: I don't claim that the Novamente/OpenCog design is the **only** way ... but I do note that the different parts are carefully designed to interoperate together in subtle ways, so replacing any one component w/ some standard system won't work.

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
On 4/17/08, J. Andrew Rogers [EMAIL PROTECTED] wrote: No, you are not correct about this. All good database engines use a combination of clever adaptive cache replacement algorithms (read: keeps stuff you are most likely to access next in RAM) and cost-based optimization (read: optimizes

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
To use an example, if a lot of people search for Harry Potter, then a conventional database system would make future retrieval of the Harry Potter node faster. But the requirement of the inference system is such that, if Harry Potter is fetched, then we would want *other* things that are
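
A rough sketch of the access pattern I have in mind, as a toy LRU cache that prefetches a node's semantic neighbours whenever the node itself is fetched (illustration only, with made-up names, nothing to do with any real DBMS):

    from collections import OrderedDict

    class GraphCache:
        # Toy LRU cache: fetching a node also pulls its KB neighbours into RAM.
        def __init__(self, store, capacity=10000):
            self.store = store          # node_id -> (node_data, neighbour_ids)
            self.capacity = capacity
            self.cache = OrderedDict()

        def get(self, node_id):
            if node_id in self.cache:
                self.cache.move_to_end(node_id)      # recently used
                return self.cache[node_id]
            node, neighbours = self.store[node_id]   # one slow "disk" access
            self._put(node_id, node)
            for nb in neighbours:                    # prefetch related nodes
                if nb not in self.cache:
                    self._put(nb, self.store[nb][0])
            return node

        def _put(self, node_id, node):
            self.cache[node_id] = node
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)       # evict least recently used

The point is that the locality here is semantic (neighbours in the KB graph), not the access-frequency locality that a generic database cache exploits.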

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
On 4/17/08, Mark Waser [EMAIL PROTECTED] wrote: You *REALLY* need to get up to speed on current database systems before you make more ignorant statements. First off, *most* databases RARELY go to the disk for reads. Memory is cheap and the vast majority of complex databases are generally

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
Hi Stephen, Thanks for sharing this! VERY few people have experience with this stuff... On 4/17/08, Stephen Reed [EMAIL PROTECTED] wrote: 4. I began writing my own storage engine, for a fast, space-efficient, partitioned and sharded knowledge base, soon realizing that this was far too big

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
On 4/17/08, J. Andrew Rogers [EMAIL PROTECTED] wrote: Again, most good database engines can do this, as it is a standard access pattern for databases, and most databases can solve this problem multiple ways. As an example, clustering and index-organization features in databases address your

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote: Yes. RAM is *HUGE*. Intelligence is *NOT*. Really? I will believe that if I see more evidence... right now I'm skeptical. Also, I'm designing a learning algorithm that stores *hypotheses* in the KB along with accepted rules. This will

Re: [agi] database access fast enough?

2008-04-17 Thread YKY (Yan King Yin)
On 4/18/08, Stephen Reed [EMAIL PROTECTED] wrote: I agree with your side of the debate about whole KB not fitting into RAM. As a solution, I propose to partition the whole KB into the tiniest possible cached chunks, suitable for a single agent running on a host computer with RAM resources

[agi] database access fast enough?

2008-04-16 Thread YKY (Yan King Yin)
For those using database systems for AGI, I'm wondering if the data retrieval rate would be a problem. Typically we need to retrieve many nodes from the DB to do inference. The nodes may be scattered around the DB. So it may require *many* disk accesses. My impression is that most DBMS are
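
A back-of-envelope with typical 2008-era hardware figures (rough numbers of my own, for illustration only):

    random disk seek:  ~8 ms    ->  ~125 scattered node reads per second
    RAM access:        ~100 ns  ->  millions of node reads per second

    inference step touching 300 scattered nodes:
        from disk:  300 x 8 ms  =  ~2.4 s
        from RAM:   well under a millisecond

So unless the DBMS keeps the working set cached in memory, scattered reads dominate everything else.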

Re: [agi] would anyone want to use a commonsense KB?

2008-03-05 Thread YKY (Yan King Yin)
On 3/5/08, david cash [EMAIL PROTECTED] wrote: In my opinion, instead of having to cherry-pick desirable and undesirable traits in an unconscious AGI entity, that we, of course, wish to have consciousness and cognitive abilities like reasoning, deductive and inductive logic comprehension skills,

Re: [agi] would anyone want to use a commonsense KB?

2008-03-05 Thread YKY (Yan King Yin)
On 3/4/08, Mark Waser [EMAIL PROTECTED] wrote: But the question is whether the internal knowledge representation of the AGI needs to allow ambiguities, or should we use an ambiguity-free representation. It seems that the latter choice is better. An excellent point. But what if the

Re: [agi] would anyone want to use a commonsense KB?

2008-03-05 Thread YKY (Yan King Yin)
On 3/4/08, Ben Goertzel [EMAIL PROTECTED] wrote: Rather, I think the right goal is to create an AGI that, in each context, can be as ambiguous as it wants/needs to be in its representation of a given piece of information. Ambiguity allows compactness, and can be very valuable in this regard.

Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread YKY (Yan King Yin)
I'm increasingly convinced that the human brain is not a statistical learner, but a logical learner. There are many examples of humans learning concepts/rules from one or two examples, rather than thousands of examples. So I think that at a high level, AGI should be logic-based. But it would be

Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread YKY (Yan King Yin)
On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote: I think Ben's text mining approach has one big flaw: it can only reason about existing knowledge, but cannot generate new ideas using words / concepts There is a substantial amount of literature that claims that *humans* can't generate new

Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread YKY (Yan King Yin)
On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote: Good example, but how about: language is open-ended, period and capable of infinite rather than myriad interpretations - and that open-endedness is the whole point of it?. Simple example much like yours : handle. You can attach words for objects

Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread YKY (Yan King Yin)
My latest thinking tends to agree with Matt that language and common sense are best learnt together. (Learning language before common sense is impossible / senseless). I think Ben's text mining approach has one big flaw: it can only reason about existing knowledge, but cannot generate new ideas

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread YKY (Yan King Yin)
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: I'm going to try and elucidate my approach to building an intelligent system, in a round about fashion. This is the problem I am trying to solve. Imagine you are designing a computer system to solve an unknown problem, and you have these

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread YKY (Yan King Yin)
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: Note I want something different than computational universality. E.g. Von Neumann architectures are generally programmable, Harvard architectures aren't, as they can't be reprogrammed at run time. It seems that you want to build the AGI from

Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread YKY (Yan King Yin)
On 2/27/08, Ben Goertzel [EMAIL PROTECTED] wrote: YKY I thought you were talking about the extraction of information that is explicitly stated in online text. Of course, inference is a separate process (though it may also play a role in direct information extraction). I don't think the

Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread YKY (Yan King Yin)
On 2/25/08, Ben Goertzel [EMAIL PROTECTED] wrote: Hi, There is no good overview of SMT so far as I know, just some technical papers... but SAT solvers are not that deep and are well reviewed in this book... http://www.sls-book.net/ But that's *propositional* satisfiability, the results may

Re: [agi] reasoning knowledge

2008-02-26 Thread YKY (Yan King Yin)
On 2/15/08, Pei Wang [EMAIL PROTECTED] wrote: To me, the following two questions are independent of each other: *. What type of reasoning is needed for AI? The major answers are: (A): deduction only, (B) multiple types, including deduction, induction, abduction, analogy, etc. *. What type

Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread YKY (Yan King Yin)
On 2/26/08, Ben Goertzel [EMAIL PROTECTED] wrote: Obviously, extracting knowledge from the Web using a simplistic SAT approach is infeasible However, I don't think it follows from this that extracting rich knowledge from the Web is infeasible It would require a complex system involving at

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread YKY (Yan King Yin)
On 2/19/08, Ben Goertzel [EMAIL PROTECTED] wrote: If we need a KB orders of magnitude larger to make that approach work, doesn't that mean we should use another approach? But do you agree that a KB orders of magnitude larger is required for all AGI, regardless of *how* the knowledge is

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread YKY (Yan King Yin)
On 2/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: Feeding all the ambiguous interpretations of a load of sentences into a probabilistic logic network, and letting them get resolved by reference to each other, is a sort of search for the most likely solution of a huge system of simultaneous

Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread YKY (Yan King Yin)
On 2/19/08, Stephen Reed [EMAIL PROTECTED] wrote: Pei: Resolution-based FOL on a huge KB is intractable. Agreed. However Cycorp spend a great deal of programming effort (i.e. many man-years) finding deep inference paths for common queries. The strategies were: prune the rule set according

Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread YKY (Yan King Yin)
On 2/19/08, Matt Mahoney [EMAIL PROTECTED] wrote: Why would this approach succeed where Cyc failed? Cyc paid people to build the knowledge base. Then when they couldn't sell it, they tried giving it away. Still, nobody used it. For an AGI to be useful, people have to be able to communicate

Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread YKY (Yan King Yin)
On 2/19/08, Pei Wang [EMAIL PROTECTED] wrote: A purely resolution-based inference engine is mathematically elegant, but completely impractical, because after all the knowledge is transformed into the clause form required by resolution, most of the semantic information in the knowledge

Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread YKY (Yan King Yin)
On 2/18/08, Mike Tintner [EMAIL PROTECTED] wrote: I believe I offered the beginning of a v. useful way to conceive of this whole area in an earlier post. The key concept is inventory of the world. First of all, what is actually being talked about here is only a VERBAL/SYMBOLIC KB. One of

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/17/08, Lukasz Stafiniak [EMAIL PROTECTED] wrote: On Feb 17, 2008 2:11 PM, Russell Wallace [EMAIL PROTECTED] wrote: Before you embark on such a project, it might be worth first looking closely at the question of why Cyc hasn't been useful, so that you don't end up making the same

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/17/08, Pei Wang [EMAIL PROTECTED] wrote: There is no similar plan for OpenNARS. When the time comes, it probably will get its knowledge, in a mixed manner, (1) from various existing sources of formatted knowledge, including Cyc, (2) from the Internet, using information

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Pei Wang [EMAIL PROTECTED] wrote: I raised this issue before: by logical rules, do you mean inference rules (like Derive conclusion C from premises A and B), or implication statements (like If A and B are true, then C is true)? These two are very often confused with each

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: I strongly suspect there is enough information in the text online for an AGI to learn that water flows downhill in most circumstances, without having explicit grounding... I strongly suspect the contrary =) for the simple reason that adults

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Pei Wang [EMAIL PROTECTED] wrote: Only statements contained in a KB as content have truth-value, or need acceptance. An inference rule is part of the system, which just applies, and does not need acceptance within the system. An inference rule has no truth-value. If it is still

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
Yesterday I didn't give a clear explanation of what I mean by rules, so here is a better try: 1. If I see a turkey inside the microwave, I immediately draw the conclusion that it's NOT empty. 2. However, if I see some ketchup on the inside walls of the microwave, I'd say it's dirty but it's
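
To make the distinction concrete, here is roughly the kind of rule I mean, written informally as defaults with exceptions (illustration only, not my actual KB syntax):

    empty(microwave)          holds by default, unless some object is seen inside it
    see(turkey, inside(M))    ->  not empty(M)
    see(ketchup, walls(M))    ->  dirty(M)        (says nothing about emptiness)

The second and third rules are ordinary implications; the first is a default that gets retracted when contradicted, which is why I call these commonsense rules rather than plain facts.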

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Stephen Reed [EMAIL PROTECTED] wrote: Yes, I would be very glad to incorporate any content that I can then republish using a Wikipedia-compatible license, e.g. GNU Free Documentation License. Any weaker license, such as Apache, BSD would be OK too. Heh... I was thinking to make

Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Matt Mahoney [EMAIL PROTECTED] wrote: Heh... I think you could give away read-only access and charge people to update it. Information has negative value, you know. Well, the idea is to ask lots of people to contribute to the KB, and pay them with virtual credits. (I expect such

[agi] OpenCog's business model

2008-01-19 Thread YKY (Yan King Yin)
How much of OpenCog will finally be open source? 100%? Or will it be partially open? IMO the partial-open model still has the problems of both open and closed source models: the open parts cannot make much money, and the closed parts cannot receive public input. Though, I appreciate that Ben

Re: [agi] OpenCog

2007-12-29 Thread YKY (Yan King Yin)
On 12/28/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote: IMHO more important than working towards contributing clean code would be to *publish the (required) interfaces for the modules as well as give standards for/details on the knowledge representation format*. I am sure that you have those

Re: [agi] OpenCog

2007-12-28 Thread YKY (Yan King Yin)
OpenCog is definitely a positive thing to happen in the AGI scene. It's been all vaporware so far. I wonder what would be the level of participation? Also I think it's going to increase the chance of a safe takeoff, by exposing users and developers gradually to AGI. But we also need to have

Re: [agi] NL interface

2007-12-27 Thread YKY (Yan King Yin)
On Dec 21, 2007 11:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote: What is the goal of your system? What application? Sorry about the delay, and Merry Xmas =) The goal is to provide an easy input for AGI, temporarily until full NL capacity is achievable. I guess most AGIers would have realized by

Re: [agi] NL interface

2007-12-21 Thread YKY (Yan King Yin)
On 12/21/07, Stephen Reed [EMAIL PROTECTED] wrote: The above propositions have terms expressed in RDF, but are presented in the lispy fashion desired by the original Fluid Construction Grammar implementers. (predicate subject object). Note that I include discourse axioms (e.g.
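
For anyone unfamiliar with that style, a made-up illustration (not Stephen's actual output) of "the ball is red" as (predicate subject object) propositions might look like:

    (isa ball-1 Ball)
    (colorOf ball-1 Red)
    (definiteReference ball-1 discourse-2)   ; a discourse axiom marking "the"

where ball-1 and discourse-2 are reified terms that other propositions can refer back to.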

[agi] NL interface

2007-12-20 Thread YKY (Yan King Yin)
I'm planning to write an NL interface that uses templates to eliminate parsing and thus achieve 100% accuracy for a restricted subset of English (for example, asking the user to disambiguate parts of speech, syntax etc). It seems that such a program doesn't exist yet. It looks like AGI-level NLP
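
A minimal sketch of what I mean by template-based input, with two made-up templates (illustration only, not a real grammar):

    import re

    # Each template maps a rigid English pattern to a logical-form builder.
    TEMPLATES = [
        (re.compile(r"^(\w+) is a (\w+)\.?$", re.I),
         lambda m: ("isa", m.group(1), m.group(2))),
        (re.compile(r"^(\w+) likes (\w+)\.?$", re.I),
         lambda m: ("likes", m.group(1), m.group(2))),
    ]

    def parse(sentence):
        # Return a logical form, or None so the UI can ask the user to rephrase.
        for pattern, build in TEMPLATES:
            m = pattern.match(sentence.strip())
            if m:
                return build(m)
        return None

    print(parse("Fido is a dog."))            # ('isa', 'Fido', 'dog')
    print(parse("Time flies like an arrow"))  # None -> ask the user to rephrase

Because a sentence either matches a template exactly or is bounced back to the user, there is no parse ambiguity to resolve, which is where the 100% accuracy comes from.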

Re: [agi] NL interface

2007-12-20 Thread YKY (Yan King Yin)
On 12/21/07, Stephen Reed [EMAIL PROTECTED] wrote: Hi YKY, I hope that by this time next year the Texai project will have a robust English parser suitable for your project. I am working in collaboration with the Air Force Research Laboratory's Synthetic Teammate Project

Re: [agi] List of Java AI tools librarie

2007-12-17 Thread YKY (Yan King Yin)
Thanks a lot, that's good stuff! Let me add: JavaBayes: *JavaBayes* is a system that handles Bayesian networks: it calculates marginal probabilities and expectations, produces explanations, performs robustness analysis, and allows the user to import, create, modify and export networks.
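
For readers who have not used a Bayesian-network package before, the core operation is just summing the joint distribution over the unobserved variables; a toy two-node example in Python (my own illustration, nothing to do with JavaBayes internals):

    # Two-node network: Rain -> WetGrass
    p_rain = {True: 0.2, False: 0.8}
    p_wet_given_rain = {True: 0.9, False: 0.2}   # P(WetGrass=True | Rain)

    # Marginal P(WetGrass = True) = sum over Rain of P(Rain) * P(WetGrass | Rain)
    p_wet = sum(p_rain[r] * p_wet_given_rain[r] for r in (True, False))
    print(p_wet)   # 0.2*0.9 + 0.8*0.2 = 0.34

Packages like JavaBayes do the same computation with smarter exact-inference algorithms so that it scales to larger networks.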

Re: [agi] Where are the women?

2007-11-29 Thread YKY (Yan King Yin)
My collaborative platform is designed mainly with the aim of minimizing discrimination (be it racial, gender, nationalistic, etc) by being open and democratic. If there're other ideas that may help reduce discrimination, I'd be eager to try them. My observation is that when things are not

Re: [agi] Upper Ontologies

2007-11-10 Thread YKY (Yan King Yin)
Linas: I agree with your approach: don't use formal ontologies. I have not implemented my system yet, but I suspect that not using ontologies may decrease inference speed, especially for subsumption-type queries. Hopefully the price to pay is not too great. YKY - This list is sponsored

[agi] question about algorithmic search

2007-11-10 Thread YKY (Yan King Yin)
I have the intuition that Levin search may not be the most efficient way to search programs, because it operates very differently from human programming. I guess better ways to generate programs can be achieved by imitating human programming -- using techniques such as deductive reasoning and
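
For reference, the quantity Levin search is built around (standard definitions, stated from memory):

    Kt(x) = min over programs p that print x of  [ length(p) + log2 time(p) ]

and Levin's universal search finds a solution x in time on the order of 2^Kt(x), i.e. optimal only up to a (huge) multiplicative constant. That constant is exactly why imitating human programming heuristics might do better in practice.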

Re: [agi] How valuable is Solomonoff Induction for real world AGI?

2007-11-08 Thread YKY (Yan King Yin)
My impression is that most machine learning theories assume a search space of hypotheses as a given, so it is out of their scope to compare *between* learning structures (eg, between logic and neural networks). Algorithmic learning theory - I don't know much about it - may be useful because it

Re: [agi] How valuable is Solomonoff Induction for real world AGI?

2007-11-08 Thread YKY (Yan King Yin)
Thanks for the input. There's one perplexing theorem in the paper about the algorithmic complexity of programming: the language doesn't matter that much, i.e., the algorithmic complexity of a program in different languages differs only by a constant. I've heard something similar about the
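
The theorem I mean is the invariance theorem of algorithmic information theory (stated from memory): for any two universal languages L1 and L2 there is a constant c, independent of the string x, with

    K_L1(x)  <=  K_L2(x) + c(L1, L2)

where c is essentially the length of an interpreter for L2 written in L1. The constant can of course be large in practice, which is why the choice of language still matters for us.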

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread YKY (Yan King Yin)
On 11/8/07, Linas Vepstas [EMAIL PROTECTED] wrote: And the serious contenders are a handful of small companies that seem unlikely to fill out a self-assessment status report card revealing their weaknesses and strengths to the competition. If you don't find some like-minded partners, you may not

Re: [agi] Connecting Compatible Mindsets

2007-11-04 Thread YKY (Yan King Yin)
I think we can use the AGIRI wiki for this purpose: http://www.agiri.org/wiki/AGI_Projects After all, we've been using this list for several years, and the list has maintained a fairly neutral stance throughout. My entry for G0 is here: http://www.agiri.org/wiki/Generic_AI AGIRI should let wiki

Re: [agi] NLP + reasoning?

2007-11-01 Thread YKY (Yan King Yin)
On 11/1/07, Mike Tintner [EMAIL PROTECTED] wrote: Vladimir: AI that is capable of general learning should be able to also learn language processing, from the letters up. Sounds wonderful. Anyone attempting that or even close? In my G0 architecture, NLP is based on logical deduction and

Re: [agi] NLP: natural aspect of AGI?

2007-07-07 Thread YKY (Yan King Yin)
On 7/7/07, Vladimir Nesov [EMAIL PROTECTED] wrote: His argument is primarily against underestimation of problem of multiplicity of representations, specific incarnation of lurking combinatorial explosion problem. Under logic-based paradigm, combinatorial explosion is due to high branching

Re: [agi] NLP: natural aspect of AGI?

2007-07-06 Thread YKY (Yan King Yin)
On 6/30/07, Vladimir Nesov [EMAIL PROTECTED] wrote: NLP is often regarded as some sort of peripheral I/O system, potentially allowing AGI to communicate, but in itself not part of AGI, not even worth developing early on. But maybe NLP can be just an aspect of AGI reasoning, and can be taught

Re: [agi] NLP: natural aspect of AGI?

2007-07-06 Thread YKY (Yan King Yin)
On 7/6/07, Vladimir Nesov [EMAIL PROTECTED] wrote: YKY I know what you're talking about -- using NL directly as a KR language. I suspect it's not that. Problems you talk about are specific for particular bet on system bootstrap method. It assumes that you can code enough capability in system

Re: [agi] Re: HTM vs. IHDR

2007-06-29 Thread YKY (Yan King Yin)
On 6/29/07, Mike Tintner [EMAIL PROTECTED] wrote: YKY: I've talked to John Weng many times before, and I found that his AGI has some problems but he wasn't very eager to talk about them. MT: Is anyone in AGI eager to talk about their problems? My impression is it's a universal failing.

Re: [agi] Extra-terrestrial Computer Language (article)

2007-06-27 Thread YKY (Yan King Yin)
On 6/27/07, Keta Meme [EMAIL PROTECTED] wrote: This is from an article about a supposedly extra-terrestrial device. I do not know how real or fictional is the device, but I think this section relates to AGI and software in an entertaining sci-fi way. http://isaaccaret.fortunecity.com/

Re: [agi] AGI introduction

2007-06-26 Thread YKY (Yan King Yin)
Hi Pei, I'm giving a presentation to CityU of Hong Kong next week, on AGI in general and about my project. Can I use your listing of representative AGIs in one slide? Also, if I spend 1 slide to talk about NARS, what phrases would you recommend? ;) Thanks a lot! YKY - This list is

Re: [agi] Pure reason is a disease.

2007-06-20 Thread YKY (Yan King Yin)
On 6/19/07, Eric Baum [EMAIL PROTECTED] wrote: The modern feature is that whole peoples have chosen to reproduce at half replacement level. In case you haven't thought about the implications of that, that means their genes, for example, are vanishing from the pool by a factor of 2 every 20

Re: [agi] my project

2007-06-16 Thread YKY (Yan King Yin)
On 6/16/07, Panu Horsmalahti [EMAIL PROTECTED] wrote: Perhaps people are more interested in the design of this AGI project, and the likelihood of it ever creating a human-level AGI. Also, are design changes chosen democratically, by a leader/lead designer (you?), or how? I'm also concerned,

Re: [agi] Books

2007-06-15 Thread YKY (Yan King Yin)
On 6/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'll try to answer this and Mike Tintner's question at the same time. The typical GOFAI engine over the past decades has had a layer structure something like this: Problem-specific assertions Inference engine/database Lisp on top of the

Re: [agi] AGI Consortium

2007-06-14 Thread YKY (Yan King Yin)
On 6/13/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: YKY, I think there are two better schemes for collaboration than the one you've proposed: -- the traditional for-profit company with equity and options based compensation determined by a committee of trusted individuals -- the

Re: [agi] AGI Consortium

2007-06-14 Thread YKY (Yan King Yin)
On 6/14/07, John G. Rose [EMAIL PROTECTED] wrote: Even if YKY was to succeed in coming up with a new hybrid organizational structure, which could likely happen as there are definitely opportunities for innovation judging by existing structures, there still needs to be the traditional open

Re: [agi] AGI Consortium

2007-06-12 Thread YKY (Yan King Yin)
On 6/12/07, Mark Waser [EMAIL PROTECTED] wrote: If you think my scheme cannot be fair then the alternative of traditional management can only be worse (in terms of fairness, which in turn affects the quality of work being done). The situation is quite analogous to that between a state-command

Re: [agi] AGI Consortium

2007-06-12 Thread YKY (Yan King Yin)
On 6/13/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: A successful AI could do a superior job of dividing up the credit from available historical records. (Anyone who doesn't spot this is not thinking recursively.) During the pre-AGI interim, people have got to make money and to enjoy

Re: [agi] AGI Consortium

2007-06-12 Thread YKY (Yan King Yin)
On 6/13/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: I wouldn't bother working with anyone who was seriously worried over who got the credit for building a Singularity-class AI - no other kind matters. There are two reasons for this, not just the obvious one. Come on, there're no obvious

[agi] META: spam? ZONEALARM CHALLENGE

2007-06-12 Thread YKY (Yan King Yin)
I keep getting the following message whenever I post to [agi]. It looks like spam. Can we get rid of it? Or is it just me? YKY -- Forwarded message -- From: [EMAIL PROTECTED] [EMAIL PROTECTED] Date: Jun 13, 2007 12:19 PM Subject: Re: Re: [agi] AGI Consortium [ZONEALARM

Re: [agi] AGI Consortium

2007-06-11 Thread YKY (Yan King Yin)
On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote: I'm sorry about the confusion. Let me correct by saying: it *is* to your advantage to exaggerate your contributions, but your peers won't allow it. Cool. I'll then move back to my other point that is probably better phrased as I don't

Re: [agi] AGI Consortium

2007-06-11 Thread YKY (Yan King Yin)
An additional idea: each member's vote could be weighted by the member's total amount of contributions. This way, we can establish a network of genuine contributors via self-organization, and protect against mischief-makers, nonsense, or sabotage, etc. YKY - This list is sponsored by
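
A minimal sketch of what contribution-weighted voting could look like (made-up names and numbers, illustration only):

    def weighted_vote(votes, contributions):
        # votes: member -> choice; contributions: member -> total credits earned
        tally = {}
        for member, choice in votes.items():
            tally[choice] = tally.get(choice, 0) + contributions.get(member, 0)
        return max(tally, key=tally.get)

    votes = {"alice": "yes", "bob": "no", "carol": "yes"}
    contributions = {"alice": 120, "bob": 300, "carol": 50}
    print(weighted_vote(votes, contributions))   # 'no' -- bob's credit outweighs the rest

New members with zero recorded contributions carry zero weight, so mischief-makers would first have to do genuine work before their votes count for anything.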

Re: [agi] AGI Consortium

2007-06-10 Thread YKY (Yan King Yin)
On 6/9/07, Mark Waser [EMAIL PROTECTED] wrote: Think: if you have contributed something, it'd be in your best interest to give accurate estimates rather than exaggerate or depreciate them Why wouldn't it be to my advantage to exaggerate my contributions? But your peers in the network won't

Re: [agi] AGI Consortium

2007-06-10 Thread YKY (Yan King Yin)
Obviously innovation comes from all walks of life, be they opensource or commercial people. But some entrepreneurs are more capable of appropriating their inventions, eg Edison did *not* invent the light bulb, but he got famous for commercializing and patenting it. Many people simply don't have

Re: [agi] AGI Consortium

2007-06-10 Thread YKY (Yan King Yin)
On 6/10/07, Mark Waser [EMAIL PROTECTED] wrote: YKY Think: if you have contributed something, it'd be in your best interest to give accurate estimates rather than exaggerate or depreciate them MW Why wouldn't it be to my advantage to exaggerate my contributions? YKY But your peers in the

Re: [agi] AGI Consortium

2007-06-10 Thread YKY (Yan King Yin)
On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote: I'm going to temporarily ignore my doubts about accurate assessments to try to get my initial question answered yet again. Why wouldn't it be to my advantage to exaggerate my contributions? I'm sorry about the confusion. Let me

Re: [agi] AGI Consortium

2007-06-08 Thread YKY (Yan King Yin)
I noticed a serious problem with credit attribution and allowing members to branch outside of the mother project. For example, there may be a collection of contributions, from many members, that is worth $C in the consortium. Suppose someone decides to start an external project, then adding $c

Re: [agi] AGI Consortium

2007-06-08 Thread YKY (Yan King Yin)
On 6/8/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: I noticed a serious problem with credit attribution and allowing members to branch outside of the mother project. For example, there may be a collection of contributions, from many members, that is worth $C in the consortium. Suppose

Re: [agi] Books

2007-06-08 Thread YKY (Yan King Yin)
On 6/7/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote: Reasoning about Uncertainty (Paperback) by Joseph Y. Halpern BTW, the .chm version of this book can be easily obtained on the net, as are many others you listed... I also recommend J Pearl's 2 books (Probabilistic Reasoning and Causality).

Re: [agi] Open AGI Consortium

2007-06-06 Thread YKY (Yan King Yin)
On 6/6/07, Joel Pitt [EMAIL PROTECTED] wrote: YKY, A suggestion, if you really are motivated by $$ and getting rich, why not focus on other much easier problems that will still potentially make you bucket-loads of money? I have this slightly crazy idea of selling the project's AGI prototype

[agi] about AGI designers

2007-06-06 Thread YKY (Yan King Yin)
There're several reasons why AGI teams are fragmented and AGI designers don't want to join a consortium: A. believe that one's own AGI design is superior B. want to ensure that the global outcome of AGI is friendly C. want to get bigger financial rewards (A) does not really prevent a founder

Re: [agi] about AGI designers

2007-06-06 Thread YKY (Yan King Yin)
Hi Ben and Peter Voss, I understand that transitioning to a Web 2.0 style will not be easy for well-established projects. But sometimes a company needs to cannibalize its own past... Because they have fewer employees, conventional companies can only explore a small subspace of the huge AGI

Re: [agi] about AGI designers

2007-06-06 Thread YKY (Yan King Yin)
On 6/7/07, William Pearson [EMAIL PROTECTED] wrote: There is also D. The other members of the consortiums philosophical approaches to AGI share little in common with your own and the time spent trying to communicate with the consortium about which class of system to investigate would be better

Re: [agi] credit attribution method

2007-06-05 Thread YKY (Yan King Yin)
On 6/5/07, Panu Horsmalahti [EMAIL PROTECTED] wrote: Now, all we need to do is find 2 AGI designers who agree on something. My guess is that *after* people see and discuss each other's ideas, they'll be more likely to change their views and be able to synthesize them. At first we may see a
