Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
Richard, I don't think Shane and Marcus's overview of definitions-of-intelligence is poor quality. I think it is just doing something different than what you think it should be doing. The overview is exactly that: A review of what researchers have said about the definition of intelligence.

[agi] Re: [singularity] The establishment line on AGI

2008-01-14 Thread Benjamin Goertzel
Also, this would involve creating a close-knit community through conferences, journals, common terminologies/ontologies, email lists, articles, books, fellowships, collaborations, correspondence, research institutes, doctoral programs, and other such devices. (Popularization is not on the

Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
Your job is to be diplomatic. Mine is to call a spade a spade. ;-) Richard Loosemore I would rephrase it like this: Your job is to make me look diplomatic ;-p

Re: [agi] Ben's Definition of Intelligence

2008-01-12 Thread Benjamin Goertzel
On definitions of intelligence, the canonical reference is http://www.vetta.org/shane/intelligence.html which lists 71 definitions. Apologies if someone already pointed out Shane's page in this thread, I didn't read every message carefully. An AGI definition of intelligence surely has, by
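
For reference, the single formal measure that Legg and Hutter distilled out of that catalogue of definitions -- their "universal intelligence" of an agent pi -- can be written as:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E ranges over computable environments, K(mu) is the Kolmogorov complexity of environment mu, and V^pi_mu is the agent's expected cumulative reward in mu: intelligence as average performance across all environments, weighted toward the simple ones.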

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
I'll be a lot more interested when people start creating NLP systems that are syntactically and semantically processing statements about words, sentences and other linguistic structures and adding syntactic and semantic rules based on those sentences. Depending on exactly what you mean by

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote: On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote: I'll be a lot more interested when people start creating NLP systems that are syntactically and semantically processing statements *about* words, sentences

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
Hi, Yes, the Texai implementation of Incremental Fluid Construction Grammar follows the phrase structure approach in which leaf lexical constituents are grouped into a structure (i.e. construction) hierarchy. Yet, because it is incremental and thus cognitively plausible, it should scale to

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
On Jan 10, 2008 10:03 PM, Matt Mahoney [EMAIL PROTECTED] wrote: All this discussion of building a grammar seems to ignore the obvious fact that in humans, language learning is a continuous process that does not require any explicit encoding of rules. I think either your model should learn

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
And how would a young child or foreigner interpret on the Washington Monument or shit list? Both are physical objects and a book *could* be resting on them. Sorry, my shit list is purely mental in nature ;-) ... at the moment, I maintain a task list but not a shit list... maybe I need to

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
,you) Note that the RelEx output is already abstracted and semantified compared to what comes out of a grammar parser. -- Ben On Jan 9, 2008 5:59 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote: Can you give about ten examples of rules? (That would answer a lot of my

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
Can you give about ten examples of rules? (That would answer a lot of my questions above) That would just lead to a really long list of questions that I don't have time to answer right now. In a month or two, we'll write a paper on the rule-encoding approach we're using, and I'll post it to

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
Processing a dictionary in a useful way requires quite sophisticated language understanding ability, though. Once you can do that, the hard part of the problem is already solved ;-) Ben On Jan 9, 2008 7:22 PM, William Pearson [EMAIL PROTECTED] wrote: On 09/01/2008, Benjamin Goertzel [EMAIL

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 9:12 AM, Mike Tintner [EMAIL PROTECTED] wrote: Robert, Look, the basic reality is that computers have NOT yet been creative in any significant way, and have NOT yet achieved AGI - general intelligence, - or indeed any significant rulebreaking adaptivity; (If you disagree,

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 12:08 PM, David Butler [EMAIL PROTECTED] wrote: Would two AGI's with the same initial learning program, same hardware in a controlled environment (same access to a specific learning base-something like an encyclopedia) learn at different rates and excel in different tasks? Yes

Re: [agi] Re: AGI-08 - Call for Participation

2008-01-07 Thread Benjamin Goertzel
Nothing of that nature is planned at present ... as we the conference organizers are rather busy with other stuff, we've been pretty much fully whelmed with the organization of the First Life conference... It might be fun to do an in-world AGI meet-up a couple weeks after AGI-08, with an aim of

Re: [agi] Re: AGI-08 - Call for Participation

2008-01-07 Thread Benjamin Goertzel
I'll forward this request to those who will be handling such things... thx ben On Jan 7, 2008 3:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: Ben, I'm certainly not in a position to ask for it, but if it's possible, can some kind of microphone be used during presentations at agi-08 (if

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 5, 2008 10:52 PM, Mike Tintner [EMAIL PROTECTED] wrote: I think I've found a simple test of cog. sci. I take the basic premise of cog. sci. to be that the human mind - and therefore its every activity, or sequence of action - is programmed. No. This is one perspective taken by some

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
I don't really understand what you mean by programmed ... nor by creative You say that, according to your definitions, a GA is programmed and ergo cannot be creative... How about, for instance, a computer simulation of a human brain? That would be operated via program code, hence it would be

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 6, 2008 4:00 PM, Mike Tintner [EMAIL PROTECTED] wrote: Ben, Sounds like you may have missed the whole point of the test - though I mean no negative comment by that - it's all a question of communication. A *program* is a prior series or set of instructions that shapes and determines

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
Mike, The short answer is that I don't believe that computer *programs* can be creative in the hard sense, because they presuppose a line of enquiry, a predetermined approach to a problem - ... But I see no reason why computers couldn't be briefed rather than programmed, and freely associate

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
If you believe in principle that no digital computer program can ever be creative, then there's no point in me or anyone else rambling on at length about their own particular approach to digital-computer-program creativity... One question I have is whether you would be convinced that digital

Re: [agi] NL interface

2007-12-30 Thread Benjamin Goertzel
Matt, I agree w/ your question... I actually think KB's can be useful in principle, but I think they need to be developed in a pragmatic way, i.e. where each item of knowledge added can be validated via how useful it is for helping a functional intelligent agent to achieve some interesting

Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 5:59 AM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: OpenCog is definitely a positive thing to happen in the AGI scene. It's been all vaporware so far. Yes, it's all vaporware so far ;-) On the other hand, the code we hope to release as part of OpenCog actually exists, but

Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: I wish you much luck with your own approach. And, I would imagine that if you create a software framework supporting your own approach in a convenient way, my own currently favored AI

Re: [agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Loosemore wrote: I am sorry, but I have reservations about the OpenCog project. The problem of building an open-source AI needs a framework-level tool that is specifically designed to allow a wide variety of architectures to be described and expressed. OpenCog, as far as I can see, does not

[agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Re the recent discussion of OpenCog -- this recent post I made to the OpenCog mailing list may perhaps help clarify the intentions underlying the project further. -- Ben -- Forwarded message -- From: Benjamin Goertzel [EMAIL PROTECTED] Date: Dec 27, 2007 11:07 AM Subject: Re

Re: [agi] BMI/BCI Growing Fast

2007-12-26 Thread Benjamin Goertzel
I think that at first sight this goes to support my position in the original argument with Ben- namely that there are all kinds of ways to get at or read minds, and there is now an increasing momentum to do that. Being able to read the stream of subvocalizations coming out from a person's

[agi] AGI, NLP, embodiment and gesture

2007-12-26 Thread Benjamin Goertzel
Hi all, Here you'll find a paper http://goertzel.org/new_research/WCCI_AGI.pdf that I've submitted to the WCCI 2008 Special Session on Human-level AI. It tries to summarize the big picture about how advanced AI can be achieved via synthesizing NLP and virtual embodiment... The paper refers to

[agi] Mizar translated to TPTP !

2007-12-22 Thread Benjamin Goertzel
For those interested in automated theorem-proving, I'm pleased to announce a major advance in tools has occurred... The Mizar library of formalized math has finally been translated into a sensible format, usable for training automated theorem-proving systems ;-) Josef Urban informed me that
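
For readers who haven't seen the target format: TPTP's first-order form (fof) encodes one named, role-tagged formula per statement. A toy example (not drawn from the Mizar translation itself):

    fof(all_men_mortal, axiom, ![X]: (man(X) => mortal(X))).
    fof(socrates_man, axiom, man(socrates)).
    fof(socrates_mortal, conjecture, mortal(socrates)).

A prover is asked to derive the conjecture from the axioms; the Mizar translation supplies a large corpus of such problems for training and benchmarking.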

Re: [agi] BMI/BCI Growing Fast

2007-12-15 Thread Benjamin Goertzel
I would add that the Chinese universities are extremely eager to recruit Western professors to lead research labs in AI and other areas. Hugo DeGaris relocated there a year or so ago, and is quite relieved to be supplied with a bunch of excellent research assistants and loads of computational

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike, My comment is that this is GREAT research and development, but, for the near and probably medium future is very likely to be about perception and action rather than cognition. I.e., we are sort of on the verge of understanding how to hook up new sensors to the brain, and hook the brain up

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike wrote: Personally, my guess is that serious mindreading machines will be a reality in the not too distant future - before AGI and seriously autonomous mobile robots. No way. Tell that to the neuroscientists in your local university neuro lab, and they'll get a good laugh ;-) The

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Hi, From Bob Mottram on the AGI list: However, I'm not expecting to see the widespread cyborgisation of human society any time soon. As the article suggests the first generation implants are all devices to fulfill some well defined medical need, and will have to go through all the usual

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Bear in mind that science has used very little imagination here to date. Science only started studying consciousness ten years ago. It still hasn't started studying Thought - the actual contents of consciousness: the streams of thought inside people's heads. In both cases, the reason has been

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike: Making the general public smarter is not in the best interest of government, who wants to keep us fat dumb and (relatively) happy (read: distracted). If we're not making people smarter with currently available resources, why would we invest in research to discover expensive new

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Is China pushing its people into being smarter? Are they giving incentives beyond the US-style capitalist reasons for being smart? The incentive is that if you get smart enough, you may figure out a way to get out of China ;-) Thus, they let the top .01% out, so as to keep the rest of the

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
, Benjamin Goertzel [EMAIL PROTECTED] wrote: Is China pushing its people into being smarter? Are they giving incentives beyond the US-style capitalist reasons for being smart? The incentive is that if you get smart enough, you may figure out a way to get out of China ;-) Thus

Re: [agi] The Function of Emotions is Torture

2007-12-12 Thread Benjamin Goertzel
Mike, in case you're curious I wrote down my theory of emotions here http://www.goertzel.org/dynapsyc/2004/Emotions.htm (an early version of text that later became a chapter in The Hidden Pattern) Among the conclusions my theory of emotions leads to are, as stated there: * AI systems

Re: [agi] The same old explosions?

2007-12-11 Thread Benjamin Goertzel
Self-organizing complexity and computational complexity are quite separate technical uses of the word complexity, though I do think there are subtle relationships. As an example of a relationship btw the two kinds of complexity, look at Crutchfield's work on using formal languages to model the

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
Thanks Bob. But I meant, it looks more likely that robots will achieve - and have already taken the first concrete steps to achieve - the goals of AGI - the capacity to learn a range of abilities and activities. Can you point to any single robot that has demonstrated the capability to learn a

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
Yes I expect to see more narrow AI robotics in future, but as time goes on there will be pressures to consolidate multiple abilities into a single machine. Ergonomics dictates that people will only accept a limited number of mobile robots in their homes or work spaces. Physical space is at a

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
So I reckon roboticists ARE actually focussed on an AGI challenge - whereas, as I've pointed out before, there is nothing comparable in pure AGI. To my knowledge, none of the work on the ICRA Robotic Challenge is at this point taking a strong AGI approach And with all those millions of

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 7:09 AM, Mike Tintner [EMAIL PROTECTED] wrote: Matt: AGI research needs special hardware with massive computational capabilities. Could you give an example or two of the kind of problems that your AGI system(s) will need such massive capabilities to solve? It's so good -

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 10:21 AM, Bob Mottram [EMAIL PROTECTED] wrote: If I had 100 of the highest specification PCs on my desktop today (and it would be a big desk!) linked via a high speed network this wouldn't help me all that much. Provided that I had the right knowledge I think I could produce a

Re: [agi] None of you seem to be able ...

2007-12-07 Thread Benjamin Goertzel
On Dec 6, 2007 8:06 PM, Ed Porter [EMAIL PROTECTED] wrote: Ben, To the extent it is not proprietary, could you please list some of the types of parameters that have to be tuned, and the types, if any, of Loosemore-type complexity problems you envision in Novamente or have experienced with

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
Clearly the brain works VASTLY differently and more efficiently than current computers - are you seriously disputing that? It is very clear that in many respects the brain is much less efficient than current digital computers and software. It is more energy-efficient by and large, as Read

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote: Ben: To publish your ideas in academic journals, you need to ground them in the existing research literature, not in your own personal introspective observations. Big mistake. Think what would have happened if Freud had

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
There is no doubt that complexity, in the sense typically used in dynamical-systems-theory, presents a major issue for AGI systems. Any AGI system with real potential is bound to have a lot of parameters with complex interdependencies between them, and tuning these parameters is going to be a

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
Conclusion: there is a danger that the complexity that even Ben agrees must be present in AGI systems will have a significant impact on our efforts to build them. But the only response to this danger at the moment is the bare statement made by people like Ben that I do not think that the

Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Benjamin Goertzel
Show me ONE other example of the reverse engineering of a system in which the low level mechanisms show as many complexity-generating characteristics as are found in the case of intelligent systems, and I will gladly learn from the experience of the team that did the job. I do not believe

Re: [agi] None of you seem to be able ...

2007-12-05 Thread Benjamin Goertzel
Tintner wrote: Your paper represents almost a literal application of the idea that creativity is ingenious/lateral. Hey it's no trick to be just ingenious/lateral or fantastic. Ah ... before creativity was what was lacking. But now you're shifting arguments and it's something else that is

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
More generally, I don't perceive any readiness to recognize that the brain has the answers to all the many unsolved problems of AGI - Obviously the brain contains answers to many of the unsolved problems of AGI (not all -- e.g. not the problem of how to create a stable goal system under

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: [snip] And neither you nor anyone else has ever made a cogent argument that emulating the brain is the ONLY route to creating powerful AGI. The closest thing to such an argument that I've seen

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
Thus: building a NL parser, no matter how good it is, is of no use whatsoever unless it can be shown to emerge from (or at least fit with) a learning mechanism that allows the system itself to generate its own understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A MECHANISM

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
Richard, Well, I'm really sorry to have offended you so much, but you seem to be a mighty easy guy to offend! I know I can be pretty offensive at times; but this time, I wasn't even trying ;-) The argument I presented was not a conjectural assertion, it made the following coherent case:

[agi] Re: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Benjamin Goertzel
What makes anyone think OpenCog will be different? Is it more understandable? Will there be long-term aficionados who write books on how to build systems in OpenCog? Will the developers have experience, or just adolescent enthusiasm? I'm watching the experiment to find out. Well, OpenCog

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
OK, understood... On Dec 4, 2007 9:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: Thus: building a NL parser, no matter how good it is, is of no use whatsoever unless it can be shown to emerge from (or at least fit with) a learning mechanism that allows

Re: [agi] Funding AGI research

2007-11-30 Thread Benjamin Goertzel
On Nov 30, 2007 7:57 AM, Mike Tintner [EMAIL PROTECTED] wrote: Ben: It seems to take tots a damn lot of trials to learn basic skills Sure. My point is partly that human learning must be pretty quantifiable in terms of number of times a given action is practised, Definitely NOT ... it's very

Re: FW: [agi] AGI DARPA-style

2007-11-30 Thread Benjamin Goertzel
Yeah, I've been following that for a while. There are some very smart people involved, and it's quite possible they'll make a useful software tool, but I don't feel they have a really viable unified cognitive architecture. It's the sort of architecture where different components are written in

Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
[What related principles govern the Novamente figure's trial and error learning of how to pick up a ball?] Pure trial and error learning is really slow though... we are now relying on a combination of -- reinforcement from a teacher -- imitation of others' behavior -- trial and error --
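
A minimal sketch of how those three signals might be blended into a single action-scoring rule (the weights, scores, and epsilon here are illustrative assumptions, not Novamente's actual mechanism):

    import random

    def score(action, teacher_reward, imitation_sim, wt=0.5, wi=0.3):
        # Weighted blend of teacher feedback and similarity to observed demos.
        return wt * teacher_reward[action] + wi * imitation_sim[action]

    def pick(actions, teacher_reward, imitation_sim, epsilon=0.2):
        if random.random() < epsilon:        # trial and error: explore
            return random.choice(actions)
        return max(actions, key=lambda a: score(a, teacher_reward, imitation_sim))

    actions = ["reach", "grasp", "push"]
    teacher = {"reach": 0.2, "grasp": 1.0, "push": -0.5}   # teacher's reinforcement
    imitate = {"reach": 0.6, "grasp": 0.9, "push": 0.1}    # match to demonstrations
    print(pick(actions, teacher, imitate))   # usually "grasp"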

Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 29, 2007 11:35 PM, Mike Tintner [EMAIL PROTECTED] wrote: Presumably, human learning isn't that slow though - if you simply count the number of attempts made before any given movement is mastered at a basic level (e.g. crawling/ walking/ grasping/ tennis forehand etc.)? My guess would be

Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 30, 2007 12:03 AM, Dennis Gorelik [EMAIL PROTECTED] wrote: Benjamin, That proves my point [that AGI project can be successfully split into smaller narrow AI subprojects], right? Yes, but it's a largely irrelevant point. Because building a narrow-AI system in an AGI-compatible

Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
So far only researchers/developers who picked the narrow-AI approach accomplished something useful for AGI. E.g.: Google, computer languages, network protocols, databases. These are tools that are useful for AGI R&D but so are computer monitors, silicon chips, and desk chairs. Being a useful tool

Re: Re[10]: [agi] Funding AGI research

2007-11-28 Thread Benjamin Goertzel
ED: I must admit, I have never heard a cortical column described as containing 10^5 neurons. The figure I have commonly seen is 10^2 neurons for a cortical column, although my understanding is that the actual number could be either less or more. I guess the 10^5 figure would relate to so-called

Re: Re[4]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
Still, this is the most resource-intensive part of the Novamente system (the part that's most likely to require supercomputers to achieve human-level AI). Why is it the most resource-intensive? Is it the evolutionary computational cost? Is this where MOSES is used? Correct, this is

Re: Re[10]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
Nearly any AGI component can be used within a narrow AI, That proves my point [that AGI project can be successfully split into smaller narrow AI subprojects], right? Yes, but it's a largely irrelevant point. Because building a narrow-AI system in an AGI-compatible way is HARDER than

Re: Re[8]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
My claim is that it's possible [and necessary] to split the massive amount of work that has to be done for AGI into smaller narrow AI chunks in such a way that every narrow AI chunk has its own business meaning and can pay for itself. You have not addressed my claim, which has massive evidence

Re: Re[4]: [agi] Funding AGI research

2007-11-26 Thread Benjamin Goertzel
Well, there is a discipline of computer science devoted to automatic programming, i.e. synthesizing software based on specifications of desired functionality. State of the art is: -- Just barely, researchers have recently gotten automated program learning to synthesize an n log n sorting algorithm
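
To make "synthesis from a functionality specification" concrete, here is a toy enumerative synthesizer: the spec is a set of input/output examples, and the search tries compositions of primitives until one fits. (A deliberately naive sketch; real systems like MOSES search program trees evolutionarily rather than by brute force.)

    from itertools import product

    PRIMITIVES = {
        "reverse": lambda xs: list(reversed(xs)),
        "sort":    lambda xs: sorted(xs),
        "tail":    lambda xs: xs[1:],
        "double":  lambda xs: [2 * x for x in xs],
    }

    def synthesize(examples, max_depth=3):
        # Return the first pipeline of primitives consistent with all examples.
        for depth in range(1, max_depth + 1):
            for names in product(PRIMITIVES, repeat=depth):
                def run(xs, names=names):
                    for n in names:
                        xs = PRIMITIVES[n](xs)
                    return xs
                if all(run(i) == o for i, o in examples):
                    return list(names)
        return None

    # Spec: map each input list to its descending sort.
    examples = [([3, 1, 2], [3, 2, 1]), ([5, 4, 9], [9, 5, 4])]
    print(synthesize(examples))   # ['sort', 'reverse']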

Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Linas: I find it telling that no one is saying I've got the code, I just need to scale it up 1000-fold to make it impressive ... Yes, that's an accurate comment. Novamente will hopefully reach that point in a few years. For now, we will need (and use) a lotta machines for commercial product

Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Cassimatis's system is an interesting research system ... it doesn't yet have lotsa demonstrated practical functionality, if that's what you mean by work... He wants to take a bunch of disparately-functioning agents, and hook them together into a common framework using a common logical

Re: Re[4]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
Are you asking for success stories regarding research funding in any domain, or regarding research funding in AGI? Any domain, please. OK, so your suggestion is that research funding, in itself, is worthless in any domain? I don't really have time to pursue this kind of silly

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
No. My point is that massive funding without having a prototype prior to funding is worthless most of the time. If a prototype cannot be created at reasonably low cost, then a fully working product most likely cannot be created even with massive funding. Well, this seems to dissolve into a

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
On Nov 20, 2007 11:22 PM, Dennis Gorelik [EMAIL PROTECTED] wrote: Jiri, AGI is IMO possible now but requires a very different approach than narrow AI. AGI requires properly tuning some existing narrow AI technologies, combining them together, and maybe adding a couple more. That's massive

Re: Re[8]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
Could you describe a piece of technology that simultaneously: - Is required for AGI. - Cannot be required part of any useful narrow AI. The key to your statement is the word required Nearly any AGI component can be used within a narrow AI, but, the problem is, it's usually a bunch easier

Re: Re[2]: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 12:50 AM, Dennis Gorelik [EMAIL PROTECTED] wrote: Benjamin, Do you have any success stories of such research funding in the last 20 years? Something that resulted in useful accomplishments. Are you asking for success stories regarding research funding in any domain, or

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Novamente as a whole is definitely a research project, albeit one with a very well fleshed out research plan. I have a strong hypothesis about how the project will come out, and arguments in favor of this hypothesis; but I don't have the level of confidence I'd have in, say, the stability of a

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Proactively minimizing risk in as many areas as possible makes a venture much more salable, but most AI ventures tend to be very apparently risky at many levels that have no relation to the AI research per se, and the inability of these ventures to minimize all that unnecessary risk is a giant

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
I have not heard a *creative* new idea here that directly addresses and shows the power to solve even in part the problem of creating general intelligence. To be quite frank, the most creative and original ideas inside the Novamente design are quite technical. I suspect you don't have the

Re: [agi] Multi-agent learning

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 6:45 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote: Ben, Have you already considered what form of multi-agent epistemic logic (or whatever extension to PLN) Novamente will use to merge knowledge from different avatars? Well, standard PLN handles this in principle via
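
One simple way to merge two avatars' truth values about the same statement is evidence pooling: represent each opinion as (strength, evidence count) and take the count-weighted average. (An assumed illustration of the general idea, not necessarily PLN's exact revision rule.)

    def revise(tv1, tv2):
        # Count-weighted merge of two (strength, evidence_count) opinions.
        (s1, n1), (s2, n2) = tv1, tv2
        n = n1 + n2
        return ((n1 * s1 + n2 * s2) / n, n)

    avatar_a = (0.9, 10)   # 10 observations, strength 0.9
    avatar_b = (0.5, 30)   # 30 observations, strength 0.5
    print(revise(avatar_a, avatar_b))   # (0.6, 40)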

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Hi, The majority of VC's do, as you say, want a technology that is sewn up, from the point of view of technical feasibility. But this is not always true. There is always a gray area at the fringe of feasibility where the last set of questions has not been *fully* answered before money is

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
: On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote: Navigating complex social and business situations requires a quite different set of capabilities than creating AGI. Potentially they could be combined in the same person, but one certainly can't assume that would be the case. I

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 11:24 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote: There are a lot of worthwhile points in your post, and a number of things I don't fully agree with, but I don't have time to argue them all right now... Instead I'll just pick two points: er, looks like that was three

Re: [agi] Funding AGI research

2007-11-17 Thread Benjamin Goertzel
On Nov 17, 2007 1:08 PM, Dennis Gorelik [EMAIL PROTECTED] wrote: Jiri, Give $1 for the research to who? Research team can easily eat millions $$$ without producing any useful results. If you just randomly pick researchers for investment, your chances to get any useful outcome from the

Re: [agi] Funding AGI research

2007-11-17 Thread Benjamin Goertzel
Richard, Though we have theoretical disagreements, I largely agree with your analysis of the value of prototypes for AGI. Experience has shown repeatedly that prototypes displaying apparently intelligent behavior in various domains are very frequently dead-ends, because they embody various sorts

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
About PolyWorld and Alife in general... I remember playing with PolyWorld 10 years ago or so And, I had a grad student at Uni. of Western Australia build a similar system, back in my Perth days... (it was called SEE, for Simple Evolving Ecology. We never published anything on it, as I left

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
I think that linguistic interaction with human beings is going to be what lifts Second Life proto-AGI's beyond the glass ceiling... Our first SL agents won't have language generation or language learning capability, but I think that introducing it is really essential, esp. given the limitations

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote: On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote: non-brain-based AGI. After all it's not like we know how real chemistry gives rise to real biology yet --- the dynamics underlying protein-folding remain ill

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
in ways no one understands really.. etc. etc. etc. ;-) ben On Nov 15, 2007 10:07 PM, Bryan Bishop [EMAIL PROTECTED] wrote: On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote: On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote: Can anybody elaborate on the actual problems

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi, No: the real concept of lack of grounding is nothing so simple as the way you are using the word grounding. Lack of grounding makes an AGI fall flat on its face and not work. I can't summarize the grounding literature in one post. (Though, heck, I have actually tried to do that in

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard, So here I am, looking at this situation, and I see: AGI system interpretation (implicit in system use of it) Human programmer interpretation and I ask myself which one of these is the real interpretation? It matters, because they do not necessarily match up. That

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote: RL: In order to completely ground the system, you need to let the system build its own symbols Correct. Novamente is designed to be able to build its own symbols. What is built in are mechanisms for building symbols, and for

[agi] Human uploading

2007-11-13 Thread Benjamin Goertzel
Richard, I recently saw a talk by Todd Huffman at the Foresight Unconference on the topic of mind uploading technology, and he was specifically showing off techniques for imaging slices of brain that *do* give the level of biological detail you're thinking of. Topics of discussion were, for

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
or b) too fragile to succeed -- particularly since I'm pretty sure that you couldn't convince me without making some serious additions to Novamente

Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel
For example, what is the equivalent of the activation control (or search) algorithm in Google Sets. They operate over huge data. I bet the algorithm for calculating their search or activation is relatively simple (much, much, much less than a PhD thesis) and look what they can do. So I
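
A guess at what such a "relatively simple" activation computation could look like: spreading activation over a weighted term-association graph, with damping per hop. (The graph, weights, and decay below are purely illustrative assumptions.)

    from collections import defaultdict

    GRAPH = {   # weighted association links between terms
        "horse":  {"saddle": 0.9, "mane": 0.8, "car": 0.1},
        "saddle": {"horse": 0.9, "stirrup": 0.7},
        "mane":   {"horse": 0.8, "lion": 0.5},
    }

    def spread(seeds, steps=2, decay=0.5):
        # Propagate damped activation outward from the seed terms.
        act = defaultdict(float)
        for s in seeds:
            act[s] = 1.0
        for _ in range(steps):
            nxt = defaultdict(float, act)
            for node, a in act.items():
                for nb, w in GRAPH.get(node, {}).items():
                    nxt[nb] += decay * a * w
            act = nxt
        return sorted(act.items(), key=lambda kv: -kv[1])

    print(spread(["horse"]))   # related terms ranked by activation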

Re: [agi] Human uploading

2007-11-13 Thread Benjamin Goertzel
Yes, I thought I had heard of people trying more ambitious techniques, but in the cases I heard of (can't remember where now) the tradeoffs always left the approach hanging on one of the issues: for example, was he talking about scanning mitochondrial activity in vivo, in real time,

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Ben, Unfortunately what you say below is tangential to my point, which is what happens when you reach the stage where you cannot allow any more vagueness or subjective interpretation of the qualifiers, because you have to

Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel
This is the thing that I think is relevant to Robin Hanson's original question. I think we can build 1+2 in short order, and maybe 3 in a while longer. But the result of 1+2+3 will almost surely be an idiot-savant: knows everything about horses, and can talk about them at length, but, like

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
But as a human, asking Wen out on a date, I don't really know what Wen likes cats ever really meant. It neither prevents me from talking to Wen, nor from telling my best buddy that ...well, I know, for instance, that she likes cats... yes, exactly... The NLP statement Wen likes cats is

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
So, vagueness can not only be imported into an AI system from natural language, but also propagated around the AI system via inference. This is NOT one of the trickier things about building probabilistic AGI, it's really kind of elementary... -- Ben G
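
A schematic of what "propagating vagueness via inference" means in a probabilistic system: each statement carries a (strength, confidence) pair, and a chained inference step both combines strengths and discounts confidence. (The combination rule and numbers below are made-up placeholders, not the actual PLN formulas.)

    def chain(tv_ab, tv_bc):
        # Naive transitive step: strengths multiply, confidence decays.
        (s1, c1), (s2, c2) = tv_ab, tv_bc
        return (s1 * s2, 0.9 * c1 * c2)   # 0.9 = assumed per-step discount

    cats_are_pets     = (0.8, 0.6)   # vague NL statement, modest confidence
    pets_live_indoors = (0.7, 0.7)
    print(chain(cats_are_pets, pets_live_indoors))   # (0.56, ~0.38): vaguer still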
