RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn
Some might say that if they get conservation of mass and Newton's laws then they skipped all the useless stuff! OK, but those some probably don't include any preschool teachers or educational theorists. That hypothesis is completely at odds with my own intuition from having raised 3

RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn
Ben: Right. My intuition is that we don't need to simulate the dynamics of fluids, powders and the like in our virtual world to make it adequate for teaching AGIs humanlike, human-level AGI. But this could be wrong.I suppose it depends on what kids actually learn when making cakes, skipping

RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn
Oh, and because I am interested in the potential of high-fidelity physical simulation as a basis for AI research, I did spend some time recently looking into options. Unfortunately the results, from my perspective, were disappointing. The common open-source physics libraries like ODE,

RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Derek Zahn
Hi Ben. OTOH, if one wants to go the virtual-robotics direction (as is my intuition), then it is possible to bypass many of the lower-level perception/actuation issues and focus on preschool-level learning, reasoning and conceptual creation. And yet, in your paper (which I enjoyed),

RE: [agi] The Future of AGI

2008-11-26 Thread Derek Zahn
narrow AI I suppose, though it's kind of on the borderline. It does seem like one of the ways to commercialize incremental progress toward AGI. Derek Zahn supermodelling.net --- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https

RE: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Derek Zahn
Pei Wang: --- I have a problem with each of these assumptions and beliefs, though I don't think anyone can convince someone who just got a big grant that they are moving in a wrong direction. ;-) With his other posts about the Singularity Summit and his invention of the word "Synaptronics", Modha

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn
that considerably more thought. I look forward to being in the audience when you present the paper at AGI-09. Derek Zahn agiblog.net

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn
Oh, one other thing I forgot to mention. To reach my cheerful conclusion about your paper, I have to be willing to accept your model of cognition. I'm pretty easy on that premise-granting, by which I mean that I'm normally willing to go along with architectural suggestions to see where they

RE: AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Derek Zahn
Matthias Heger: If chess is so easy because it is completely described, with complete information about state available, fully deterministic, etc., then it is all the more important that your AGI can learn such an easy task before you try something more difficult. Chess is not easy. Becoming

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
As somebody who considers consciousness, qualia, and so on to be poorly-defined anthropomorphic mind-traps, I am not interested in any such discussions. Other people are, and I have no problem ignoring them, like I ignore a number of individual cranks and critics who post things of similarly

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
I bet if you tried very hard to move the group to the forum (for example, by only posting there yourself and periodically urging people to use it), people could be moved there. Right now, nobody posts there because nobody else posts there; if one wants one's stuff to be read, one sends it to

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
How about this: Those who *do* think it's worthwhile to move to the forum: Instead of posting email responses to the mailing list, post them to the forum and then post a link to the response to the email list, thus encouraging threads to continue in the more advanced venue. I shall do this

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
Oh, also: When I try to register a forum account, it says: Sorry, an error occurred. If you are unsure on how to use a feature, or don't know why you got this error message, try looking through the help files for more information. The error returned was: To register, please send your request to

RE: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Derek Zahn
I am reminded of this: http://www.serve.com/bonzai/monty/classics/MissAnneElk Date: Tue, 14 Oct 2008 17:14:39 -0400 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration OK, but you have not yet explained what your theory of consciousness is, nor what

RE: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Derek Zahn
It has been explained many times to Tintner that even though computer hardware works with a particular set of primitive operations running in sequence, a hardwired set of primitive logical operations operating in sequence is NOT the theory of intelligence that any AGI researchers are proposing

RE: [agi] The Necessity of Embodiment

2008-08-22 Thread Derek Zahn
By embodied I think people usually mean a dense sensory connection (with a feedback loop) to the physical world. The feedback could be as simple as aiming a camera. However, it seems to me that an AI program connected to YouTube could maybe have a dense enough link to the real world to charge

RE: [agi] OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-08-01 Thread Derek Zahn
Ben, Thanks for the large amount of work that must have gone into the production of the wikibook. Along with the upcoming PLN book (now scheduled for Sept 26 according to Amazon) and re-reading The Hidden Pattern, there should be enough material for a diligent student to grok your approach.

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Thanks again Richard for continuing to make your view on this topic clear to those who are curious. As somebody who has tried in good faith and with limited but nonzero success to understand your argument, I have some comments. They are just observations offered with no sarcasm or insult

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Oh, one last point: I find your thoughts in this message quite interesting personally because I think that puzzling out exactly what concept builders need to do, and how they might be built to do it, is the most interesting thing in the whole world. I am resistant to the idea that it is

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Sorry for three messages in short succession. Regarding concept builders, I have been writing in my bumbling way about this (and will continue to muse on fundamental issues) in my little blog: http://agiblog.net

RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Derek Zahn
I agree that the hardware advances are inspirational, and it seems possible that just having huge hardware around could change the way people think and encourage new ideas. But what I'm really looking forward to is somebody producing a very impressive general intelligence result that was just

RE: [agi] Approximations of Knowledge

2008-06-25 Thread Derek Zahn
Richard, If I can make a guess at where Jim is coming from: Clearly, intelligent systems CAN be produced. Assuming we can define "intelligent system" well enough to recognize it, we can generate systems at random until one is found. That is impractical, however. So, we can look at the

[agi] Roadrunner PetaVision

2008-06-16 Thread Derek Zahn
Brain modeling certainly does seem to be in the news lately. Checking out nextbigfuture.com, I was reading about that petaflop computer Roadrunner and articles about it say that they are or will soon be emulating the entire visual cortex -- a billion neurons. I'm sure I'm not the only one

RE: [agi] Definition of AGI - comparison with animals

2008-06-14 Thread Derek Zahn
Dr. Matthias Heger: Which animal has the smallest level of intelligence which still would be sufficient for a robot to be an AGI-robot? You ask for opinions, we got lots of those! I believe most people on this list would consider that humans are the only animals with

RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Derek Zahn
Teslas. Two things I think are interesting about these trends in high-performance commodity hardware: 1) The flops/bit ratio (processing power vs memory) is skyrocketing. The move to parallel architectures makes the number of high-level operations per transistor go up, but bits of memory per
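The flops-per-bit trend described in the post can be illustrated with a toy calculation. Both machine specs below are hypothetical round numbers chosen only to show the direction of the trend; they are not figures from the post or from any real part.

```python
# Illustrative flops-per-bit comparison. All specs are hypothetical
# round numbers, used only to show the trend, not real hardware.
def flops_per_bit(peak_flops, memory_bytes):
    """Ratio of peak processing rate to total memory size in bits."""
    return peak_flops / (memory_bytes * 8)

cpu_era = flops_per_bit(1e9, 1e9)    # ~1 GFLOPS paired with 1 GB RAM
gpu_era = flops_per_bit(1e12, 1e9)   # ~1 TFLOPS paired with the same 1 GB

print(gpu_era / cpu_era)  # the ratio grows 1000x as compute outpaces memory
```

The point of the sketch is just that when peak flops grow three orders of magnitude while memory stays fixed, flops/bit grows by the same factor.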

RE: [agi] Pearls Before Swine...

2008-06-08 Thread Derek Zahn
Gary Miller writes: We're thinking "Don't feed the Trolls!" Yeah, typical trollish behavior -- upon failing to stir the pot with one approach, start adding blanket insults. I put Steve Richfield in my killfile a week ago or so, but I went back to the archive to read the message in question.

RE: [agi] Ideological Interactions Need to be Studied

2008-06-02 Thread Derek Zahn
Speaking of neurons and simplicity, I think it's interesting that some of the "how much CPU power is needed to replicate brain function" arguments use the basic ANN model, assuming a MULADD per synapse, updating at say 100 times per second (giving a total computing power of about 10^16 OPS). But
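The arithmetic behind the 10^16 OPS figure can be reproduced directly. The neuron and synapse counts below are the usual rough order-of-magnitude estimates for a human brain, not measurements:

```python
# Back-of-envelope ANN-style brain-capacity estimate, as in the post above.
# All figures are rough order-of-magnitude assumptions.
NEURONS = 1e11              # common estimate of neurons in a human brain
SYNAPSES_PER_NEURON = 1e3   # rough average fan-in per neuron
UPDATE_HZ = 100             # updates per second, per the basic ANN model

synapses = NEURONS * SYNAPSES_PER_NEURON   # ~1e14 synapses total
ops_per_second = synapses * UPDATE_HZ      # one MULADD per synapse per update

print(f"{ops_per_second:.0e} OPS")  # ~1e16 OPS, matching the figure above
```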

RE: [agi] More Info Please

2008-05-27 Thread Derek Zahn
Mark Waser: Does anybody have any interest in and/or willingness to program in a different environment? I haven't decided to what extent I'll participate in OpenCog myself yet. For me, it depends more on whether the capabilities of the system seem worth exploring, which in turn depends as

RE: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Derek Zahn
Steve Richfield: It is sure nice that this is a VIRTUAL forum, for if we were all in one room together, my posting above would probably get me thrashed by the true AGI believers here. Does anyone here want to throw a virtual stone? Sure. *plonk*

RE: [agi] Pattern extrapolation as a method requiring limited intelligence

2008-05-22 Thread Derek Zahn
John Rose writes: So I feel that much of our brain mass is there due to the natural richness of nature, and there may be quite a bit of overkill compared to what would be needed in software AGI. Are we satisfied building AGIs that cannot cope with the actual world because it is too rich?

RE: [agi] Pattern extrapolation as a method requiring limited intelligence

2008-05-22 Thread Derek Zahn
Vladimir Nesov: I think the sterile texture of artificial environments hides the richness of their structure from our intuition, since we already have it imprinted by experience with the real world. Anything less than capable of dealing with the real world won't understand cleaned-up environments

[agi] AI in virtual worlds -- popular press

2008-05-19 Thread Derek Zahn
For those who might not have seen it yet, seems this concept is becoming rather popular: http://www.msnbc.msn.com/id/24668099/

RE: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-05 Thread Derek Zahn
Richard Loosemore writes: some very useful text about the symbol grounding problem. Thank you Richard. For once I don't feel like a complete idiot. I am familiar with these Harnad papers and find them quite clear. Beyond that I understand your further explanation and even agree

RE: [agi] AGI-08 videos

2008-05-05 Thread Derek Zahn
Richard Loosemore: So, for example, if I were organizing a conference on AGI I would want people to address such questions as: I find your list of questions to be quite fascinating, and I'd love to participate in an active list or conference devoted to these Foundations of Cognitive

[agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
I noticed yesterday that most of the videos of talks and panels from AGI-08 have been uploaded (http://www.agi-08.org/schedule.php). Big thanks to the organizers for that! I have some difficulty getting into some of the papers but the 10-ish minute overview talks are by and large quite

RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
One other observation I forgot to mention: Several people brought up the desirability of some kind of benchmark problem area to help compare the methods and effectiveness of various approaches. For a bunch of reasons I think it will be difficult to define such things in a way that researchers

RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Bob Mottram writes: I haven't watched all of the AGI-08 videos, but of those that I have seen, the 15-minute format left me none the wiser. With limited time I would have preferred longer talks with more depth but perhaps fewer in number, especially on the more mathematical topics. Another

RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Richard Loosemore writes: Prompted by your enthusiastic write-up, I just wasted one and a half hours scanning through all of the AGI-08 papers that I downloaded previously. I have 28 of them; they did not include anything from Stephen Reed, nor any NARS paper, so I guess my collection must

RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Richard Loosemore: I read Pei's paper and there was nothing horrifying about it (please spare the sarcasm). No sarcasm intended. If I had just come to the conclusion that 28 papers in a row were a waste of time, I'd be horrified at the prospect of a 29th that would also not give me what I

RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Richard Loosemore: My god, Mark: I had to listen to people having a general discussion of grounding (the supposed theme of that workshop) without a single person showing the slightest sign that they had more than an amateur's perspective on what that concept actually means. I was not at

RE: [agi] help me,please for books for agi and mind in pdf

2008-05-02 Thread Derek Zahn
Bruno Frandemiche asked for online AGI-related text. If you're adventurous, I'd recommend the Workshop proceedings from 2006: http://www.agiri.org/wiki/Workshop_Proceedings and the conference proceedings from AGI-08: http://www.agi-08.org/papers

RE: [agi] An interesting project on embodied AGI

2008-04-28 Thread Derek Zahn
Thanks, what an interesting project. Purely on the mechanical side, it shows how far away we are from truly flexible house-friendly robust mobile robotic devices. I'm a big fan of the robotic approach myself. I think it is quite likely that dealing with the messy flood of dirty data coming

RE: [agi] How general can be and should be AGI?

2008-04-26 Thread Derek Zahn
I assume you are referring to Mike Tintner. As I described a while ago, I *plonk*ed him myself a long time ago; most mail programs have the ability to do that, and it's a good idea to figure out how to do it with your own email program. He does have the ability to point at other thinkers and

RE: [agi] Why Symbolic Representation P.S.

2008-04-25 Thread Derek Zahn
The little Barsalou I have read so far has been quite interesting, and I think there are a lot of good points there, even if it is a rather extreme position. The issue of how concepts (which is likely a nice suitcase word lumping a lot of discrete or at least overlapping cognitive functions

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
J Andrew Rogers writes: Most arguments and disagreements over complexity are fundamentally about the strict definition of the term, or the complete absence thereof. The arguments tend to evaporate if everyone is forced to unambiguously define such terms, but where is the fun in that? I agree

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
Richard: I get tripped up on your definition of complexity: A system contains a certain amount of complexity in it if it has some regularities in its overall behavior that are governed by mechanisms that are so tangled that, for all practical purposes, we must assume that we will never

RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Mark Waser: Huh? Why doesn't engineering discipline address building complex devices? Perhaps I'm wrong about that. Can you give me some examples where engineering has produced complex devices (in the sense of complex that Richard means)?

RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Me: Can you give me some examples where engineering has produced complex devices (in the sense of complex that Richard means)? Mark: Computers. Anything that involves aerodynamics. Richard, is this correct? Are human-engineered airplanes complex in the sense you mean?

RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Mark Waser: I don't know what is going to be more complex than a variable-geometry-wing aircraft like an F-14 Tomcat. Literally nothing can predict its aerodynamic behavior. The avionics are purely reactive because its future behavior cannot be predicted to any certainty even at

RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Richard Loosemore: it makes no sense to ask "is system X complex?". You can only ask how much complexity, and what role it plays in the system. Yes, I apologize for my sloppy language. When I say "is system X complex?" what I mean is whether the RL-complexity of the system is important in

RE: [agi] For robotics folks: Seeking thoughts about integration of OpenSim and Player

2008-04-21 Thread Derek Zahn
Ben Goertzel writes: it might be valuable to have an integration of Player/Stage/Gazebo with OpenSim I think this type of project is a good start toward addressing one of the major critiques of the virtual world approach -- the temptation to (unintentionally) cheat -- those canned

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
One more bit of ranting on this topic, to try to clarify the sort of thing I'm trying to understand. Some dude is telling my AGI program: There's a piece called a 'knight'. It moves by going two squares in one direction and then one in a perpendicular direction. And here's something neat:
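The knight-move rule quoted above is concrete enough to sketch in code. The function name and board representation below are purely illustrative; they come from no actual AGI program under discussion, and the point of the post is precisely that getting from this English description to such a representation is the hard part:

```python
# A minimal sketch of the quoted rule: a knight moves two squares in one
# direction, then one square in a perpendicular direction.
# Coordinates are (file, rank), both in 0..7; names are illustrative only.
def knight_moves(file, rank):
    deltas = [(2, 1), (2, -1), (-2, 1), (-2, -1),
              (1, 2), (1, -2), (-1, 2), (-1, -2)]
    return [(file + df, rank + dr)
            for df, dr in deltas
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(len(knight_moves(0, 0)))  # 2 legal moves from a corner
print(len(knight_moves(3, 3)))  # 8 legal moves from the center
```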

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Stephen Reed writes: Hey Texai, let's program [Texai] I don't know how to program, can you teach me by yourself? Sure, first thing is that a program consists of statements that each does something [Texai] I assume by program you mean a sequence of instructions that a computer can interpret and

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Vladimir Nesov writes: Generating concepts out of thin air is no big deal, if only a resource-hungry process. You can create a dozen for each episode, for example. If I am not certain of the appropriate mechanism and circumstances for generating one concept, it doesn't help to suggest that a

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Richard Loosemore: I do not laugh at your misunderstanding, I laugh at the general complacency; the attitude that a problem denied is a problem solved. I laugh at the tragicomedic waste of effort. I'm not sure I have ever seen anybody successfully rephrase your complexity argument back at

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread Derek Zahn
Josh writes: You see, I happen to think that there *is* a consistent, general, overall theory of the function of feedback throughout the architecture. And I think that once it's understood and widely applied, a lot of the architectures (repeat: a *lot* of the architectures) we have floating

RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-21 Thread Derek Zahn
Richard Loosemore: I'll try to tidy this up and put it on the blog tomorrow. I'd like to pursue the discussion and will do so in that venue after your post. I do think it is a very interesting issue. Truthfully I'm more interested in your specific program for how to succeed than this

RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-20 Thread Derek Zahn
William Pearson writes: Consider an AI learning chess, it is told in plain english that... I think the points you are striving for (assuming I understand what you mean) are very important and interesting. Even the first simplest steps toward this clear and (seemingly) simple task baffle me.

RE: [agi] associative processing

2008-04-17 Thread Derek Zahn
Steve Richfield writes: Hmm, I haven't seen a reference to those core publications. Is there a semi-official list? This list is maintained by the Artificial General Intelligence Research Institute. See www.agiri.org . On that site there are several semi-official lists -- under

RE: [agi] associative processing

2008-04-17 Thread Derek Zahn
Note that the "Instead of an AGI Textbook" section is hardly fleshed out at all at this point, but it does link to a more-complete similar effort to be found here: http://nars.wang.googlepages.com/wang.AGI-Curriculum.html

RE: [agi] associative processing

2008-04-16 Thread Derek Zahn
Steve Richfield, writing about J Storrs Hall: You sound like the sort that, once the thing is sort of roughed out, likes to polish it up and make it as good as possible. I don't believe your characterization is accurate. You could start with this well-done book to check that opinion:

RE: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Derek Zahn
Jim Bromer writes: With God's help, I may have discovered a path toward a method to achieve a polynomial time solution to Logical Satisfiability If you want somebody to talk about the solution, you're more likely to get helpful feedback elsewhere as it is not a topic that most of us on this

RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
[EMAIL PROTECTED] writes: But it should be quite clear that such methods could eventually be very handy for AGI. I agree with your post 100%, this type of approach is the most interesting AGI-related stuff to me. An audiovisual perception layer generates semantic interpretation on the

RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
Stephen Reed writes: How could a symbolic engine ever reason about the real world *with* access to such information? I hope my work eventually demonstrates a solution to your satisfaction. Me too! In the meantime there is evidence from robotics, specifically driverless cars,

[agi] Symbols

2008-03-30 Thread Derek Zahn
Related obliquely to the discussion about pattern discovery algorithms What is a symbol? I am not sure that I am using the words in this post in exactly the same way they are normally used by cognitive scientists; to the extent that causes confusion, I'm sorry. I'd rather use words in

RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
Mark Waser writes: True enough, that is one answer: by hand-crafting the symbols and the mechanics for instantiating them from subsymbolic structures. We of course hope for better than this but perhaps generalizing these working systems is a practical approach. Um. That is what is

[agi] Novamente study

2008-03-25 Thread Derek Zahn
Ben, It seems to me that Novamente is widely considered the most promising and advanced AGI effort around (at least of the ones one can get any detailed technical information about), so I've been planning to put some significant effort into understanding it with a view toward deciding whether

RE: [agi] Novamente study

2008-03-25 Thread Derek Zahn
Ben Goertzel writes: The PLN book should be out by that date ... I'm currently putting in some final edits to the manuscript... Also, in April and May I'll be working on a lot of documentation regarding plans for OpenCog. Thanks, I look forward to both of these.

RE: [agi] Complexity in AGI design

2007-12-07 Thread Derek Zahn
Dennis Gorelik writes: Derek, I quoted this Richard's article in my blog: http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html Cool. Now I'll quote your blogged response: So, if low level brain design is incredibly complex - how do we copy it? The answer is:

RE: [agi] Solution to Grounding problem

2007-12-07 Thread Derek Zahn
Richard Loosemore writes: This becomes a problem because when we say of another person that they meant something by their use of a particular word (say "cat"), what we actually mean is that that person had a huge amount of cognitive machinery connected to that word "cat" (reaching all the way

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Derek Zahn
Richard Loosemore writes: Okay, let me try this. Imagine that we got a bunch of computers [...] Thanks for taking the time to write that out. I think it's the most understandable version of your argument that you have written yet. Put it on the web somewhere and link to it whenever the

RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Derek Zahn
Hi Robin. In part it depends on what you mean by fast. 1. Fast - less than 10 years. I do not believe there are any strong arguments for general-purpose AI being developed in this timeframe. The argument here is not that it is likely, but rather that it is *possible*. Some AI researchers,

RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Derek Zahn
Bryan Bishop: Looks like they were just simulating eight million neurons with up to 6.3k synapses each. How's that necessarily a mouse simulation, anyway? It isn't. Nobody said it was necessarily a mouse simulation. I said it was a simulation of a mouse-brain-like structure. Unfortunately,
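The scale of the simulation being discussed follows directly from the two figures quoted in the post (eight million neurons, up to 6.3k synapses each); since "up to" gives a ceiling, the result is an upper bound:

```python
# Scale of the mouse-brain-like simulation discussed above, using the
# figures quoted in the post. "Up to 6.3k synapses each" makes this an
# upper bound on the total synapse count, not an exact number.
neurons = 8e6
max_synapses_per_neuron = 6.3e3

total_synapses = neurons * max_synapses_per_neuron
print(f"{total_synapses:.2e}")  # at most ~5.04e10 synapses
```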

RE: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread Derek Zahn
Edward, For some reason, this list has become one of the most hostile and poisonous discussion forums around. I admire your determined effort to hold substantive conversations here, and hope you continue. Many of us have simply given up.

RE: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Derek Zahn
A large number of individuals on this list are architecting an AGI solution (or part of one) in their spare time. I think that most of those efforts do not have meaningful answers to many of the questions, but rather intend to address AGI questions from a particular perspective. Would such

RE: [agi] Poll

2007-10-18 Thread Derek Zahn
1. What is the single biggest technical gap between current AI and AGI? I think hardware is a limitation because it biases our thinking to focus on simplistic models of intelligence. However, even if we had more computational power at our disposal we do not yet know what to do with it, and

RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman writes: Let's take Novamente as an example. ... It cannot improve itself until the following things happen: 1) It acquires the knowledge and skills to become a competent programmer, a task that takes a human many years of directed training and practical experience. 2) It is

RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman: No value is added by introducing considerations about self-reference into conversations about the consequences of AI engineering. Junior geeks do find it impressive, though. The point of that conversation was to illustrate that if people are worried about Seed AI exploding, then

RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Linas Vepstas: Let's take Novamente as an example. ... It cannot improve itself until the following things happen:1) It acquires the knowledge and skills to become a competent programmer, a task that takes a human many years of directed training and practical experience. Wrong. This

RE: [agi] RSI

2007-10-03 Thread Derek Zahn
Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be a matter of definition. But so far the several people who have gotten back to me, including yourself, seem to take the position that that is not the type of recursive self improvement they consider to be RSI. Some

RE: [agi] RSI

2007-10-03 Thread Derek Zahn
I wrote: If we do not give arbitrary access to the mind model itself or its implementation, it seems safer than if we do -- this limits the extent that RSI is possible: the efficiency of the model implementation and the capabilities of the model do not change. An obvious objection to this

RE: [agi] Religion-free technical content

2007-10-02 Thread Derek Zahn
Richard Loosemore: a) the most likely sources of AI are corporate or military labs, and not just US ones. No friendly AI here, but profit-making and mission-performing AI. Main assumption built into this statement: that it is possible to build an AI capable of doing anything except dribble

RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Richard Loosemore writes: You must remember that the complexity is not a massive part of the system, just a small-but-indispensible part. I think this sometimes causes confusion: did you think that I meant that the whole thing would be so opaque that I could not understand *anything* about

RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Edward W. Porter writes: To Matt Mahoney. Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and implied RSI (which I assume from context is a reference to Recursive Self Improvement) is necessary for general intelligence. So could you, or someone, please define exactly

RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
it a lot. Date: Mon, 1 Oct 2007 11:34:09 -0400 From: [EMAIL PROTECTED] To: agi@v2.listbox.com Subject: Re: [agi] Religion-free technical content Derek Zahn wrote: Richard Loosemore writes: You must remember that the complexity is not a massive part of the system, just a small

RE: [agi] Religion-free technical content

2007-09-30 Thread Derek Zahn
I suppose I'd like to see the list management weigh in on whether this type of talk belongs on this particular list or whether it is more appropriate for the singularity list. Assuming it's okay for now, especially if such talk has a technical focus: One thing that could improve safety is to

RE: [agi] Religion-free technical content

2007-09-30 Thread Derek Zahn
Richard Loosemore writes: It is much less opaque. I have argued that this is the ONLY way that I know of to ensure that AGI is done in a way that allows safety/friendliness to be guaranteed. I will have more to say about that tomorrow, when I hope to make an announcement. Cool. I'm sure

RE: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS

2007-09-28 Thread Derek Zahn
Don Detrich writes: AGI Will Be The Most Powerful Technology In Human History – In Fact, So Powerful that it Threatens Us Admittedly there are many possible dangers with future AGI technology. We can think of a million horror stories and in all probability some of the problems that will

RE: [agi] Selfish promotion of AGI

2007-09-27 Thread Derek Zahn
Responding to Edward W. Porter: Thanks for the excellent message! I am perhaps too interested in seeing what the best response from the field of AGI might be to intelligent critics, and probably think of too many conversations in those terms; I did not mean to attack or criticise your

RE: [agi] NVIDIA GPU's

2007-06-21 Thread Derek Zahn
Ben Goertzel writes: http://www.nvidia.com/page/home.html Anyone know what are the weaknesses of these GPU's as opposed to ordinary processors? They are good at linear algebra and number crunching, obviously. Is there some reason they would be bad at, say, MOSES learning? These parallel
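[One concrete answer to the question above is branch divergence: SIMD-style GPU hardware runs groups of threads in lockstep, so per-element branching is slow and irregular code (like some evolutionary-search inner loops) maps poorly. A pure-Python toy of the standard workaround, rewriting a branch as uniform arithmetic ("predication") — an illustration of the hardware issue, not actual GPU code; the function names are mine.]

```python
# Toy illustration of why GPUs dislike branchy per-element code: in a
# lockstep SIMD group, all lanes execute both sides of a divergent branch,
# so branchy selection is rewritten as uniform arithmetic (predication).

def branchy(xs):
    # Per-element branch: cheap on a CPU, causes divergence on SIMD hardware.
    return [x * 2 if x > 0 else -x for x in xs]

def branchless(xs):
    # Arithmetic select: cond*then + (1-cond)*else — every lane does the
    # same operations, no divergence.
    return [(x > 0) * (x * 2) + (x <= 0) * (-x) for x in xs]

data = [3, -1, 0, 5]
assert branchy(data) == branchless(data)  # [6, 1, 0, 10]
```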

RE: [agi] NVIDIA GPU's

2007-06-21 Thread Derek Zahn
Moshe Looks writes: This is not quite correct; it really depends on the complexity of the programs one is evolving and the structure of the fitness function. For simple cases, it can really rock; see http://www.cs.ucl.ac.uk/staff/W.Langdon/ That's interesting work, thanks for the link!

RE: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Derek Zahn
Robert Wensman writes: Has there been any work done previously in statistical, example driven deduction? Yes. In this AGI community, Pei Wang's NARS system is exactly that: http://nars.wang.googlepages.com/ Also, Ben Goertzel (et al.) is building a system called Novamente
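[For readers unfamiliar with NARS: it attaches an experience-grounded (frequency, confidence) truth value to each statement, derived from evidence counts, and combines them with truth functions such as the deduction rule. A minimal sketch assuming the standard published truth functions; `k` is NARS's "evidential horizon" parameter.]

```python
def truth_from_evidence(positive, total, k=1):
    # NARS-style truth value from evidence counts:
    # frequency f = w+ / w, confidence c = w / (w + k).
    f = positive / total
    c = total / (total + k)
    return f, c

def deduction(t1, t2):
    # NARS deduction truth function: from (S->M, t1) and (M->P, t2),
    # derive (S->P) with f = f1*f2 and c = f1*f2*c1*c2.
    f1, c1 = t1
    f2, c2 = t2
    f = f1 * f2
    return f, f * c1 * c2

t1 = truth_from_evidence(9, 10)   # 9 of 10 observations positive
t2 = truth_from_evidence(8, 10)
f, c = deduction(t1, t2)          # a weaker, less confident conclusion
```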

RE: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-14 Thread Derek Zahn
Robert Wensman writes: Databases: 1. Facts: Contains sensory data records, and actuator records. 2. Theory: Contains memeplexes that tries to model the world. I don't usually think of 'memes' as having a primary purpose of modeling the world... it seems to me like the key to your whole

RE: [agi] poll: what do you look for when joining an AGI group?

2007-06-13 Thread Derek Zahn
9. a particular AGI theory. That is, one that convinces me it's on the right track. Now that you have run this poll, what did you learn from the responses and how are you using this information in your effort? - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe

RE: [agi] Symbol Grounding

2007-06-12 Thread Derek Zahn
I think probably every AGI-curious person has intuitions about this subject. Here are mine: Some people, especially those espousing a modular software-engineering type of approach, seem to think that a perceptual system basically should spit out a token for "chair" when it sees a chair, and then a

RE: [agi] Symbol Grounding

2007-06-12 Thread Derek Zahn
One last bit of rambling in addition to my last post: When I assert that almost everything important gets discarded while merely distilling an array of rod and cone firings into a symbol for "chair", it's fair to ask exactly what that other stuff is. Alas, I believe it is fundamentally

RE: [agi] Pure reason is a disease.

2007-06-11 Thread Derek Zahn
Matt Mahoney writes: Below is a program that can feel pain. It is a simulation of a programmable 2-input logic gate that you train using reinforcement conditioning. Is it ethical to compile and run this program?
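[The idea is easy to reconstruct: a gate whose truth table is a set of learned output probabilities, nudged by reward and punishment. A rough Python sketch of that idea — not Mahoney's original program; the class name, learning rate, and training loop are mine.]

```python
import random

class ProgrammableGate:
    """2-input logic gate trained by reinforcement conditioning.

    Each of the four input pairs maps to a probability of outputting 1;
    reward nudges that probability toward the emitted output, punishment
    nudges it away.
    """
    def __init__(self, lr=0.2):
        self.p = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}
        self.lr = lr

    def act(self, a, b):
        return 1 if random.random() < self.p[(a, b)] else 0

    def reinforce(self, a, b, out, reward):
        # Reward pulls p toward the emitted output; punishment ("pain")
        # pulls it toward the opposite output.
        target = out if reward > 0 else 1 - out
        self.p[(a, b)] += self.lr * (target - self.p[(a, b)])

# Condition the gate to behave like XOR.
random.seed(0)
gate = ProgrammableGate()
for _ in range(2000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = gate.act(a, b)
    gate.reinforce(a, b, out, reward=1 if out == (a ^ b) else -1)

learned = {k: round(v) for k, v in gate.p.items()}  # converges to XOR's table
```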

RE: [agi] Get your money where your mouth is

2007-06-08 Thread Derek Zahn
Josh writes: http://www.netflixprize.com Thanks for bringing this up! I had heard of it but forgot about it. While I read about other people's projects/theories and build a robot for my own project, this will be a fun way to refresh myself on statistical machine learning techniques and

RE: [agi] about AGI designers

2007-06-06 Thread Derek Zahn
YKY writes: There're several reasons why AGI teams are fragmented and AGI designers don't want to join a consortium: A. believe that one's own AGI design is superior B. want to ensure that the global outcome of AGI is friendly C. want to get bigger financial rewards D. There are

RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
Mark Waser writes: BTW, with this definition of morality, I would argue that it is a very rare human that makes moral decisions any appreciable percent of the time Just a gentle suggestion: If you're planning to unveil a major AGI initiative next month, focus on that at the moment.
