Hi YKY,
I agree with you that we (the human race) are theoretically close to AGI, in the sense that < 5 years of concerted effort by < 10 of the right people, implementing, testing and teaching the right software code, could bring us to a human-level AGI.
And, I agree that there is no "one true path" to AGI, but rather many workable routes.
However, I am also sure that there are many unworkable routes -- including, probably, many that initially look workable but ultimately prove not to be ;-)
How close we are **pragmatically** to AGI is a different question, because it is not obvious at what point in time the right amount of resources will be focused on the right AGI design... For instance, I believe that the Novamente design is a workable software design, but at our current rate of progress we are unlikely to achieve human-level AI in < 10 years. We need more funding to focus more highly expert human effort on completing the implementation, testing and teaching of the system. And I know of others who also have pretty good AGI designs (though IMO not as well fleshed out as Novamente) and who are in roughly similar positions...
Beyond the above issues, in your email you mention at least two points of relevance...
1) that Novamente, like some other AGI designs, has not been described in detail in the public literature
2) that different researchers have different ideas about various issues related to AGI, preventing collaborative work
Regarding point 1: this is indeed something I've been pondering for a while now. Making public the details of the Novamente design would serve at least two purposes:
-- making it easier for us to get government grant funding, by giving our project more academic legitimacy [though it does have nonzero academic legitimacy due to a series of overview papers, that is not the same as what would be achieved via a series of deeper books and papers discussing the details of the approach]
-- making it easier to recruit volunteers into the project
Opening up the source code would make it even easier to get coding on Novamente done, of course. There are plenty of folks, largely in academia, who would contribute to a project like Novamente if it were open-source but who don't want to join a privately held AI effort.
On the other hand, the dark side is obvious.
The main problem is not the commercial one (that once you've finished your AGI, if it's privately held, you can more easily use it to make money). While I like money as much as the next guy, $$ is not the reason to make an AGI. There are other, easier (I'm not saying easy) ways to make money if one wishes to devote many years to the pursuit.
The main problem is the "AGI safety" issue. I am afraid that if details of how to make an AGI are disclosed to the world, then someone else with a lot of $$ for staff and hardware will take the ideas and build an AGI really fast. They might build the AGI solely with a view toward getting there first, with insufficient care taken to ensure the outcome is a beneficent one. This scares me.
Next, regarding your point that every theorist has different ideas about AGI: I will discuss this in a separate post on the Mind Ontology...
-- Ben
On 10/15/06, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
Hi Ben and others,

The way I see it, we are close to building a complete AGI, but there are gaps to be filled in the details. In my opinion one thing that Ben can do better to become a leader in AGI R&D is to delegate tasks to other people / groups, ie adopt a division-of-labor strategy.

I think the main obstacle is that we have developed different ways of tackling similar problems. For example, for some historical reasons I tend to prefer symbolic logic over neural networks, predicate logic over term logic, sentential representation over graphical representation, Prolog and C# over Lisp and Java, and a mix of probability and fuzzy logic over pure Bayesianism, to name a few idiosyncratic preferences. I don't think my set of preferences is the only way of building an AGI, and there are probably many ways to achieve the same goal.

The reason why I don't use Ben's hypergraph representation is simply because I don't even know what exactly he's doing with hypergraphs.

So, the way I see it, the question is how to reconcile different ways of doing things so that we can work together and achieve our common goal more effectively.

Since there is no unique solution to the AGI problem, and each of us may have some near-optimal solutions in some domains and not-so-optimal solutions in other domains, and probably no one has THE optimal solution, we can perhaps make some compromising and eclectic arrangements.

I know that doing this would involve some pain. It's a tautology that everyone thinks his/her solution is the best solution (otherwise s/he would have changed it).

How about this: we can make a list of conflict issues, and try to resolve them by having a mix of decision making by different parties. It doesn't have to follow rigid rules.

At least I'm willing to make the first step...

YKY
