On 3/25/07, rooftop8000 <[EMAIL PROTECTED]> wrote:
> The richer your set of algorithms and representations, the more likely the
> correct ones will emerge / pop out, as you put it. I don't really like the
> idea of hoping for extra functionality to emerge.
This particular version of emergence does not seem to work because:

1. You need a massive number of participants (> 1,000, say), which we definitely don't have on this list or on the net.
2. You cannot simply throw a bunch of algorithms and representations together and make an AGI. There have to be some common communication protocols.

*If* you are willing to enforce communication protocols, then why not instead enforce a common knowledge representation scheme, which is more direct? Establishing *any* consensus is hard, but it seems to be a *necessary* step.
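To make the protocol-vs-representation distinction concrete, here is a toy sketch. Everything in it (the `Message` envelope, its fields, the module names) is hypothetical, invented for illustration; it is not a proposal from any actual system. The point is that a shared *protocol* can be much thinner than a shared *representation*: modules agree only on an envelope, while payloads stay in each sender's own format.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical sketch: a shared message envelope (the "protocol") that says
# nothing about each module's internal knowledge representation.
@dataclass
class Message:
    sender: str            # module name, e.g. "bayes-net" or "logic"
    content: Any           # payload in the sender's own representation
    content_type: str      # tag telling receivers how to decode the payload
    confidence: float = 1.0  # one shared numeric field all modules agree on

def route(msg: Message, handlers: dict) -> Any:
    """Dispatch a message to a handler that understands its content_type."""
    handler = handlers.get(msg.content_type)
    if handler is None:
        raise ValueError(f"no handler for content_type {msg.content_type!r}")
    return handler(msg)

# Usage: a logic module consumes a Bayesian module's output knowing only the
# envelope, not the Bayesian module's internals.
handlers = {"proposition": lambda m: (m.content, m.confidence)}
msg = Message(sender="bayes-net", content="rain", content_type="proposition",
              confidence=0.8)
print(route(msg, handlers))  # ('rain', 0.8)
```

Enforcing a common representation instead would amount to fixing the type of `content` itself, which is a much stronger commitment.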
> Because I think we need more than 1 knowledge base in the system, and more
> than 1 type of communication. Why should neuralnet-Bayesian talk use the
> same representation as communication between logic modules? But maybe
> making a framework that is general enough for all those things is
> impossible?
Well, Ben's approach and mine are similar: we try to *combine* logic, probability, and graphical models / neural networks. It's not really as hard as it sounds. What do you think about such an approach? I'm also open to other alternatives, if they are simpler. But it seems impossible to simply let a bunch of AGIers collaborate, with everyone "doing their own thing" and no imposed structure / organization. Or am I missing some very ingenious ideas?
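A toy sketch of what "combining logic and probability" can mean in practice: a deduction rule whose truth values are numbers, so inference becomes arithmetic. The rule below and its independence assumption are purely illustrative; they are not the actual inference rule of Novamente, NARS, or any other specific system.

```python
# Toy illustration: probability-flavored modus ponens.
# Under an (assumed) independence between rule and premise,
# P(B) >= P(B|A) * P(A); we use that product as the derived strength.
def deduce(p_rule: float, p_premise: float) -> float:
    """Strength of conclusion B given rule A->B and premise A."""
    return p_rule * p_premise

p_birds_fly = 0.9       # "birds fly", held with strength 0.9
p_tweety_bird = 0.95    # "Tweety is a bird", held with strength 0.95
print(round(deduce(p_birds_fly, p_tweety_bird), 3))  # 0.855
```

Even this crude version shows why such a system resists the "logic vs. neural" classification: the rule shape is logical, but the computation is numerical through and through.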
> > I think Jey's comment is reasonable. It seems impractical to start a
> > collaborative AI project without having an AGI design which specifies
> > what modules are there and how they communicate.
>
> I hoped someone on the list was smart enough to find one.
I have actually proposed such an architecture, in outline. I'm sure Ben G and Peter Voss also have their respective architectures. One question is whether we can synthesize these different theories. If not, we'd end up with a number of isolated groups that do not collaborate in any meaningful / significant way.
> My vote goes to any framework that is broad enough to include:
> - rule-based / logic parts
> - parts with number-based neural networks
> - ...
> and that allows different parts to be developed independently and added
> easily.
A *probabilistic* logic-based system is very much numerical. Pei Wang, Ben, and I all advocate the use of some form of numerical logic for commonsense reasoning. This type of system cannot be easily classified as "logic" or "neural".

Some form of unifying "framework", whatever that is, is of course desirable. But the problem is how to get people to *agree* to work within your framework (or any particular one). In fact, a whole bunch of people on this list may claim to have some unifying framework for everyone else to work in. Simply voting on individual features cannot work, because all the features of an AGI are inter-related; they have to work together synergistically. I'd make a bronze statue of anyone who can solve this problem!!

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=303
