Derek Zahn wrote:
Richard Loosemore:

 > I read Pei's paper and there was nothing horrifying about it (please
 > spare the sarcasm).
No sarcasm intended. If I had just come to the conclusion that 28 papers in a row were a waste of time, I'd be horrified at the prospect of a 29th that would also not give me what I was looking for. I was merely trying for a light tone while expressing my belief that you were unlikely to change your conclusion about the field of Artificial General Intelligence by spending further time with that paper or the videos. I shall, however, spare you such glib language in future correspondence.

My apologies, Derek, if no sarcasm was intended. Sometimes the context can make a tone appear where none was meant.

Pei's paper was one of the 28. The truth is that, on reflection, his paper was in a category all its own: it addressed background issues, where most other people simply swept those issues under the carpet and went straight to hacking.

I happen not to think that Pei's analysis was the whole story, but that level of disagreement was a million miles removed from my beef with most of the other papers. Those others, in my view, were just avoiding the real issues.

So, for example, if I were organizing a conference on AGI I would want people to address such questions as:

- What exactly is the grounding problem, and how does it impact the construction of an AGI?

- What assumptions do we have to buy into if we go with Bayesian nets as a choice of reasoning/representation formalism? And how would we go about finding out whether those assumptions are valid enough to make it safe to use Bayes nets?

- Ditto for other choices of representation.

- What are the different ways that a logical reasoning formalism can be linked to a learning mechanism?

- How can we distinguish between AGI models that are just arbitrary pet designs that seem good to their creators, and models that have some independent justification to them? How do we do better than "You'll see how good my formalism is when I get it working!"?

- The complex systems issue that I have raised here repeatedly.

- Control issues that arise with AGI systems that do not arise with narrow AI systems.

- What is the relationship between probabilities (or similar parameters attached to terms in a representation) and the meaning of those probabilities in the real world? Similarly, how do probabilities suffer from real world issues about gathering the data that supplies the numbers? How do uncertainties propagate in such systems?
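To make that last question concrete, here is a small sketch (my own illustration, not something from Richard's post) of one way uncertainties compound: if each link in a chain of inferences carries a probability that was itself estimated from limited data, the error in the chained conclusion grows with the length of the chain. The chain length, sample size, and link probabilities below are arbitrary choices for the demonstration.

```python
import random

def estimate(true_p, n, rng):
    """Estimate a probability from n Bernoulli observations (finite data)."""
    return sum(rng.random() < true_p for _ in range(n)) / n

def chain_estimate(true_ps, n, rng):
    """Multiply independently estimated link probabilities, as a naive
    reasoning system might do when chaining inferences together."""
    prod = 1.0
    for p in true_ps:
        prod *= estimate(p, n, rng)
    return prod

rng = random.Random(0)
true_ps = [0.9] * 10           # a ten-link chain, each link truly 0.9
true_product = 0.9 ** 10       # the "right answer" for the whole chain

# Spread of the chained estimate across many independent trials
trials = [chain_estimate(true_ps, n=50, rng=rng) for _ in range(2000)]
mean = sum(trials) / len(trials)
spread = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
print(f"true={true_product:.3f} mean={mean:.3f} stddev={spread:.3f}")
```

The point of the sketch is only that the standard deviation of the ten-link estimate is several times larger than the sampling error of any single link, so a system that attaches data-derived probabilities to its representations and then chains them has to account for how those errors accumulate.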


But instead of deep-foundation topics like these, what do we get? Mostly what we get is hacks. People just want to dive right in and make quick assumptions about the answers to all of these issues; then they start hacking and build something - *anything* - to make it look as though they are getting somewhere.




Richard Loosemore

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com