On Thursday 17 May 2007 04:42:33 pm Mike Tintner wrote:
> Won't somebody actually deal with the problem -- how will your AGI system
> decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow?
> (You guys are supposed to be in the problem-solving business.)

Au contraire. Mainstream AI is in the problem-solving business. We're in the 
business of trying to figure out how to build a machine that can *learn* to 
solve problems.

How would a human being decide whether or not to invest in a mutual fund? If he 
tried to decide based on a small handful of formal definitions and heuristics, 
he'd have a fair chance of losing money. Indeed, it's not uncommon at all for 
humans to lose money on attempted investments. Thus your problem has some 
smell of the "superhuman human" fallacy that has plagued AI for lo these many 
years.

In real life, the humans who make good investments, more often than not, do so 
by dint of experience -- their own experiments, plus watching other investors 
and gaining second-hand experience. This is the way an AI would have to do it. 
There is no magic formula here -- just lots of hard work.

Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936