On Thu, Aug 23, 2012 at 2:53 PM, [email protected]
<[email protected]> wrote:
>
> In other words, *we* implement the utility function. And we use it daily
> to find new ways to improve itself. I like this line of reasoning, and I
> think you're right, but it leaves me intellectually unsatisfied. I still
> want to build a true AGI, just to prove I can (and to understand how minds
> work better). Safer doesn't always mean better.

Are you really going to build something with more knowledge and
computing power than the internet? The internet can already do a lot,
but hard problems remain in natural language, vision, robotics, and
modeling human behavior before machines can automate the roughly $70
trillion per year of work still done by humans because machines aren't
smart enough.

It seems a lot of people are still trying to build artificial human
minds. Why? Do we need to duplicate human weaknesses as well as
strengths? Do we really want machines that won't work nights and
weekends? Do we really need a 10 petaflop calculator that can only add
one digit per second with 95% accuracy? Is there a market for
artificial toddlers?

If that's not what you intend to build, then what?


-- Matt Mahoney, [email protected]


-------------------------------------------
AGI