On 2/21/20 9:24 PM, Matt Mahoney wrote:


On Fri, Feb 21, 2020, 12:03 PM Stanley Nilsen <[email protected]> wrote:
comments inserted below (always appreciate Matt's posts.)

On 2/20/20 8:05 PM, Matt Mahoney wrote:
The goal of AGI is to automate human labor. It requires solving hard
problems like vision, language, robotics, art, and modeling human
behavior.
We may have different ideas of the goal of AGI.  If "automate human labor" is the definition, then we have been doing AGI for hundreds of years - the steam shovel replaced human labor; the telegraph replaced the horse and rider ...

AGI is expensive. We need a good reason to try to solve it.  We pay people $80 trillion per year to do work that machines aren't smart enough to do.

The "I" in AGI means intelligence, right?  Intelligence has the attribute of rightness - that is, a positive connotation that intelligence does the right thing.

You're confusing intelligence with morality. Suppose my goal was to wipe out humanity. The smarter I was, the quicker I could achieve my goal.

If you take the morality out of intelligence, then you should use the term "power."  Power is not intelligence.  Does the mass shooter appear intelligent to you?  I would say he was not intelligent, because the outcome was not good.

Just because you can accomplish a goal does not mean you are intelligent.  Intelligence should at least contain the idea that the outcome was good.  If not, then talk about building greater powers: a new kind of nuclear weapon that works against human freedom, or surveillance powers used to take away human freedom.  Power, power, power - but not intelligence.  Yes, computer power is creating new powers.

If we change the meaning of intelligence so that it no longer has the element of goodness, then we are wrong to embrace and idolize it.  Pity the world where our greater intelligence is just another hired gun, going to the highest bidder.
So, what's it going to be, AGP or AGI?

 -- off soap box 
stan
