I guess the approach of extracting all that information ahead of time
reminds me of the old approach to making robots walk. The robot would
take a step and then spend hours performing calculus on a model of its
leg to determine how the next step should proceed. The newer approach is
simpler: let the leg itself be its own model, and make real-time
corrections to its movement as the robot notices it going in the wrong
direction. It's this on-the-fly correction, combined with letting the
system be its own model, that I see as a good way to deal with
impossible-seeming computational problems like determining exactly what
everyone wants ahead of time. Besides, what if someone changes their
mind? I'm sure that, since we're dealing with 7 billion dynamical
systems whose state is perpetually changing, most of us won't even know
for ourselves what we want in advance.
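The contrast I have in mind can be sketched as a simple closed-loop
controller: rather than planning the whole trajectory up front, measure
the error each step and nudge the system toward the target. This is just
an illustrative sketch (a basic proportional controller; the function
and names here are hypothetical, not from any robotics library):

```python
# Sketch: on-the-fly correction instead of precomputed planning.
# Each iteration measures the current error and applies a small
# proportional correction, so no exact model is needed in advance.

def correct_toward(target, position, gain=0.5, steps=20):
    """Nudge `position` toward `target` using the observed error,
    rather than computing the full path ahead of time."""
    trajectory = [position]
    for _ in range(steps):
        error = target - position      # observe where we actually are
        position += gain * error       # correct based on the error
        trajectory.append(position)
    return trajectory

path = correct_toward(target=1.0, position=0.0)
```

The point is that the error signal does the work a detailed model would
otherwise have to do: with any gain between 0 and 1, the remaining error
shrinks every step, so the system converges without ever having been
told the exact path.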

On Fri, Aug 24, 2012 at 4:31 PM, Aaron Hosford <[email protected]> wrote:

> I meant literally, "Make people happy," in those words, maybe with an,
> "Ask people what makes them happy," tacked on or hardcoded in. If the
> system truly understands natural language and is even quasi-intelligent,
> then we can use the system's own intelligence to derive the complicated
> messy details as to what makes people happy by having the system ask people
> for their preferences and actually listen to and think about what they tell
> it -- especially if they take the time to correct it because it messed up
> and did the wrong thing. When you raise a child, you teach the child what's
> expected of it. Children don't come with that built in. They have to be
> taught. The difference is, children have value in and of themselves,
> and they have many goals in addition to making Mom or Dad happy. This
> system would only have value derived from its service to humanity (meaning
> it doesn't itself deserve moral consideration, since it's just a tool) and
> its one and only goal would be to make its creators happy, in whatever way
> they define that.
>
>
>
> On Fri, Aug 24, 2012 at 4:21 PM, Matt Mahoney <[email protected]> wrote:
>
>> On Fri, Aug 24, 2012 at 2:24 PM, Aaron Hosford <[email protected]>
>> wrote:
>> >
>> > So why wouldn't we design a system that attempts to attain a nice
>> > simple goal like "make people happy" and build in the awareness that
>> > in order to define that goal in all its complexity, it needs to
>> > *ask* us what we want.
>>
>> Because that's not a shortcut. The goal "make people happy" is not
>> nice and simple. It is 10^17 bits, unless you mean make people happy
>> by giving them drugs or inserting an electrode into the nucleus
>> accumbens.
>>
>> > Then the system iteratively refines that goal as new information
>> > comes in at the measly rate of "1 to 5 bits per second through
>> > speech, writing, or typing", as time is available and the need
>> > arises, making do with a less individualized but still highly
>> > effective definition of the general goal in the meantime. People
>> > recognize the value of information vs. the time it takes to
>> > communicate it, and will point out the most inconvenient
>> > misunderstandings first, so the system can rely on the users to
>> > selectively identify and convey the information it needs to know in
>> > order to meet their needs. In other words, if you want the system to
>> > be individualized to your preferences, you pay the cost of gathering
>> > & transmitting a description of your preferences. This is the
>> > current model for all those apps you mention: you go to the
>> > preferences page and check the boxes according to what you prefer.
>> > In the future, it will be communicated via natural language, but it
>> > will be the same principle at work.
>>
>> I thought we were already doing that. But yes, the cost of
>> communicating our preferences will be the most expensive part of AGI
>> once Moore's Law makes the hardware cheap enough. (Right now, it would
>> cost $1 quintillion if you could buy it. In 15 years the same
>> computing power should cost $1 quadrillion, low enough to make it cost
>> effective to replace most human labor). Natural language is better
>> than filling out an online survey. Observing your behavior is better
>> still. Guessing based on the preferences of other people with similar
>> behavior is better still. We already do all of these things because it
>> is so expensive.
>>
>>
>> -- Matt Mahoney, [email protected]
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed:
>> https://www.listbox.com/member/archive/rss/303/23050605-bcb45fb4
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>
>


