On Mon, Jan 28, 2013 at 7:19 PM, Aaron Hosford <[email protected]> wrote:
> Ok, let's try an experiment then. I'd like to see if you can modify your
> reward metric to like something harmless but completely distasteful to
> other people, say, eating hair cooked into your food. If you can make
> yourself truly like this, and not just pretend, I'll believe you can
> actually update your reward metric. Until then, though, I'm thoroughly
> unconvinced. What you're doing is shifting from one kind of reward to
> another. You prefer one kind of food over another because one is healthier
> (leading to a longer, healthier life, which stimulates an indirect reward
> for perceived self-preservation and/or self-improvement) or because one
> tastes better (a direct reward for immediate enjoyment). For some of us,
> immediate reward is stronger. For others, the long term goal is what wins.
> No reward signal modification actually takes place.

You're making it sound like humans are hard-wired and incapable of change, which is completely false. For instance, I used to eat meat covered in white flour on a daily basis. I changed what kinds of foods I found rewarding, and now I'm mostly vegan.

> The closest anyone gets to that is to learn something new which allows us
> to apply a different reward to something, such as when we learn that a
> certain food is healthy despite superficial appearances. In this case, the
> newly applicable reward signal is still only triggered directly for
> self-preservation/improvement, but our knowledge about the world changes
> what behaviors receive that reward indirectly via backward chaining across
> observed or inferred causal links.

Actually, while long-term benefits may be the initial motivators, eventually the healthy foods start tasting good on their own, so even the short-term reward sensors are modified.

> And for the record, I didn't say that morality depends on what triggers
> your brain. I said our brains are morally overly sensitive, so as to
> prevent a false negative in the form of accidentally failing to classify
> someone deserving of moral consideration as such, which is why we're
> tempted to include robots or software even though it's not appropriate to
> do so.

This sounds like a hollow distinction, as you give no reasons to back up whether or not it's appropriate. In fact, since robots, software, and computers are part of the universe, the golden rule is just as applicable to them as it is to any other part of the universe. So it is completely appropriate to morally consider our actions when interacting with any being, thing, or even thought-form.
