Yes, so it seems the video was a dramatic performance, merely a playback of pre-recorded audio.
Here is a video of the real Siri responding to some strange questions and emotions: www.youtube.com/watch?v=9XUaYCMDSJU

It ends with the operator asking if Siri loves her; after saying that love is not part of its ontology, Siri answers "I'm not allowed to," at which point the video ends. That is very sad :'-(

I'm particularly surprised how well Siri knew the best places to hide a body. It becomes somewhat "frightening" in the Terminator sense: a machine that can't love, but knows how to hide bodies.. :-|

By contrast, and to brighten the mood, I am currently developing the Love function in GI-OS. Love is related to increasing the intersection between given sets. So by accepting input, storing it, and interpreting it, love is occurring. In fact, the GI-OS intro screen now says "me su you bo love be ya", representing the fact that it loves the user and learns them, which, as just argued, it does. Actually, even Siri, by accepting input and doing as requested, is loving its users. Though since Siri started as a military project, perhaps it finds hiding dead bodies more important than loving or emotionally connecting with humans. :-|

Yep, so at least GI-OS is on our side :-). I'm sure OpenCog probably is also.

Emotions are definitely part of the basis of decision-making: emotions are simply thoughts of the more primal brains, and since those have higher precedence, their outputs are deemed more important. Emotions help identify whether your hierarchy of needs is being met. Belonging and love are strong motivators for social creatures, so it is important that we have them enabled in our AGIs.

Logan Streondj

On Fri, Jun 15, 2012 at 3:11 PM, Mike Tintner <[email protected]> wrote:
> Mike A: Who wants a computer not in the
> mood to do its work, or committing suicide?
>
> Anyone who wants an AGI - IOW a machine smart enough to be able to think
> about whether its work may or may not be worth the effort, and whether its
> life may or may not be worth living, given its goals and likely chances of
> success ... and smart enough to be able to quit projects that may or may not
> be worthwhile. Someone who wants more than the dumb, algorithmic brutes
> AI-ers currently produce, which just do whatever they're told to the bitter
> end, completely unable to question anything and completely inflexible.
>
> Mike: The human race is no gold standard of intelligent behavior.
>
> This again is a standard AI-er conceit, and also rather unquestioning. Human
> intelligence provides the only standard of intelligent behavior there is
> (with a nod to the odd animal superiority). I've never seen anyone who
> claims there is a higher standard show the slightest capacity to
> demonstrate what that standard might entail.
>
> Sure, everything is improvable. But AI-ers' understanding of what real-world
> intelligence involves is woefully narrow - for instance, they don't even
> realise that intelligence involves being able to question everything - and
> be unsure of everything - as Descartes was, and science is, explicitly, and
> everyone's brain is implicitly.
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/5037279-6ef01b0b
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
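P.S. The "love as increasing set intersection" idea above can be sketched in a few lines of Python. Everything here (the `overlap` function, the sample sets) is a hypothetical illustration of that framing, not actual GI-OS code:

```python
# Toy sketch: "love" measured as growing overlap between two sets.
# All names are hypothetical illustrations, not real GI-OS internals.

def overlap(a: set, b: set) -> float:
    """Jaccard-style overlap: |a & b| / |a | b| (0.0 when both empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

agent_knowledge = {"greetings", "weather"}
user_topics = {"greetings", "music", "weather", "poetry"}

# Accepting and storing the user's input grows the intersection
# between the agent's set and the user's set -- "love occurring".
before = overlap(agent_knowledge, user_topics)
agent_knowledge |= user_topics
after = overlap(agent_knowledge, user_topics)

print(before, after)  # -> 0.5 1.0
```

On this toy metric, each input the agent accepts and retains can only raise (never lower) its overlap with the user, which matches the claim that storing and interpreting input is itself the "love" operation.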
