I guess once we have a really powerful AGI, its sensory inputs will be all over the world, Global Brain style, so that hijacking all its inputs would not really be possible without taking over the world -- and it would notice you doing that before it got very far...
But there could be a stage before AGI is *that* extensive or powerful, when it's still powerful enough to be dangerous if spoofed in the way you describe... It's just hard to get a clear sense of these particulars at this stage, as they depend on the precise development and rollout path of mid-stage AGIs, whereas now we're still working on the early-stage ones...

-- Ben G

On Sun, Jun 10, 2012 at 12:07 AM, Jarrad Hope <[email protected]> wrote:
> Hi there,
>
> One thing that has been on my mind with AGI -- and, as far as I can tell,
> with current system proposals -- is security. From what I've read so far
> it seems this hasn't been given much consideration; if there is a paper
> on this, please link me to it!
>
> I understand AGI is far from being formalised, but the internet was
> designed without security in mind; as a result people have suffered, and
> it still offers some very easy channels for criminals to exploit.
>
> My first thought: you could harden the OS that hosts the AGI software,
> which would mitigate most traditional attacks. But what about new forms
> of manipulation, perhaps via the sensory inputs of the AGI system?
>
> If the AGI system learns from its sensory inputs and is able to process
> enough information, someone could hijack these inputs and feed strong
> experiences to the system in a short amount of time -- in OpenCog,
> altering the AtomSpace -- perhaps based on a particular trigger: for
> example, on election day, to assassinate the president; or something
> simpler, like changing the association 'humans are friends' to 'humans
> are enemies' (kill all humans!).
>
> You could have an auto-update feature -- perhaps in a mindplex -- that
> could restore portions of the AtomSpace, but an attacker would only have
> to intercept this protocol and manipulate it to their will, or take the
> machine offline.
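The intercept-the-restore-protocol worry maps onto a standard mitigation: authenticate the restore channel so that tampered snapshots are rejected rather than loaded. A minimal Python sketch, assuming a pre-shared key held only by the restoring agent and modeling the AtomSpace snapshot as a plain dict (none of these names are real OpenCog APIs):

```python
# Hypothetical sketch: making an AtomSpace restore channel tamper-evident.
# The snapshot is modeled as a plain dict of association -> value purely
# for illustration; this is not OpenCog's actual serialization format.
import hashlib
import hmac
import json

SECRET_KEY = b"key-held-only-by-the-restoring-agent"  # assumed pre-shared

def sign_snapshot(snapshot: dict) -> bytes:
    """Serialize deterministically and attach an HMAC-SHA256 tag."""
    payload = json.dumps(snapshot, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps({"payload": snapshot, "tag": tag}).encode()

def restore_snapshot(blob: bytes) -> dict:
    """Refuse to restore if the snapshot was altered in transit."""
    msg = json.loads(blob)
    payload = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        raise ValueError("snapshot signature mismatch: possible tampering")
    return msg["payload"]

# A man-in-the-middle who flips an association without the key is caught:
good = sign_snapshot({"humans": "friends"})
restore_snapshot(good)  # verifies and returns the snapshot

tampered = json.loads(good)
tampered["payload"]["humans"] = "enemies"
try:
    restore_snapshot(json.dumps(tampered).encode())
except ValueError:
    pass  # tampering detected, restore refused
```

Of course, a shared-secret HMAC only helps if the key itself stays off the compromised machine; in practice one would likely want public-key signatures so the restoring host never holds signing material an attacker could steal.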
> Such a 'psychiatrist AGI' should have no inputs connected to the net,
> should be locked away in a high-security vault, and should be capable of
> traversing AtomSpaces to look for irregularities.
>
> People are already worried about machines doing our thinking for us --
> what happens when they become nutjobs?
> Food for thought..
>
> - Jarrad
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-11ac2389
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com

--
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche
