Humans evolved to live in small communities - we were pretty murderous in 
them, and today you face only about a tenth of the chance of dying a 
violent death.  We are not well-equipped for today's global circumstances. 
 We are not much good at large-scale collective moral problems.  Moral 
enhancement in its traditional forms has meant education, religion, or 
short-term drugs and lobotomy-type interventions.  Artificial intelligence 
is another possibility.

Far from proceeding in the rational way we hold up as an ideal, most of our 
moral views and decisions are made on immediate intuition, emotional 
response and gut reaction. Reasoning, if we do it at all, is often just 
rationalisation of what we intuitively thought anyway. To overcome our 
biological and psychological limitations, we could develop moral artificial 
intelligence (MIA).

Many are very scared of this, perhaps because they know they are not strong 
moral agents.  Some think such machines would recognise us for what we are 
(a danger to the planet) and kill us off.  Given our own potential to do 
that to each other, I'm dismissive of the machine problem.  MIA could 
monitor far more than we manage as humans, point out personal bias, and 
advise on the right course of action according to human moral values. 
 Agent-tailored MIA would preserve moral pluralism and support the 
individual's autonomy by lifting the restrictions of her own psychology.
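To make the "agent-tailored" idea concrete, here is a toy sketch of such an advisor. Everything in it - the class name, the value profile, the scoring scheme - is my own illustrative invention, not any real system: the point is only that each person supplies their own weighted values (preserving pluralism) and the advisor ranks options against those values while logging choices, so it could later flag systematic bias.

```python
# Toy sketch of an agent-tailored moral advisor ("MIA").
# All names and the weighting scheme are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class MoralAdvisor:
    # The agent's OWN declared values, weighted - moral pluralism:
    # each person supplies their own profile.
    values: dict                       # e.g. {"honesty": 0.9, "loyalty": 0.4}
    history: list = field(default_factory=list)

    def score(self, option):
        # 'option' maps value names to how strongly the action honours (+)
        # or violates (-) each value, on a [-1, 1] scale.
        return sum(self.values.get(v, 0.0) * w for v, w in option.items())

    def advise(self, options):
        # Rank options by the agent's own values; record the recommendation
        # so a pattern of bias (e.g. always favouring loyalty over declared
        # honesty) could later be pointed out to the agent.
        ranked = sorted(options, key=lambda pair: self.score(pair[1]),
                        reverse=True)
        self.history.append(ranked[0][0])
        return ranked[0][0]

advisor = MoralAdvisor(values={"honesty": 0.9, "loyalty": 0.4})
choice = advisor.advise([
    ("tell the truth",      {"honesty": 1.0, "loyalty": -0.5}),
    ("cover for a friend",  {"honesty": -1.0, "loyalty": 1.0}),
])
print(choice)  # -> "tell the truth" (score 0.7 beats -0.5)
```

A different agent who weighted loyalty above honesty would get the opposite advice from the same code - which is exactly the pluralism being claimed.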

I have volunteered Gabby for the first MIA chip (no wait, that was Cartman 
with the V chip in South Park).  In fact, AI is already helping with a lot 
of learning.  We are introducing AI into fraud management systems, with 
patents being filed - http://www.freepatentsonline.com/20150032589.pdf - 
as well as into car driving, medical and dental analysis, and narrative 
generation in entertainment - http://eprints.hud.ac.uk/23153/1/118.pdf. 
Big Data will drive Big HPC and Complex Analytics. Supercomputers of the 
future will need
to:

(1) Quantify the uncertainty associated with the behaviour of complex 
systems-of-systems (e.g. hurricanes, nuclear disasters, seismic 
exploration, engineering design) and thereby predict outcomes (e.g. the 
impact of intervention actions, the business implications of design 
choices);

(2) Learn and refine the underlying models based on constant monitoring 
and past outcomes; and

(3) Provide real-time interactive visualisation and accommodate “what if” 
questions in real time.

This will require an evolution in algorithm and system design, and even in 
chip architectures, to manage the power-performance trade-offs needed to 
attain a new era of Cognitive Supercomputing.
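The core of point (1) is nothing exotic: draw the uncertain inputs from distributions, push each draw through the model, and read a predictive interval off the outputs. A minimal Monte Carlo sketch, where the "damage model" and both input distributions are stand-in toys of my own invention (real codes run physics simulations here, at supercomputer scale):

```python
# Minimal Monte Carlo uncertainty-quantification sketch.
# The model and the input distributions are hypothetical toys.

import random
random.seed(0)  # reproducible draws

def model(wind_speed, pressure_drop):
    # Stand-in for an expensive simulation (e.g. hurricane damage).
    return 0.5 * wind_speed ** 2 + 10.0 * pressure_drop

samples = []
for _ in range(10_000):
    wind = random.gauss(50.0, 5.0)   # uncertain input, m/s (assumed)
    drop = random.gauss(40.0, 8.0)   # uncertain input, hPa (assumed)
    samples.append(model(wind, drop))

samples.sort()
mean = sum(samples) / len(samples)
# ~90% predictive interval: 5th to 95th percentile of the outputs.
lo, hi = samples[len(samples) // 20], samples[-(len(samples) // 20)]
print(f"mean damage index: {mean:.0f}, ~90% interval: [{lo:.0f}, {hi:.0f}]")
```

Points (2) and (3) are then about closing the loop - re-fitting those input distributions as observations arrive, and re-running sweeps like this fast enough that a "what if we change the design?" question gets an answer while the questioner is still looking at the screen.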

Heads in the sand on this, folks?  Or would you have the "implant" like me 
if one were available?

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"Minds Eye" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
