Neil, how can we be so sure? Awareness in others is inferred just because
we tend to relate them to ourselves. We are aware, and so we infer that
organisms which show certain signs must be like us. Maybe in the future,
when robots walk around and talk to us, we could not be so sure; there is
always a possibility, isn't there?

On Mon, Mar 2, 2015 at 3:39 PM, archytas <[email protected]> wrote:

> Definitely not like us RP - though we aren't that sure how we process the
> external either.  No machine has yet woken up to speak to me - but they are
> doing things I don't understand and producing results we haven't thought of
> in ways we can't work out the why of.  We can program them to relate to
> sound, sight, smell, touch and taste (and some other sensing) - but
> sentience is missing.  They can learn from sensor input.
>
>
> On Monday, March 2, 2015 at 9:46:30 AM UTC, RP Singh wrote:
>>
>> Neil, are robots aware of sights and sounds like us or do they just
>> recognise such things without awareness?
>>
>> On Mon, Mar 2, 2015 at 1:34 PM, archytas <[email protected]> wrote:
>>
>>> I'm not sure it has to do anything much to us Allan - though potentially
>>> it changes everything.  The machines could soon be biological - they can
>>> already record information as DNA.  Corrupting programs might be stopped by
>>> surveillance routines.  We could look at this as human, even soul
>>> enhancement and as educational.
>>>
>>>
>>> On Monday, 2 March 2015 07:35:34 UTC, Allan Heretic wrote:
>>>>
AI sounds cool, but there are several problems: it would be easy to
>>>> program violence in, and the manipulation we already see without a chip
>>>> is going to change suddenly with one.  RIGHT!
>>>>
>>>> The other problem is the soul.  Whether pure AI or a mix, will it
>>>> contain a soul?
>>>>
Avoid murder, rape and the enslavement of others.
>>>>
>>>> -----Original Message-----
>>>> From: archytas <[email protected]>
>>>> To: [email protected]
>>>> Sent: Mon, 02 Mar 2015 8:09 AM
>>>> Subject: Mind's Eye Moral Enhancement
>>>>
Humans developed to live in small communities - we were pretty
>>>> murderous in them, and you are now exposed to only a tenth of the chance
>>>> of dying a violent death.  We are not well-equipped for today's global
>>>> circumstances.  We are not much good at large-scale collective moral
>>>> problems.  Moral enhancement in traditional form has been about education,
>>>> religion, or short-term drugs and lobotomy-type interventions.  Artificial
>>>> intelligence is another possibility.
>>>>
>>>> Far from proceeding in the rational way set as an ideal, most of our
>>>> moral views and decisions are made on immediate intuition, emotional
>>>> response and gut reactions. Reasoning, if we do it at all, is often just
>>>> rationalisation of what we intuitively thought anyway. To overcome our
>>>> biological and psychological limitations, we could develop moral artificial
>>>> intelligence.
>>>>
>>>> Many are very scared of this, perhaps because they know they are not
>>>> strong moral agents.  Some think such machines would recognise us for what
>>>> we are (a danger to the planet) and kill us off.  Given our potential to do
>>>> this to each other, I'm dismissive of the machine problem.  MIA could
>>>> monitor a lot more than we manage as humans and point out personal bias and
>>>> advise on the right course of action according to human moral values.
>>>> Agent-tailored MIA would preserve moral pluralism and help the individual's
>>>> autonomy by removing the restriction of her psychology.
>>>>
I have volunteered Gabby for the first MIA chip (no wait, that was
>>>> Cartman with the V-chip in South Park).  In fact, AI is already helping
>>>> with a lot of learning.  We are introducing AI into fraud management
>>>> systems, with patents being filed -
>>>> http://www.freepatentsonline.com/20150032589.pdf - car driving, medical
>>>> and dental analysis, and narrative generation in entertainment -
>>>> http://eprints.hud.ac.uk/23153/1/118.pdf
>>>> - Big Data will drive Big HPC and Complex Analytics. Supercomputers of the
>>>> future will need to: (1) Quantify the uncertainty associated with the
>>>> behaviour of complex systems-of-systems (e.g. hurricanes, nuclear disaster,
>>>> seismic exploration, engineering design) and thereby predict outcomes (e.g.
>>>> impact of intervention actions, business implications of design choices);
>>>> (2) Learn and refine underlying models based on constant monitoring and
>>>> past outcomes; and (3) Provide real-time interactive visualization and
>>>> accommodate “what if” questions in real-time. This will require an
evolution in algorithm and system design, as well as in chip
>>>> architectures, to manage the power-performance trade-offs needed to
>>>> attain a new era of Cognitive Supercomputing.
>>>>
>>>> Heads in the sand on this folks?  Or would you have the "implant" like
>>>> me if one was available?
>>>>
>>>>  --
>>>>
>>>> ---
You received this message because you are subscribed to the Google
>>>> Groups "Minds Eye" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to [email protected].
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
