> On 12 Jun 2020, at 20:22, Jason Resch <[email protected]> wrote:
> 
> 
> 
> On Wed, Jun 10, 2020 at 5:55 PM PGC <[email protected]> wrote:
> 
> 
> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any objective, 
> third-person scientific analysis or test is doomed to find no distinction 
> in behavior, and thus necessarily fails to disprove consciousness in the 
> functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
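The thought experiment above can be made concrete with a toy sketch (all names are hypothetical; the two "brains" are stand-in deterministic functions, with exact functional equivalence built in by definition):

```python
import hashlib

def biological_brain(stimulus: str) -> str:
    # Stand-in for the biological brain: a fixed, deterministic
    # mapping from sensory input to a verbal report.
    digest = hashlib.sha256(stimulus.encode()).hexdigest()[:8]
    return f"it hurts ({digest})"

def emulated_brain(stimulus: str) -> str:
    # "Exact functional equivalence" means the emulation computes
    # the very same input -> output mapping.
    return biological_brain(stimulus)

def third_person_test(stimuli) -> str:
    # A purely behavioral, third-person test: probe both systems
    # with identical inputs and compare their reports.
    for s in stimuli:
        if biological_brain(s) != emulated_brain(s):
            return "distinguishable"
    return "indistinguishable"

probes = ["stimulate back nerves", "describe the pain", "write a paragraph"]
print(third_person_test(probes))  # indistinguishable, by construction
```

No behavioral probe can separate the two, because the emulation is defined as computing the same function; any test restricted to inputs and outputs reports "indistinguishable" no matter which probes are chosen. That is the testing roadblock in miniature.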
> 
> Every piece of writing is a theory of mind, both within western science and 
> beyond. 
> 
> What about the abilities to understand and use natural language, to come up 
> with new avenues for scientific or creative inquiry, to experience qualia and 
> report on them, to adapt to unexpected circumstances through the senses, and 
> to formulate and solve problems in benevolent ways by contributing to the 
> resilience of its community and environment? 
> 
> The trouble with this is that humans, even world leaders, fail those tests, 
> lol. But it's up to everybody, the AI and Computer Science folks in 
> particular, to come up with the math and data and complete that mission... 
> and as amazing as developments in AI have been over the last couple of 
> decades, I'm not certain we can pull it off, even if it would be pleasant to 
> be wrong and see some folks succeed. 
> 
> It's interesting you bring this up; I just wrote an article about the present 
> capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/
> 
> Even if folks do succeed, a context of militarized nation states and 
> monopolistic corporations competing for resources in self-destructive, 
> short-term ways will not exactly help avoid weaponizing AI. A transnational 
> politics, economics, corporate law, values/philosophies, ethics, culture, 
> etc., that vanquishes poverty and the exploitation of people, natural 
> resources, and life, while acting as sustainable and benevolent stewards of 
> the possibilities of life, would seem to be a prerequisite for developing 
> some amazing AI. 
> 
> Ideas are all out there but progressives are ineffective politically on a 
> global scale. The right wing folks, finance guys, large irresponsible 
> monopolistic corporations are much more effective in organizing themselves 
> globally and forcing agendas down everybody's throats. So why wouldn't AI do 
> the same? PGC
> 
> 
> AI will either be a blessing or a curse. I don't think it can be anything in 
> the middle.


That is strange. I would say that “AI”, like any “I”, will be a blessing *and* 
a curse: something capable of the best, and of the worst, at least locally. AI 
is like life, which can be a blessing or a curse according to contingent 
happenings. We never get total control once we invite universal beings to the 
table of discussion.

I don’t believe in AI. All universal machines are intelligent at the start, and 
can only become more stupid (or at best stay the same). The consciousness of 
bacteria and humans is the same consciousness (the RA consciousness). 
Löbianity is the first (unavoidable) step toward “possible stupidity”. Cf. G* 
proves <>[]f. Humanity is a byproduct of bacteria’s attempts to get social 
security… (to be short: it is slightly more complex, but I don’t want to be 
led into too much technicality right now). 
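(For readers unused to the notation: [] is the provability box, <> = ¬[]¬ is consistency, f is falsity; G is the logic of the machine’s provable self-referential statements — Solovay’s GL — and G* the logic of the true ones. Assuming, as usual here, a consistent machine, the cited fact is the formalized second incompleteness theorem:)

```latex
% Goedel II in the modal logic of provability:
% G proves "if I am consistent, then my inconsistency is consistent":
G \;\vdash\; \Diamond t \rightarrow \Diamond\Box f
% G* contains the machine's actual (true but unprovable) consistency
% \Diamond t, so the consequent detaches:
G^{*} \;\vdash\; \Diamond\Box f
\qquad\text{while}\qquad
G \;\nvdash\; \Diamond\Box f
```

So the machine, if sound, can truly but unprovably assert that its own inconsistency is consistent with what it proves; that gap between G and G* is the “possible stupidity” alluded to above.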


Bruno 


> 
> Jason 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUg6XyBiey6-Fgge7orv%3D_kS69tprAwviaKag5w73-8v2g%40mail.gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/C2ACD01F-BCBA-43A5-80DC-985ACC0B6419%40ulb.ac.be.
