On Wednesday, December 06, 2023, at 12:50 PM, James Bowery wrote:
> Please note the, uh, popularity of the notion that there is no free will.  
> Also note Matt's prior comment on recursive self-improvement having started 
> with primitive technology.  
> 
> From this "popular" perspective, there is no *principled* reason to view "AGI 
> Safety" as distinct from the de facto utility function guiding decisions at a 
> global level.

Oh, that’s bad. Any semblance of free will is a threat. These far-right 
extremists will be hunted down and investigated as potential harborers of 
testosterone.

It’s the flawed thinking that if everyone speaks the same language, for 
example, or if there is just one world government, everything will be better 
and more efficient. The homogenization becomes unbearable. It might be entropy 
at work, squeezing out excess complexity and imposing a control framework on 
human negentropic slave resources.