> On Dec 4, 2025, at 5:49 AM, Simon Albrecht <[email protected]> wrote:
> 
> On 03.12.25 23:16, Werner LEMBERG wrote:
>> I think this mailing list is not the right place to discuss the impact
>> of AI to mankind;
> 
> That may well be, but I think the topic is too important to not discuss. Like 
> everyone else, we rely on cultural ecosystems not dying.

I agree that the topic is extremely important, and also that this mailing list 
is not the place to debate its sociopolitical, economic, educational, etc., 
ramifications.

I am an extreme skeptic of AI, considering it "artificial idiocy" rather than 
"artificial intelligence." Training it on the Internet is a profoundly bad 
strategy, given that 90% of Internet content is ignorant BS and much of the 
rest is lies, leaving only a sliver of conscientious truth and accuracy by 
comparison. AI appears able to deliver BS and lies with great efficiency, and 
is far less adept at delivering accurate, fact-based information.

I am also deeply skeptical of the intentions and purposes to which it can be 
put for centralized control of individual behavior and values. At the core of 
my concern is human failing, particularly the greed for power and money that AI 
appears to embody. At this point, the primary goal of AI seems to be to collect 
as much information about us individually as possible, so as to advertise to us 
more effectively. What a small goal; but, as with everything else, those are 
the economics of all Internet-based activity. Advertising is the fastest route 
to stable revenue for AI companies; the second most stable route is selling the 
service to benefit the political power and wealth accumulation of those already 
at the top of the heap.

AI is here. It cannot be put back into Pandora's box, and for good or ill we 
are all going to have to deal with it. The optimistic viewpoint is that 
ultimately it will be something like Star Trek: a useful tool to help solve 
dilemmas and problems efficiently and effectively. That is a long way off, and 
the near-term potential for abuse is deeply alarming.

As for the death of cultural ecosystems, that has happened continuously 
throughout human history. As the Buddha noted 2500 years ago, "all Dharmas 
end." Our cultural ecosystems will be profoundly reorganized by the ascendance 
of AI; the question is whether we can put enough guardrails in place to avoid 
self-destruction in the process.

I wonder whether, given the intended purposes of the LilyPond users mailing 
list, those who wish to continue this discussion should do so privately rather 
than through the list server.
