*Anthropic says AI is improving too fast and claims government regulation is needed. It says "legislation should be easy to understand and implement". I am EXTREMELY skeptical that regulation could work, but they do provide some interesting examples concerning the rate of improvement of AI.*

*"On the SWE-bench software engineering task, models have improved from
being able to solve 1.96% of a test set of real-world coding problems
(Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5
Sonnet, October 2024)" *

*"AI systems have improved their scientific understanding by nearly 18%
from June to September of this year alone, according to benchmark test
GPQA. OpenAI o1 achieved 77.3%; on the hardest section of the test; human
experts scored 81.2%."*

*Anthropic wants governments to regulate AI to avoid catastrophe within 18 months*
<https://www.zdnet.com/article/anthropic-warns-of-ai-catastrophe-if-governments-dont-regulate-in-18-months/>

*John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>*
