On Sat, Nov 2, 2024 at 3:02 PM John Clark <[email protected]> wrote:
>
> Anthropic says AI is improving too fast, and claims government regulation is
> needed. It says "legislation should be easy to understand and implement". I
> am EXTREMELY skeptical that regulation could work, but they do provide some
> interesting examples concerning the rate of improvement of AI.
Great. So China will develop AI at full speed. North Korea and Iran will develop AI at full speed. And Russia, of course. Rogue corporations will develop AI at full speed. Terrorists and criminals will develop AI at full speed. And we will do nothing. If AI is outlawed, only outlaws will have AI.

> "On the SWE-bench software engineering task, models have improved from being
> able to solve 1.96% of a test set of real-world coding problems (Claude 2,
> October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet,
> October 2024)"
>
> "AI systems have improved their scientific understanding by nearly 18% from
> June to September of this year alone, according to benchmark test GPQA.
> OpenAI o1 achieved 77.3% on the hardest section of the test; human experts
> scored 81.2%."
>
> Anthropic wants governments to regulate AI within 18 months to avoid
> catastrophe
>
> John K Clark    See what's on my new list at Extropolis

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/everything-list/CAKTCJydnjPB5dJ-%3DWBVXdK8AbnwLNXmCCTZ0CzxZoZDkYDL0hQ%40mail.gmail.com.

