On Fri, Nov 14, 2025 at 12:15 PM Francesco Bonazzi
<[email protected]> wrote:
>
> Let's remember that LLMs may write copyrighted material. There is some risk 
> associated with copy-pasting from an LLM output into SymPy code.
>
> Furthermore, what practical algorithmic improvements can an LLM do to SymPy? 
> Can an LLM finish the implementation of the Risch algorithm? I doubt it.

"Finishing the Risch algorithm" is an enormous task. Of course an LLM
cannot one-shot that, and a human couldn't do it in one PR either. But
I have actually been using Claude Code to do some improvements to the
Risch algorithm and it's been working. So the statement that an LLM
cannot help with algorithmic improvements in SymPy is false.

>
> On Friday, November 14, 2025 at 3:24:50 p.m. UTC+1 [email protected] wrote:
>
> However, it requires Premium requests, so not everyone can use this feature.
>
>
>  Most of these AI-assisted tools are designed to take money from developers. 
> I would strongly advise against paying for these services.

I can't speak to the specific tool being mentioned here, but the best
LLMs right now do require you to pay for them. If a tool is good and
actually improves developer quality of life, we shouldn't be afraid to
pay for it (that applies even outside of AI tools).

FWIW, when it comes to code review, my suggestion would be to use a
local LLM tool like claude code or codex to do the review for you. It
wouldn't be completely automated, but that would give you the best
results. I also agree that writing down the sorts of things you're
looking for in a SymPy review somewhere in the context is going to
make it work better. I would start by having an agent analyze recent
reviews (say, the 50 most recent PR reviews by Oscar), and use that to
construct a Markdown document listing the sorts of things that a good
reviewer is looking for in a review of a SymPy pull request.
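As a rough sketch of that last step (everything here is hypothetical:
the helper name and the input shape are my assumptions, not an existing
SymPy tool; the actual review data could come from the GitHub CLI), the
idea is just to collect the review comments into one Markdown file that
an agent can then analyze for recurring criteria:

```python
# Hypothetical sketch: turn a pile of PR review comments into one
# Markdown document for an agent to analyze for recurring review
# criteria. The raw data could come from the GitHub CLI, e.g.:
#   gh api "repos/sympy/sympy/pulls/comments?per_page=100"

def reviews_to_markdown(reviews, reviewer=None):
    """Format review comments as Markdown for an agent to read.

    `reviews` is a list of dicts with "pr", "author", and "body" keys;
    this shape is an assumption, not the GitHub API schema.
    """
    lines = ["# Recent SymPy PR review comments", ""]
    for r in reviews:
        if reviewer is not None and r["author"] != reviewer:
            continue  # keep only the reviewer we want to learn from
        lines.append(f"## PR #{r['pr']} (by {r['author']})")
        lines.append("")
        lines.append(r["body"].strip())
        lines.append("")
    return "\n".join(lines)
```

You would then hand the resulting file to the agent and ask it to write
the checklist document from it.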

>
> LLMs look good at first because most questions had answers in their training 
> set. As soon as you ask an LLM to do anything non-standard, or just to fix 
> existing code in a way that is not trivial, they fail miserably.

This was true three years ago with GPT-3, but it is not true anymore. I
encourage you to try using GPT-5 Codex or Claude 4.5 Sonnet, ideally
through a modern tool like the Codex CLI, Claude Code, or Cursor. These
models are very good and can reason about problems they've never seen
before. They still have holes, and you still have to check everything
they do, but you can't just assume that something isn't going to work
without trying it.

Even if you have moral qualms about AI (which I personally do not
share), you shouldn't let them give you the wrong impression of the
capabilities of these tools, especially the best-in-class models like
Claude.

Aaron Meurer

>
> --
> You received this message because you are subscribed to the Google Groups 
> "sympy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion visit 
> https://groups.google.com/d/msgid/sympy/3c225245-11be-49d4-8f50-f7c2f0010c44n%40googlegroups.com.
