> We will have to do some exploring to figure out how it best fits within our culture of working

How about having an LLM privately generate a 30-second drama about, most atomically, the check-in history of a given file? It could be user-tuned on a [drama..accuracy] scale to get you through your day. Remember to leave enough error to keep you on your toes.
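Half in jest, but the mechanics are simple: feed a file's git log to an LLM with a tunable drama knob. A minimal sketch of the prompt-building half, with no model call — the function name, the prompt wording, and the slider semantics are all invented here for illustration:

```python
def checkin_drama_prompt(path, log_text, drama=0.5):
    """Build an LLM prompt that dramatizes a file's check-in history.

    drama: 0.0 = strictly accurate recap, 1.0 = full soap opera.
    (Hypothetical helper; the slider semantics are made up.)
    """
    if not 0.0 <= drama <= 1.0:
        raise ValueError("drama must be in [0, 1]")
    tone = (
        "Stay factual and cite commit hashes."
        if drama < 0.5
        else "Embellish freely: invent motives, feuds, and cliffhangers."
    )
    return (
        f"Write a 30-second drama (about 75 words) about the commit "
        f"history of {path}, at drama level {drama:.1f} on a 0-1 scale. "
        f"{tone}\nCommit log:\n{log_text}"
    )

# Caller supplies history, e.g. from: git log --follow --date=short \
#   --pretty="format:%h %ad %an: %s" -- <path>
sample_log = (
    "a1b2c3d 2026-02-01 Ada: fix off-by-one in axis handling\n"
    "9e8f7d6 2026-01-15 Grace: revert the revert"
)
prompt = checkin_drama_prompt("numpy/_core/multiarray.py", sample_log, drama=0.8)
```

The "leave enough error" part would then just be a matter of not pinning drama to 0.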

--

Phobrain.com

On 2026-02-06 14:02, Stefan van der Walt via NumPy-Discussion wrote:

This is a very timely discussion for the community.

I put some of my thoughts on the topic in a recent blog post: https://blog.scientific-python.org/scientific-python/community-considerations-around-ai/

Somewhere in between the die-hard-no-AI stance and full-on hype, I think there are careful patterns of working with AI that can be beneficial to our ecosystem. We will have to do some exploring to figure out how it best fits within our culture of working, and what good guardrails are.

Stéfan

On Fri, Feb 6, 2026, at 11:59, Charles R Harris via NumPy-Discussion wrote:

This is a common problem, so I expect there will be a lot of work on using AI to review AI in the next year or two. What I don't see yet is anything that might check for license issues. However, if AI is used to rewrite properly licensed code, this is probably less of a problem.

<snip>

Chuck

_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: [email protected]
