Hi everyone!

Recently, I have been studying the codebase and contributing to the assumptions 
module. I currently have three open PRs related to refine handlers (such as 
#29173 and #29183), which have helped me become quite familiar with how the 
assumptions system works under the hood.

While looking at the Ideas List, the “Benchmark and performance” project really 
caught my attention. Since I’ve been working heavily with the assumptions 
module, I was thinking of building my GSoC proposal around modernizing the 
benchmark suite, focusing specifically on adding and improving performance 
benchmarks for the assumptions system.
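For concreteness, here is a rough sketch of the kind of asv-style benchmark I 
have in mind for the assumptions system (the class and method names below are 
purely illustrative, not taken from the existing sympy_benchmarks suite):

```python
# Illustrative asv-style benchmark sketch for the assumptions system.
# Class and method names are hypothetical, not from sympy_benchmarks.
from sympy import Symbol, ask, Q


class TimeAssumptions:
    """Time common queries against the new-style assumptions system."""

    def setup(self):
        # Symbols carrying old-style assumptions that ask() can use
        self.x = Symbol('x', positive=True)
        self.y = Symbol('y', real=True)

    def time_ask_positive(self):
        # A direct query on a symbol with a known assumption
        ask(Q.positive(self.x))

    def time_ask_real_sum(self):
        # A query that requires combining assumptions on two symbols
        ask(Q.real(self.x + self.y))
```

airspeed velocity (asv) discovers `Time*` classes and `time_*` methods by 
naming convention and runs `setup` before each timed call, so benchmarks like 
these could track regressions in `ask` across commits.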
Before I start drafting the official proposal, I wanted to ask the community 
for some quick feedback:

1. I noticed that the sympy_benchmarks repository hasn't seen much recent 
activity (the last open issues date back a couple of years). Is modernizing the 
benchmarking infrastructure still considered a valuable and priority project 
for the core team this year?

2. Would this specific focus (benchmarking the assumptions system) be a solid 
foundation for a GSoC project?

If any mentors have suggestions or would be willing to help me refine this 
idea, I would really appreciate your guidance!

Thanks for your time,

Marco

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/sympy/35CC22AC-1CC1-498D-B095-336F2E8BED48%40gmail.com.
