I’m working on integrating this into my proposal right now.  Feedback would 
be greatly appreciated. Thanks!
Link: GSOC 2024 - Google Docs 
<https://docs.google.com/document/d/1T6suQad3WgNfjGS9AVD9mHe_0WvPZijVFuVOVvSwIRE/edit>
  

On Monday, April 1, 2024 at 4:37:37 AM UTC-5 [email protected] wrote:

> I've updated the ideas page with a link to an issue that discusses some 
> ways that benchmarks on GitHub Actions could be improved. 
>
> Aaron Meurer 
>
> On Mon, Apr 1, 2024 at 2:35 AM Sam Lubelsky <[email protected]> wrote:
>
>> Yeah, I see no good reason why the benchmark results show the master vs. 
>> previous release comparison. That information does not seem relevant to the 
>> PR, and I would bet it's causing people to ignore the benchmark even when it 
>> is actually saying something useful. 
>>
>> I think the master vs. previous release section should be moved to a 
>> separate program which is run every time there is a new release, because 
>> this information still seems useful for seeing the general performance 
>> trend and spotting any big regressions.
>>
>> It would be nice if this could be run automatically. Does this type of 
>> thing seem doable in GitHub Actions?
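>>
>> Something along these lines could probably be triggered from a workflow 
>> that runs on each release tag. Here is a rough Python sketch, assuming an 
>> asv setup like sympy_benchmarks; the tag names below are just placeholders:
>>
>>     # compare_release.py -- rough sketch, not a tested implementation.
>>     import subprocess
>>
>>     PREVIOUS = "sympy-1.12"  # placeholder: previous release tag
>>     CURRENT = "master"       # or a release candidate tag
>>
>>     # Benchmark exactly the two commits we care about.
>>     for ref in (PREVIOUS, CURRENT):
>>         subprocess.run(["asv", "run", f"{ref}^!"], check=True)
>>
>>     # Print a comparison table between the two runs.
>>     subprocess.run(["asv", "compare", PREVIOUS, CURRENT], check=True)
>>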
>> On Monday, April 1, 2024 at 3:11:18 AM UTC-5 [email protected] wrote:
>>
>>> I agree with this. The usability of the current benchmarking output 
>>> needs to be improved a lot. Ideally, it should work in a way that 
>>> people are actually alerted to real performance regressions, and not 
>>> bothered if there aren't any performance regressions. 
>>>
>>> Another issue is that the PR benchmarks comments also list the changes 
>>> in master since the previous release. This is almost always completely 
>>> irrelevant to the PR in question, so we should remove or demote this 
>>> information. 
>>>
>>> If the benchmarking system were robust enough, there would never be a 
>>> regression in master, because regressions in PRs would be disallowed, 
>>> the same way test failures in PRs are currently disallowed. 
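>>>
>>> One rough way such a gate could work: run asv's two-revision comparison 
>>> in CI and fail the job when it flags a slowdown. This is a minimal 
>>> sketch, not a tested implementation; the 1.1 factor and the parsing of 
>>> asv's output are assumptions: 
>>>
>>>     # fail_on_regression.py -- minimal sketch of a CI gate.
>>>     import subprocess
>>>     import sys
>>>
>>>     # `asv continuous` benchmarks the two given revisions and prints a
>>>     # comparison table; worsened benchmarks are prefixed with "+".
>>>     result = subprocess.run(
>>>         ["asv", "continuous", "--factor", "1.1",
>>>          "upstream/master", "HEAD"],
>>>         capture_output=True, text=True,
>>>     )
>>>     print(result.stdout)
>>>
>>>     slower = [line for line in result.stdout.splitlines()
>>>               if line.startswith("+")]
>>>     if slower:
>>>         print("Benchmark regressions found; failing the job.")
>>>         sys.exit(1)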
>>>
>>> Aaron Meurer 
>>>
>>> On Mon, Apr 1, 2024 at 1:13 AM Jason Moore <[email protected]> wrote: 
>>> > 
>>> > This is my opinion, not sure if it is shared, but I don't think anyone 
>>> uses the information that is displayed on the pull request. This isn't 
>>> because the information isn't accurate or informative, but because of how 
>>> and when it is presented. I haven't looked at all pull requests, of course, 
>>> but I don't recall one where someone noticed the slowdown and it led to a 
>>> change in the PR. It has probably happened, but it happens rarely. 
>>> > 
>>> > The current system shows two things: timing differences in the current 
>>> commit vs last release and current commit vs master. The current commit vs 
>>> last release is most helpful for making the new release, but can be 
>>> confusing for the PR because it contains slowdowns/speedups from more than 
>>> your own PR work. The current commit vs master should show the PR author 
>>> that they have made some good or bad change with respect to the benchmarks. That's 
>>> all we really need to tell them (besides which benchmarks are slower and by 
>>> how much). It does this, but it is easy to just not read it. 
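>>>
>>> In asv terms the two tables correspond to roughly the following (the 
>>> release tag name is just a placeholder; this is a sketch of the idea, not 
>>> literally what the GitHub Action runs): 
>>>
>>>     # Rough sketch of the two comparisons shown in the PR comment.
>>>     import subprocess
>>>
>>>     # current commit vs last release (tag name is a placeholder)
>>>     subprocess.run(["asv", "compare", "sympy-1.12", "HEAD"])
>>>     # current commit vs master
>>>     subprocess.run(["asv", "compare", "master", "HEAD"])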
>>> > 
>>> > The old way was that some of us monitored the asv-generated websites 
>>> and then opened issues about slowdowns and commented on the old PRs. This 
>>> wasn't automated, but it did lead to specific comments on PRs that PR 
>>> authors were then very aware of. 
>>> > 
>>> > Jason 
>>> > moorepants.info 
>>> > +01 530-601-9791 
>>> > 
>>> > 
>>> > On Mon, Apr 1, 2024 at 2:57 AM Sam Lubelsky <[email protected]> 
>>> wrote: 
>>> >> 
>>> >> Are there any specific problems with the current pull request 
>>> benchmarking system that this project should address? 
>>> >> On Sunday, March 31, 2024 at 1:41:58 PM UTC-5 [email protected] 
>>> wrote: 
>>> >>> 
>>> >>> Hi Sam, 
>>> >>> 
>>> >>> I think that idea could be a bit outdated. I'm not sure if the text 
>>> was updated for this year. If it was, then someone else can speak up about 
>>> it. 
>>> >>> 
>>> >>> I think that improving our sympy_benchmarks repository with more and 
>>> better benchmarks, and making the benchmarking system that we have set up 
>>> for each pull request to sympy more useful, is a better focus. I'm not sure 
>>> we can run the benchmarks on a dedicated machine unless we spend some sympy 
>>> funds to do that. 
>>> >>> 
>>> >>> We basically want to know if a pull request slows down sympy and 
>>> make sure the pull request authors are warned about this in a clear way 
>>> before merging. In the past it was helpful to see the historical speed of 
>>> various SymPy benchmarks (here is an example I used to maintain: 
>>> https://www.moorepants.info/misc/sympy-asv/) but that does require a 
>>> dedicated machine so that benchmarks are comparable over time. 
>>> >>> 
>>> >>> Another thing I thought would be useful in the past is to run 
>>> benchmarks as part of the release process (or just before) so we can see if 
>>> the upcoming release is slower than the prior release. 
>>> >>> 
>>> >>> Jason 
>>> >>> moorepants.info 
>>> >>> +01 530-601-9791 
>>> >>> 
>>> >>> 
>>> >>> On Sun, Mar 31, 2024 at 8:13 PM Sam Lubelsky <[email protected]> 
>>> wrote: 
>>> >>>> 
>>> >>>> Sorry if it is a bit intimidating that I asked so many questions. I 
>>> really just need the answer to the first one to write my proposal. I know I 
>>> am a little late to GSoC, but I've really enjoyed getting to know the SymPy 
>>> community a little bit this past week, and I am committed to putting 
>>> together a good project proposal. 
>>> >>>> Thanks, 
>>> >>>> Sam. 
>>> >>>> On Friday, March 29, 2024 at 4:37:55 PM UTC-5 Sam Lubelsky wrote: 
>>> >>>>> 
>>> >>>>> I put an introduction a few emails down, but to recap: my name is 
>>> Sam, I'm a college freshman, and I'm very interested in working on 
>>> improving SymPy's benchmarking services this summer through GSoC. 
>>> >>>>> 
>>> >>>>> While going through the project description I had a few questions: 
>>> >>>>> 
>>> >>>>> 1) "It also needs an automated system to run them" 
>>> >>>>> What exactly is meant by this? Right now, GitHub Actions already 
>>> seems to run the benchmarks automatically after each PR. Why is this not 
>>> an automated system? Does "automated system" mean something that runs 
>>> weekly/monthly on the whole repo, generates a benchmark report, and 
>>> sends it somewhere? 
>>> >>>>> 
>>> >>>>> 2) How would one go about hosting benchmarks on a remote, dedicated 
>>> machine? What's the general idea of how to do this in an open source 
>>> project? Is there money available to pay a cloud provider to host it? 
>>> Free hosting options? (Those don't seem reliable enough for benchmarking.) 
>>> >>>>> 
>>> >>>>> 3) SymEngine vs SymPy. I'm not familiar with SymEngine. 
>>> Approximately how similar are SymPy and SymEngine? Is making the project 
>>> also work with SymEngine more of a quick fix (≈1-2 weeks), or should I expect 
>>> it to take longer? 
>>> >>>>> 
>>> >>>>> 4) Current Benchmark Suite 
>>> >>>>> "We currently have a benchmarking suite and run the benchmarks on 
>>> GitHub Actions, but this is limited and is often buggy" 
>>> >>>>> 
>>> >>>>> What are the limitations of GitHub Actions that this project 
>>> should address? 
>>> >>>>> If we don't use GitHub Actions, is there another way to make the 
>>> benchmarks run after every PR, like we have now? 
>>> >>>>> 
>>> >>>>> 5) Where are the tests run now? 
>>> >>>>> The project description says "the results are run and 
>>> hosted Ad Hoc", which I assume means whatever computer is running all the 
>>> other PR tests. I just want to make sure this is correct. 
>>> >>>>> 
>>> >>>>> 