This recent blog post from Gunnar Morling provides an alternative approach 
better suited to pre-merge validation and CI systems; however, it is another 
abstraction which may miss drastic performance changes. The upside is 
reproducibility in cases where perf validation is otherwise flaky.

https://www.morling.dev/blog/towards-continuous-performance-regression-testing

I’ve not used the framework yet myself, but it may be useful in this case.
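If I'm reading the post right, the idea is to assert on proxy metrics captured 
via JDK Flight Recorder events (e.g. allocated bytes) rather than on wall-clock 
time, which is what makes the results stable enough for CI. A test in that 
style might look very roughly like the sketch below; the JfrUnit annotations 
and JfrEvents calls are copied from the post's examples rather than verified, 
and the serialization helper and allocation budget are placeholders:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.moditect.jfrunit.EnableEvent;
import org.moditect.jfrunit.JfrEventTest;
import org.moditect.jfrunit.JfrEvents;

// Rough sketch only: the JfrUnit annotations and JfrEvents API are taken from
// the examples in the blog post and have not been verified against a release.
@JfrEventTest
public class JsonWriteAllocationTest {

    public JfrEvents jfrEvents = new JfrEvents();

    @Test
    @EnableEvent("jdk.ObjectAllocationInNewTLAB")
    @EnableEvent("jdk.ObjectAllocationOutsideTLAB")
    public void writeShouldStayWithinAllocationBudget() throws Exception {
        serializeSampleDocument(); // hypothetical helper exercising the code under test

        // Wait until all JFR events emitted by the workload have been consumed.
        jfrEvents.awaitEvents();

        // Sum the allocation sizes reported by the JFR allocation events.
        long allocatedBytes = jfrEvents
                .filter(event -> event.getEventType().getName()
                        .startsWith("jdk.ObjectAllocation"))
                .mapToLong(event -> event.getLong("allocationSize"))
                .sum();

        // Placeholder budget; a real test would derive this from a baseline run.
        assertThat(allocatedBytes).isLessThan(10_000_000L);
    }

    private void serializeSampleDocument() {
        // Placeholder for the read/write code path being measured.
    }
}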

Best,
Carter Kozak

> On Jan 3, 2021, at 6:44 PM, Tatu Saloranta <tsalora...@gmail.com> wrote:
> 
> 
> This is something I have often thought would be really cool, but I never 
> figured out exactly how to go about it. Would love to see something in this 
> space.
> 
> Getting tests to run is probably not super difficult (any CI system could 
> trigger them), and runs could also be limited to specific branches/versions 
> for practical purposes.
> There would no doubt be some challenges in this part too; the possible number 
> of tests is actually huge (even for a single version), across formats, test 
> cases, read/write, afterburner/none, and string/byte source/target.
> And having dedicated CPU resources would be a must for stable results.
> 
> To me, the big challenges seem to be result processing and visualization: 
> how to group test runs and so on.
> Jenkins plug-ins tend to be pretty bad (just IMO) at displaying meaningful 
> breakdowns and trends; it is easy to create something to impress a project 
> manager, but harder to produce something that shows the trends that actually 
> matter.
> But even without trends, it'd be essential to be able to compare more than 
> one result set to see diffs between specific versions.
> 
> And of course, it would also be great not to require local resources but to 
> use cloud platforms, if (and only if) they could provide fully static CPU 
> resources (the tests fortunately do not use much I/O, network, or even memory).
> 
> -+ Tatu +-
> 
> 
> 
> 
>> On Sun, Jan 3, 2021 at 12:30 PM drewgs...@gmail.com 
>> <drewgsteph...@gmail.com> wrote:
>>> On Sunday, January 3, 2021 at 1:10:55 PM UTC-5 marshall wrote:
>>> SGTM. Arm64 will produce _different_ results than x64, but the point of 
>>> performance regression testing is simply to know whether things change 
>>> relative to yesterday’s run, so I think a Pi 4 is reasonable as long as it’s 
>>> in a case with a hefty heat sink so it doesn’t downclock when it gets hot.
>> 
>> Indeed, RPi4s really need cooling to maintain their highest clock speed. It 
>> would probably be good to check whether any throttling occurred during the 
>> test run.
>> 
>> -Drew