Agreed that visualization is the hard part and that the existing Jenkins options aren’t great.
I’ll start by getting the benchmarks project set up to run automatically, plus a system (probably Jenkins) to do that running (probably just nightly on master & 2.12; I still haven’t run the whole thing to see how long it takes). If I get all that sorted, we’ll have some ongoing results to figure out how to make graphs from. A gnuplot graph of total runtime over time should be easy enough to generate, and we could make drill-downs for each test suite or other simple dimensions that would be useful. After that we can figure out how to present the many other dimensions, because you’re definitely right that we’ll want those to be able to really pin down where things have changed.

-Drew

On January 3, 2021 at 6:44:37 PM, Tatu Saloranta (tsalora...@gmail.com) wrote:

This is something I have quite often thought about as something that’d be really cool, but never figured out exactly how to go about it. Would love to see something in this space.

Getting tests to run is probably not super difficult (any CI system could trigger it), and it could also be limited to specific branches/versions for practical purposes. There would no doubt be some challenges in this part too; the possible number of tests is actually huge (even for a single version): across formats, possible test cases, read/write, afterburner/none, string/byte source/target. And having dedicated CPU resources would be a must for stable results.

To me, the big challenges seem to be result processing and visualization: how to group test runs and so on. Jenkins plug-ins tend to be pretty bad (just IMO) at displaying meaningful breakdowns and trends; it is easy to create something to impress a project manager, but less so to produce something that shows important actual trends. But even without trends, it’d be essential to be able to compare more than one result set to see diffs between certain versions.
And of course, it would also be great not to require local resources but to use cloud platforms, iff they could provide fully static CPU resources (tests fortunately do not use lots of I/O or network, or even memory).

-+ Tatu +-

On Sun, Jan 3, 2021 at 12:30 PM drewgs...@gmail.com <drewgsteph...@gmail.com> wrote:

On Sunday, January 3, 2021 at 1:10:55 PM UTC-5 marshall wrote:

SGTM. Arm64 will produce _different_ results than x64, but the point for performance regressions is simply to know if things change relative to yesterday’s test, so I think a Pi 4 is reasonable as long as it’s in a case with a hefty heat sink so it doesn’t downclock when it gets hot.

Indeed, RPi4s really need cooling to maintain their highest clock speed. It would probably be good to check whether any throttling occurred during the test run.

-Drew

--
You received this message because you are subscribed to the Google Groups "jackson-dev" group. To unsubscribe from this group and stop receiving emails from it, send an email to jackson-dev+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/jackson-dev/d0285396-5e9f-4788-8c43-e06095d4bbfbn%40googlegroups.com.
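On the throttling check mentioned above: a Raspberry Pi reports its throttle state via `vcgencmd get_throttled`, and a post-run check could decode it along these lines. The bit meanings follow the Raspberry Pi firmware documentation; the helper itself is a hypothetical sketch, not part of any benchmark harness:

```python
# Decode the value reported by `vcgencmd get_throttled` on a Raspberry Pi.
# Low bits report current conditions; bits 16-19 report conditions that
# have occurred at any point since boot.
THROTTLE_BITS = {
    0: "under-voltage now",
    1: "ARM frequency capped now",
    2: "throttled now",
    3: "soft temperature limit now",
    16: "under-voltage occurred since boot",
    17: "ARM frequency cap occurred since boot",
    18: "throttling occurred since boot",
    19: "soft temperature limit occurred since boot",
}

def decode_throttled(output):
    """Parse a line like 'throttled=0x50000' into the matching conditions."""
    value = int(output.strip().split("=")[1], 16)
    return [desc for bit, desc in THROTTLE_BITS.items() if value & (1 << bit)]
```

Capturing the value before and after a benchmark run (e.g. with `subprocess.run(["vcgencmd", "get_throttled"], capture_output=True)`) and discarding results whenever a new "occurred since boot" bit appears would keep throttled runs out of the trend data.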