Hi all

I have recently been writing performance tests, and each time I reach a
milestone I come across new challenges.

At first it was capturing the baselines and then pinning the tests to the
new performance numbers.
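To make the "pinning" idea concrete, here is a minimal sketch of asserting against a recorded baseline with a tolerance; the function name, baseline number, and 10% tolerance are all illustrative assumptions, not anyone's real setup:

```python
# Illustrative sketch: pin a test to a recorded baseline time.
# baseline_ms and tolerance are made-up example values.

def assert_within_baseline(measured_ms, baseline_ms, tolerance=0.10):
    """Fail if the measured time exceeds the baseline by more than `tolerance`."""
    limit = baseline_ms * (1 + tolerance)
    if measured_ms > limit:
        raise AssertionError(
            f"regression: {measured_ms:.1f}ms > {limit:.1f}ms "
            f"(baseline {baseline_ms:.1f}ms + {tolerance:.0%})"
        )
```

The tolerance is exactly where the flakiness question below bites: set it too tight and noise fails the build, too loose and real regressions slip through.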

But then the question arises: how do we check whether our tests are telling
us the right thing if the underlying system, the implementation, or both
have an element of flakiness?

Do you run them a few times and take the average, or do you run them a few
times and count the test as passing only if it succeeds at least a set
number of times, failing it otherwise?
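Roughly, the two strategies I have in mind look like this (run counts, limits, and the quorum threshold are illustrative, and the median is often used instead of the mean as it is more robust to outliers):

```python
import statistics

def run_n(bench, n=5):
    """Run the benchmark callable n times and collect the timings (ms)."""
    return [bench() for _ in range(n)]

def passes_by_average(timings, limit_ms):
    """Strategy 1: pass if the mean of all runs is under the limit."""
    return statistics.mean(timings) <= limit_ms

def passes_by_quorum(timings, limit_ms, min_passes=3):
    """Strategy 2: pass if at least `min_passes` individual runs are under the limit."""
    return sum(t <= limit_ms for t in timings) >= min_passes
```

Note the trade-off: averaging lets one outlier drag the whole result, while the quorum approach tolerates a few slow runs but can mask a consistent small regression.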

I'm sure many of you have come across this situation when you have
optimised a system and want to regression-proof it: you want the tests to
tell you when the underlying implementation has genuinely regressed due to
some change.

It's not cool if the performance tests randomly fail on CI/CD or a local
machine.

I just want to know how everyone else does it, and what you think of the
above.

Regards
Mani
-- 
@theNeomatrix369  |  Blogs: https://medium.com/@neomatrix369
| @adoptopenjdk @graalvm @graal @truffleruby  |  Github:
https://github.com/neomatrix369  |  Slideshare:
https://slideshare.net/neomatrix369 | LinkedIn:
https://uk.linkedin.com/in/mani-sarkar

Don't chase success, rather aim for "Excellence", and success will come
chasing after you!
