On Thu, 9 Feb 2023, Fabian Keil via curl-library wrote:

In the mean time I'll keep an eye on the curl commits to see if anyone beats me to it ...

Here's some brainstorming.

We could start out by thinking about how to make it work. We need to regularly run a specific set of tasks, get timing numbers and store those numbers somewhere, probably together with an identifier saying which host ran the test (so that we can use several hosts and still compare the right sets). We also need to store the exact curl git version, and probably the curl -V output or something similar.
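As a sketch of what one stored record might contain (all field names here are placeholders, not a decided schema):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class BenchResult:
    """One stored measurement; every field name is hypothetical."""
    host_id: str       # identifies the machine that ran the test
    git_commit: str    # exact curl git version tested
    curl_version: str  # output of curl -V (or similar)
    test_name: str     # which task was run
    timings: list      # the N raw timing samples, in seconds

def to_json(result: BenchResult) -> str:
    """Serialize a result for storage or transfer."""
    return json.dumps(asdict(result), sort_keys=True)

record = BenchResult(
    host_id="bench-host-1",
    git_commit="0123abc",
    curl_version="curl 7.88.0 ...",
    test_name="time-to-first-byte",
    timings=[0.123, 0.119, 0.125],
)
print(to_json(record))
```

The point is just that each record ties the timing samples to both a host and an exact source version, so later comparisons only happen within matching sets.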

As for what to run, we could keep it simple and start with the command line from https://github.com/curl/curl/issues/10389, which times how fast curl gets the first byte back from a remote site. Using an external remote site seems fragile, and using localhost makes for a sadder test.

Another test could be to download a huge file from localhost.

We could start with something basic to get the thing going, then add more later on.

Running the tests, I presume we can make them run N times, and if all results are within a certain margin (M%) we consider them stable and pick... the median result?
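That stability check could look something like this sketch, where N (the sample count), M (the margin) and the choice of median are all just parameters to tune:

```python
import statistics

def stable_result(samples, margin_pct):
    """If every sample falls within margin_pct of the median of the
    samples, consider the run stable and return the median; otherwise
    return None to signal the run should be retried or discarded."""
    med = statistics.median(samples)
    if all(abs(s - med) <= med * margin_pct / 100 for s in samples):
        return med
    return None

# Five timing samples (seconds), 5% margin:
print(stable_result([0.120, 0.118, 0.122, 0.119, 0.121], 5))  # stable, median
print(stable_result([0.120, 0.118, 0.200, 0.119, 0.121], 5))  # unstable, None
```

One outlier (the 0.200 sample) is enough to reject the whole run, which is the conservative behavior you want when machine noise can skew a benchmark.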

How and where do we store the numbers? We need to be able to get them back easily to analyze trends, make graphs and understand deltas between specific performance diffs. We probably need to run a database somewhere for this.
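A minimal sketch of that storage, using SQLite as a stand-in for whatever database ends up being used (table and column names invented for illustration):

```python
import sqlite3

def open_db(path=":memory:"):
    """Open the results database, creating the (hypothetical) schema."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS results (
        host_id TEXT, git_commit TEXT, test_name TEXT,
        median_secs REAL, run_at TEXT)""")
    return con

def store(con, host_id, git_commit, test_name, median_secs, run_at):
    """Record one stable benchmark result."""
    con.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
                (host_id, git_commit, test_name, median_secs, run_at))
    con.commit()

def trend(con, host_id, test_name):
    """Fetch the series for one host/test pair, oldest first -- the
    shape needed for graphs and for spotting deltas between commits."""
    return con.execute(
        "SELECT git_commit, median_secs FROM results "
        "WHERE host_id = ? AND test_name = ? ORDER BY run_at",
        (host_id, test_name)).fetchall()

con = open_db()
store(con, "bench-host-1", "0123abc", "ttfb", 0.120, "2023-02-09T10:00")
store(con, "bench-host-1", "4567def", "ttfb", 0.115, "2023-02-10T10:00")
print(trend(con, "bench-host-1", "ttfb"))
```

Keying every query on (host, test) keeps per-machine series separate, so trends are never polluted by hardware differences.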

--

 / daniel.haxx.se
 | Commercial curl support up to 24x7 is available!
 | Private help, bug fixes, support, ports, new features
 | https://curl.se/support.html
--
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html