michaelkoepf commented on issue #1332: URL: https://github.com/apache/fluss/issues/1332#issuecomment-3083367451
Adding to @polyzos's comment:

> An end-to-end benchmark may involve many components, such as a workload generator

In the past, I developed a workload generator for a research project that generates synthetic data with configurable data distributions per field. It may or may not make sense to build upon it, depending on what we want to benchmark. However, it is currently not open source and needs some clean-up.

> case runner, metric collector, report generator, and more.

Depending on how sophisticated the benchmark suite should be, there is [Theodolite](https://www.theodolite.rocks/). Everything runs on Kubernetes out of the box, and the framework takes care of:

- setting up and tearing down the entire benchmarking environment (incl. the system under test),
- letting you plug in your own load generators,
- collecting metrics with a built-in collector that supports Prometheus and lets you define PromQL queries (in particular, it should be able to collect Fluss metrics via Prometheus out of the box),
- and exporting all defined metrics to files at the end of the benchmarking run; these files can subsequently be used to generate reports.

To run Theodolite, there are some lightweight Kubernetes distributions out there, e.g. [k3s](https://k3s.io/), which can also run as a [Testcontainers module](https://testcontainers.com/modules/k3s/).

_Disclosure: Theodolite was developed by a co-worker of mine, and I have used it in the past. That's why I am familiar with the framework._
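To make the k3s-in-Testcontainers idea concrete, here is a minimal sketch of spinning up a throwaway k3s cluster from a JUnit-style Java test using the `org.testcontainers:k3s` module. The image tag and the idea of handing the kubeconfig to a Kubernetes client are illustrative assumptions; a real benchmark harness would go on to deploy the system under test and the Theodolite resources into this cluster.

```java
import org.testcontainers.k3s.K3sContainer;
import org.testcontainers.utility.DockerImageName;

public class K3sBenchmarkEnvSketch {
    public static void main(String[] args) {
        // The k3s image tag is an example; pin whatever Kubernetes
        // version the benchmark should target.
        try (K3sContainer k3s =
                 new K3sContainer(DockerImageName.parse("rancher/k3s:v1.27.4-k3s1"))) {
            // Starts a single-node k3s cluster in a Docker container.
            k3s.start();

            // The returned kubeconfig can be passed to any Kubernetes
            // client (e.g. fabric8) to deploy the system under test and
            // the benchmark/execution resources, and is discarded when
            // the container shuts down.
            String kubeConfig = k3s.getKubeConfigYaml();
            System.out.println(kubeConfig != null && !kubeConfig.isEmpty());
        }
    }
}
```

This requires a local Docker daemon; the try-with-resources block guarantees the cluster is torn down even if the benchmark fails, which matches Theodolite's goal of fully automated setup and teardown.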
