GitHub user slbotbm added a comment to the discussion: Benchmarks in non-Rust SDKs
> How much bench logic can realistically be shared vs needs per-SDK customization?

Currently, the iggy-bench binary runs as follows:

- The orchestrator (`BenchmarkRunner`) starts a benchmark, waits for the tasks spawned by the benchmark to finish, and compiles the final report.
- The benchmark implementation (`Benchmarkable` types) configures the workload and spawns actors.
- The actors do the actual work, take measurements, and send them to the orchestrator.

From this point of view, if we were to use FFI bindings to benchmark other SDKs, creating the actor code in other languages would be required at a minimum. Furthermore, the benchmark layer would have to be refactored so that it can spawn actors in other languages as well.

The above only applies to non-FFI SDKs (Node, Java, Go, C#).

GitHub link: https://github.com/apache/iggy/discussions/2731#discussioncomment-15839738

----

This is an automatically sent email for [email protected]. To unsubscribe, please send an email to: [email protected]
