merlimat commented on PR #3264: URL: https://github.com/apache/bookkeeper/pull/3264#issuecomment-1147682083
@Shoothzj Just did a quick test. There are ~4 bytes allocated per recorded sample with the new DataSketches. It would be good to understand why that is the case and whether there's any way to configure DataSketches to avoid it.

```
DataSketches 0.8.3

Benchmark                                                        (statsProvider)   Mode  Cnt   Score    Error   Units
StatsLoggerBenchmark.recordLatency                                    Prometheus  thrpt    3  15.203 ±  2.787  ops/us
StatsLoggerBenchmark.recordLatency:·gc.alloc.rate                     Prometheus  thrpt    3   0.023 ±  0.368  MB/sec
StatsLoggerBenchmark.recordLatency:·gc.alloc.rate.norm                Prometheus  thrpt    3   0.002 ±  0.027    B/op
StatsLoggerBenchmark.recordLatency:·gc.churn.G1_Eden_Space            Prometheus  thrpt    3   1.603 ± 50.660  MB/sec
StatsLoggerBenchmark.recordLatency:·gc.churn.G1_Eden_Space.norm       Prometheus  thrpt    3   0.116 ±  3.650    B/op
StatsLoggerBenchmark.recordLatency:·gc.count                          Prometheus  thrpt    3   1.000           counts
StatsLoggerBenchmark.recordLatency:·gc.time                           Prometheus  thrpt    3   2.000               ms
StatsLoggerBenchmark.recordLatency:·stack                             Prometheus  thrpt         NaN              ---

DataSketches 3.2.0

Benchmark                                                        (statsProvider)   Mode  Cnt   Score    Error   Units
StatsLoggerBenchmark.recordLatency                                    Prometheus  thrpt    3  15.965 ±  9.438  ops/us
StatsLoggerBenchmark.recordLatency:·gc.alloc.rate                     Prometheus  thrpt    3  63.314 ± 35.780  MB/sec
StatsLoggerBenchmark.recordLatency:·gc.alloc.rate.norm                Prometheus  thrpt    3   4.377 ±  0.023    B/op
StatsLoggerBenchmark.recordLatency:·gc.churn.G1_Eden_Space            Prometheus  thrpt    3  57.793 ±  4.866  MB/sec
StatsLoggerBenchmark.recordLatency:·gc.churn.G1_Eden_Space.norm       Prometheus  thrpt    3   3.998 ±  2.456    B/op
StatsLoggerBenchmark.recordLatency:·gc.count                          Prometheus  thrpt    3   3.000           counts
StatsLoggerBenchmark.recordLatency:·gc.time                           Prometheus  thrpt    3   4.000               ms
StatsLoggerBenchmark.recordLatency:·stack                             Prometheus  thrpt         NaN              ---
```
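For anyone who wants to check whether the extra allocation comes from the DataSketches update path itself rather than from the stats logger wrapper, here is a minimal JMH sketch that just drives an update-only quantiles sketch. This is only an illustration, not the `StatsLoggerBenchmark` above; it assumes the logger's hot path boils down to `org.apache.datasketches.quantiles.UpdateDoublesSketch#update`, and the class/method names are made up for the example:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.apache.datasketches.quantiles.DoublesSketch;
import org.apache.datasketches.quantiles.UpdateDoublesSketch;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class DataSketchesUpdateBench {

    // Update-only quantiles sketch; each recorded latency sample maps to one update() call.
    private UpdateDoublesSketch sketch;

    @Setup
    public void setup() {
        // Default k (sketch accuracy parameter).
        sketch = DoublesSketch.builder().build();
    }

    @Benchmark
    public void update() {
        sketch.update(ThreadLocalRandom.current().nextDouble());
    }
}
```

Running it with the GC profiler enabled (`java -jar benchmarks.jar -prof gc`) produces the same `gc.alloc.rate.norm` column as in the tables above; if the ~4 B/op shows up here as well, the allocation is inside DataSketches itself and not in the BookKeeper stats logger code.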
