mariofusco commented on PR #5621: URL: https://github.com/apache/incubator-kie-drools/pull/5621#issuecomment-1860946943
@gitgabrio @tarilabs @baldimir I'm sorry, but I did some very preliminary checks and it seems that this commit worsens performance by 2x or more, at least in the CompiledButInterpreted use case. In fact, as I reported [here](https://github.com/apache/incubator-kie-benchmarks/pull/279#pullrequestreview-1787082802), the original benchmark was flawed. I created another benchmark, simplistic but more correct, like the following:

```java
package org.drools.benchmarks.dmn.feel;

import org.kie.dmn.feel.FEEL;
import org.kie.dmn.feel.lang.CompiledExpression;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.util.Map;
import java.util.concurrent.TimeUnit;

@Warmup(iterations = 5)
@Measurement(iterations = 10)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(1)
@State(Scope.Thread)
public class FEELInfixOpBenchmark {

    private FEEL feelInterpreted;
    private CompiledExpression compiledButInterpretedExpression;
    private Map<String, Object> map;

    @Param({"String", "int"})
    private String argsType;

    @Setup
    public void setup() {
        feelInterpreted = FEEL.newInstance();
        compiledButInterpretedExpression = compileInterpretedExpression("a + b");
        map = argsType.equals("String") ?
                Map.of("a", "1", "b", "2") :
                Map.of("a", 1, "b", 2);
    }

    private CompiledExpression compileInterpretedExpression(String expression) {
        return feelInterpreted.compile(expression, feelInterpreted.newCompilerContext());
    }

    @Benchmark
    public Object eval() {
        return feelInterpreted.evaluate(compiledButInterpretedExpression, map);
    }
}
```

If I run this benchmark against 8.44.0.Final I get the following results:

```
Benchmark                  (argsType)   Mode  Cnt     Score    Error   Units
FEELInfixOpBenchmark.eval      String  thrpt   10  3866.743 ± 30.992  ops/ms
FEELInfixOpBenchmark.eval         int  thrpt   10  3367.492 ± 11.529  ops/ms
```

while running the same benchmark on the latest snapshot, which includes this change, I get:

```
Benchmark                  (argsType)   Mode  Cnt     Score    Error   Units
FEELInfixOpBenchmark.eval      String  thrpt   10  1676.167 ± 10.623  ops/ms
FEELInfixOpBenchmark.eval         int  thrpt   10  1748.573 ±  9.847  ops/ms
```

This definitely deserves more investigation and a proper profiling session (I'm willing to help with this), but either we find a way to restore the former performance levels or I strongly suggest reverting this commit before the next stable release is deployed.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
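As a footnote to the figures in the comment above: the "2x if not more" claim can be sanity-checked with a few lines of plain Java. The `SlowdownRatio` class below is purely illustrative (it is not part of the benchmark or of Drools); the throughput numbers are copied from the two JMH result tables.

```java
// Quick check of the slowdown implied by the two JMH runs reported above.
public class SlowdownRatio {

    // before/after throughput (ops/ms) -> slowdown factor
    static double ratio(double before, double after) {
        return before / after;
    }

    public static void main(String[] args) {
        // Figures copied from the result tables in the comment.
        System.out.printf("String args: %.2fx slower%n", ratio(3866.743, 1676.167)); // 2.31x
        System.out.printf("int args:    %.2fx slower%n", ratio(3367.492, 1748.573)); // 1.93x
    }
}
```

So the String-argument case is roughly 2.3x slower and the int case roughly 1.9x slower, consistent with the "2x if not more" estimate.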
