2016-12-20 10:28 GMT+01:00 Andres Freund <and...@anarazel.de>:

> On 2016-12-20 01:14:10 -0800, Andres Freund wrote:
> > On 2016-12-20 09:59:43 +0100, Pavel Stehule wrote:
> > > In this case some benchmarks can be very important (and
> > > interesting). I am not sure if faster function execution has a
> > > significant benefit on a Volcano-like executor.
> >
> > It's fairly easy to see function calls as significant overhead. In
> > fact, I moved things *away* from a pure Volcano-style executor, and
> > the benefits weren't huge, primarily due to expression evaluation
> > overhead (of which function call overhead is one part). After JITing
> > of expressions, it becomes even more noticeable, because the
> > overhead of the expression evaluation is reduced.
For me, Volcano style means row-by-row processing; using JIT or not has
no significant impact on that. An interesting change could be
block-level processing.

> As an example, here's a JITed TPCH Q1 profile:
>
> +   15.48%  postgres  postgres          [.] slot_deform_tuple
> +    8.42%  postgres  perf-27760.map    [.] evalexpr90
> +    5.98%  postgres  postgres          [.] float8_accum
> +    4.63%  postgres  postgres          [.] slot_getattr
> +    3.69%  postgres  postgres          [.] bpchareq
> +    3.39%  postgres  postgres          [.] heap_getnext
> +    3.22%  postgres  postgres          [.] float8pl
> +    2.86%  postgres  postgres          [.] TupleHashTableMatch.isra.7
> +    2.77%  postgres  postgres          [.] hashbpchar
> +    2.77%  postgres  postgres          [.] float8mul
> +    2.73%  postgres  postgres          [.] ExecAgg
> +    2.40%  postgres  postgres          [.] hash_any
> +    2.34%  postgres  postgres          [.] MemoryContextReset
> +    1.98%  postgres  postgres          [.] pg_detoast_datum_packed

Our bottleneck is the row format and row-related processing.

> evalexpr90 is the expression that does the aggregate transition
> function. float8_accum, bpchareq, float8pl, float8mul, ... are all
> function calls, and a good percentage of the overhead in evalexpr90
> is pushing arguments onto fcinfo->arg[nulls].
>
> Greetings,
>
> Andres Freund