Since v1.12 we have the macro `Kernel.then/2`, which expects an arity-1
function and calls it with the given value.
This makes code which used to be written as follows:
```elixir
def update(params, socket) do
  socket =
    socket
    |> assign(:myvar, params["myvar"])

  {:noreply, socket}
end
```
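For reference, a minimal sketch of what `then/2` does (the values and functions here are purely illustrative):

```elixir
# then/2 simply calls the given one-argument function with the piped-in
# value, which lets a non-pipeline-friendly call sit inside a pipeline:
result =
  10
  |> Kernel.*(2)
  |> then(fn x -> {:ok, x} end)

# result is {:ok, 20}
```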
I've been running my tests on Elixir v1.13.1 built for OTP24 with OTP
24.1.2.
When decompiling the resulting BEAM bytecode, the anonymous functions
are still visible.
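One way to do such decompilation from an IEx session is Erlang's `:beam_disasm` module (a sketch; `Enum` stands in here for whatever module is under inspection):

```elixir
# Disassemble a loaded module's .beam file; the resulting structure
# contains the BEAM assembly, where compiled anonymous functions appear.
beam_path = :code.which(Enum)
disasm = :beam_disasm.file(beam_path)
:beam_file = elem(disasm, 0)
```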
I will run some benchmarks to see how the resulting performance compares.
Maybe the JIT does something which is not visible in the decompiled
bytecode.
The optimization may happen in the loader. Use `erts_debug:df(Mod, Fun,
Arity)` and see that.
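For anyone following along, `erts_debug:df` is an undocumented debug helper that writes the loader's view of a module to a `<module>.dis` file in the current directory (a sketch; as noted later in the thread, it has no effect on a JIT-enabled emulator):

```elixir
# On an interpreter (non-JIT) emulator, this writes lists.dis with the
# loaded BEAM instructions for the :lists module:
:erts_debug.df(:lists)
```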
On Mon, Jan 3, 2022 at 5:03 PM Wiebe-Marten Wijnja wrote:
> I've been running my tests on Elixir v1.13.1 built for OTP24 with OTP
> 24.1.2.
> When decompiling the resulting BEAM bytecode, the anonymous functions
> are still visible.
I have run some benchmarks (comparing OTP23 with JIT-enabled OTP24).
Full results here: https://github.com/Qqwy/elixir-test-benchmrking_then/
It compares, in a situation where no tail recursion optimization is
possible, `Kernel.then/2` vs. writing the same code manually vs. using
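The shape of such a micro-benchmark can be sketched with `:timer.tc` (purely illustrative; the real suite lives in the linked repository and the workloads here are placeholders):

```elixir
# Compare then/2 against an immediately-invoked anonymous function.
run = fn body ->
  {micros, _result} =
    :timer.tc(fn -> Enum.each(1..100_000, fn _ -> body.() end) end)

  micros
end

manual_us = run.(fn -> (fn x -> x + 1 end).(41) end)
then_us = run.(fn -> 41 |> then(fn x -> x + 1 end) end)

IO.puts("manual: #{manual_us}µs  then/2: #{then_us}µs")
```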
Ah, `df` has no effect on a JIT system, I forgot about that. Are the
memory measurements guaranteed to have a consistent effect from the GC
across benchmarks?
On Mon, Jan 3, 2022 at 20:06 Wiebe-Marten Wijnja wrote:
> I have run some benchmarks (comparing OTP23 with JIT-enabled OTP24).
> Full results
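One way to probe that consistency question, sketched below: run each snippet in a fresh process and read its GC counters afterwards (the function and variable names are illustrative, not from the benchmark repository):

```elixir
# A freshly spawned process starts with a clean heap, so its GC counters
# after running the snippet are comparable across measurements.
minor_gcs = fn fun ->
  parent = self()

  spawn(fn ->
    fun.()
    {:garbage_collection, info} = Process.info(self(), :garbage_collection)
    send(parent, {:minor_gcs, Keyword.fetch!(info, :minor_gcs)})
  end)

  receive do
    {:minor_gcs, n} -> n
  end
end

IO.inspect(minor_gcs.(fn -> Enum.map(1..10_000, &(&1 * 2)) end))
```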
Yes, across benchmark runs the memory measurements are the same.
On 03-01-2022 20:17, José Valim wrote:
> Ah, `df` has no effect on a JIT system, I forgot about that. Are the
> memory measurements guaranteed to have a consistent effect from the
> GC across benchmarks?
>
> On Mon, Jan 3, 2022 at 20:06
No worries, thanks a lot for your guidance in this matter! ^_^
I will try to come up with some other, more 'real-world'-like examples
to double-check whether the benchmark's results hold only for quick
snippets or across the board.
Do you happen to know if there is any way to inspect the JITed code?
Sorry for the short replies, I was on my phone. :)
What I mean is: are the measurements across examples guaranteed to involve
the same number of garbage collector calls (or no calls at all)? I am
worried that, for quick snippets, the memory measurements are being
influenced by other factors. But
Unfortunately I don't know if there is a way to see the JITed code. But
given that regular profiling tools like `perf` now work with the BEAM,
maybe it is also possible to use similar tools to see the JITed code?
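Since the question came up: on Linux with OTP 24+, the emulator can emit JIT symbols for `perf` via the `+JPperf true` flag (a sketch; the tooling is Linux-specific and the `-eval` body is an arbitrary placeholder workload):

```shell
# Record a profile of an Erlang run with JIT symbol support enabled,
# then inspect the recorded samples:
perf record -- erl +JPperf true -noshell -eval 'lists:seq(1, 1000000), init:stop().'
perf report
```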
In any case, I tracked down the relevant PR:
https://github.com/erlang/otp/pull/4545 - none of the