So, as per

https://github.com/FasterXML/jackson-core/issues/1117

I think it is time to change the default `RecyclerPool` implementation
(a mechanism added in 2.16) to something other than the legacy
implementation (which is based on a combination of `ThreadLocal` and
`SoftReference`).
So far so good, but which one?

Implementations that we have can be seen in
`com.fasterxml.jackson.core.util.JsonRecyclerPools`

and include

1. ThreadLocal-based pool (`threadLocalPool()`): current default, uses
`ThreadLocal` to hold on to a reference to `BufferRecycler` (via
`SoftReference`). Has multiple issues (but I won't get into those
here)
2. Non-recycling pool (`nonRecyclingPool()`): basically a "no-op"
implementation that does not recycle anything.
3. Concurrent Deque-based pool (shared via `sharedConcurrentDequePool()`
/ per-factory via `newConcurrentDequePool()`)
4. Lock-free pool (shared via `sharedLockFreePool()` / per-factory via
`newLockFreePool()`)
5. Bounded pool (shared, size 100, via `sharedBoundedPool()` /
per-factory via `newBoundedPool(size)`)

Of these, (1) and (2) are not relevant, leaving (3), (4) or (5).

Beyond the choice of implementation there is also the question of
whether to default to a single ("shared") global pool (which should be
more memory-efficient but could lead to contention) or to a per-factory
instance.
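
To make that distinction concrete, here is a minimal sketch of how
either variant gets wired up, assuming the `recyclerPool()` builder
method that was added along with the pool mechanism in 2.16:

    import com.fasterxml.jackson.core.JsonFactory;
    import com.fasterxml.jackson.core.util.JsonRecyclerPools;

    public class PoolWiring {
        public static void main(String[] args) {
            // Shared: all factories built this way use one global pool
            // instance; minimal memory, but all threads contend on it
            JsonFactory sharedF = JsonFactory.builder()
                    .recyclerPool(JsonRecyclerPools.sharedBoundedPool())
                    .build();

            // Per-factory: each factory gets its own pool instance,
            // scoping contention (and retained buffers) to that factory
            JsonFactory perFactoryF = JsonFactory.builder()
                    .recyclerPool(JsonRecyclerPools.newBoundedPool(100))
                    .build();
        }
    }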

(3) and (4) are also unbounded, in the sense that they can grow to
sizes based on maximum concurrent buffer usage (so typically some
multiple of the number of threads); whereas (5) is bounded to a
specific size. The sketch that follows illustrates the difference.
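
To show the shape of that difference (a conceptual sketch only, not
the actual `RecyclerPool` implementations):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ConcurrentLinkedDeque;

    // Deque/lock-free style: everything released is kept, so the pool
    // grows to the peak number of concurrently-used buffers
    class UnboundedPoolSketch<T> {
        private final ConcurrentLinkedDeque<T> buffers =
                new ConcurrentLinkedDeque<>();

        public T acquire() {
            return buffers.pollFirst(); // null -> caller allocates new
        }

        public void release(T buffer) {
            buffers.addFirst(buffer); // always retained
        }
    }

    // Bounded style: releases beyond the cap are simply dropped, so
    // retained memory has a fixed upper bound
    class BoundedPoolSketch<T> {
        private final ArrayBlockingQueue<T> buffers;

        BoundedPoolSketch(int maxSize) {
            buffers = new ArrayBlockingQueue<>(maxSize);
        }

        public T acquire() {
            return buffers.poll(); // null -> caller allocates new
        }

        public void release(T buffer) {
            buffers.offer(buffer); // if full: dropped, left for GC
        }
    }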

My initial thinking is to use a per-factory Bounded pool, to avoid
unbounded buffer retention; but then again a per-factory lock-free
pool (4) might be a good option.

So... I would like to hear opinions and suggestions on the choice
here. The vast majority of users will simply use whatever default we
choose, so the choice matters quite a bit.

-+ Tatu +-
