On Sat, Oct 16, 2021 at 9:42 AM Jörg Hoh <[email protected]>
wrote:

> Hi Carsten,
>
> On Sat, 16 Oct 2021 at 09:41, Carsten Ziegeler <
> [email protected]> wrote:
>
> > I don't think that the RR is the right place for this.
> >
> > If the use case is that during a request the exact same query is
> > executed more than once, then caching the result of the query in Sling
> > is probably not resulting in the wanted gain. I think it is much better
> > to avoid executing the query multiple times in the first place -
> >
>
> That's what I want to achieve. But where should I store the result of the
> query so I don't need to execute it multiple times? As I said, I don't
> have access to a store which allows me to keep this result within a
> well-defined scope (identical to the lifetime of a resource resolver). And
> I don't want to roll such logic on my own, because it's definitely not
> easy to get right.
>
>
Is it safe to assume that there are no concurrency concerns? If there are
no concerns, a field in your adapter object (that you get from
rr.adaptTo()) can store the result of the query. You can do a null check,
and only perform the query if it is null. That is pretty standard logic in
a lot of non-concurrent applications:

QueryResult queryResult;
...

QueryResult getResult() {
  if (queryResult == null) {
    queryResult = // ... some expensive query
  }

  return queryResult;
}

SomeOtherType processForResource(Resource r) {
  QueryResult result = getResult();
  // do your logic with the query result & resource,
  // possibly storing intermediate results in other fields
  // (maybe protected with similar null checks)

  return // ... the result of your computation
}

For the lifetime & scope problem, you get the benefits of the adapter cache
as part of the adaptTo logic.
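To make that concrete, here is a rough sketch of the wiring. QueryCache is a
made-up name for an adapter class like the one above, and com.example is a
placeholder package:

import org.apache.sling.api.adapter.AdapterFactory;
import org.apache.sling.api.resource.ResourceResolver;
import org.osgi.service.component.annotations.Component;

// Hypothetical factory that creates the caching adapter for a resolver.
@Component(
    service = AdapterFactory.class,
    property = {
        AdapterFactory.ADAPTABLE_CLASSES
            + "=org.apache.sling.api.resource.ResourceResolver",
        AdapterFactory.ADAPTER_CLASSES + "=com.example.QueryCache"
    })
public class QueryCacheAdapterFactory implements AdapterFactory {

  @Override
  public <T> T getAdapter(Object adaptable, Class<T> type) {
    if (adaptable instanceof ResourceResolver && type == QueryCache.class) {
      // one instance per resolver; resolvers based on SlingAdaptable cache
      // the adapter, so repeated adaptTo() calls return the same instance
      return type.cast(new QueryCache((ResourceResolver) adaptable));
    }
    return null;
  }
}

With that registered, rr.adaptTo(QueryCache.class) hands you the per-resolver
instance (or null if the factory isn't there), and it goes away together with
the resolver.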

If concurrency is a concern, then you would need to do some sort of
synchronization to ensure only one thread is performing the query. Given
that this is per-request, it sounds like you will not need synchronization.
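If it ever did become a concern, the minimal fix would be to synchronize the
lazy initialization, e.g. (runExpensiveQuery is a hypothetical stand-in for
your query):

private QueryResult queryResult; // guarded by "this"

synchronized QueryResult getResult() {
  if (queryResult == null) {
    queryResult = runExpensiveQuery(); // runs at most once per instance
  }
  return queryResult;
}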


> > executing the query is one part but then processing the result is
> > another. And you want to avoid this as well.
> >
>
> In every invocation of my method a different resource is passed, and I need
> to evaluate the result of that query for this resource. To be exact, I
> transform the result of the query into other domain objects, and to
> optimize the performance of this evaluation, I might even store these
> domain objects in a different structure than just a list.
>
>
If concurrency is not a problem, then you can do the same type of "caching"
in the adapter object.
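Fleshing out the hypothetical QueryCache from above (DomainObject, the from()
transformation and the by-path index are assumptions about your domain;
findResources() stands in for however you actually run the query):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

public class QueryCache {

  private final ResourceResolver resolver;
  private Map<String, DomainObject> byPath; // lazily built index

  public QueryCache(ResourceResolver resolver) {
    this.resolver = resolver;
  }

  Map<String, DomainObject> getIndex() {
    if (byPath == null) {
      byPath = new HashMap<>();
      // run the expensive query once, then index the transformed domain
      // objects by path so each per-resource lookup is a cheap map access
      Iterator<Resource> hits =
          resolver.findResources("...", "JCR-SQL2"); // "..." = your query
      while (hits.hasNext()) {
        Resource hit = hits.next();
        byPath.put(hit.getPath(), DomainObject.from(hit));
      }
    }
    return byPath;
  }

  DomainObject forResource(Resource r) {
    return getIndex().get(r.getPath());
  }
}

That way the query and the transformation each happen once per resolver, and
every subsequent resource only pays for a map lookup.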


> >
> > In the past 12 years or so we tried various POCs with caching in the RR,
> > always under the (wrong) assumption that Oak/Jackrabbit is the
> > bottleneck. So we added a RR caching all get resources, all list
> > resources, all queries etc. And it turned out that the gain was close to
> > zero. Each operation in Oak was pretty fast, but if you execute it
> > hundreds of times during a request, even a fast call becomes the
> > bottleneck. So again, usually the problem is in the upper layers, above
> > Sling's resource api.
> >
>
> I don't want to create a transparent caching layer on the RR level to store
> objects which have been requested from the repository. I know that this was
> tried in the past and did not provide the benefits we intended it to have.
> I just want to have a simple store (map) on the RR, where I can store all
> types of objects which share the same lifecycle as the resource resolver.
>
> This is handled explicitly, and the developer is responsible for making
> sure that the lifecycles of the resource resolver and the objects stored
> in this temporary storage are indeed compatible. If the developer decides
> to store resources in there, and these resources might change over the
> lifetime of this (potentially long-living) resource resolver, so be it.
> There should not be any magic, it's just a simple map.
>
> Jörg
>
> --
> Cheers,
> Jörg Hoh,
>
> https://cqdump.joerghoh.de
> Twitter: @joerghoh
>

-Paul
