[ https://issues.apache.org/jira/browse/OPENJPA-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564920#action_12564920 ]
Patrick Linskey commented on OPENJPA-407:
-----------------------------------------
> > Generally, the patch looks sound. I agree with Christiaan's concerns; we
> > should probably change the caches to be configurable data structures.
>
> I'm not convinced that a configurable data structure is necessary. We want
> the performance of OpenJPA to be excellent "out of the box". We made a
> similar decision when we decided to turn the Compilation Cache on by
> default. Maybe that's what you mean... Make it configurable, but we can
> have it turned on by default? If that's the case, then I'm okay with that
> approach.
Yep. This definitely seems like the sort of thing that should be on by default,
with a reasonable default hard-reference size and a soft-reference spillover.
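For readers following along: the structure described above (a bounded map of hard references that spills evicted entries into soft references, so the GC can reclaim them under memory pressure) might be sketched as follows. This is an illustrative sketch only; the class and field names (`SpilloverCache`, `hardSize`) are hypothetical and not OpenJPA API.

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a hard-ref cache with soft-ref spillover (names hypothetical,
// not OpenJPA API). The hard map is bounded; evicted entries are demoted
// to soft references rather than discarded outright.
public class SpilloverCache<K, V> {
    private final int hardSize;
    private final Map<K, SoftReference<V>> soft = new LinkedHashMap<>();
    private final Map<K, V> hard;

    public SpilloverCache(int hardSize) {
        this.hardSize = hardSize;
        // Access-ordered LinkedHashMap: the least-recently-used entry is
        // the eviction candidate once the hard map exceeds hardSize.
        this.hard = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > SpilloverCache.this.hardSize) {
                    // Spill to a soft reference: usable until the GC
                    // reclaims it under memory pressure.
                    soft.put(eldest.getKey(),
                            new SoftReference<>(eldest.getValue()));
                    return true;
                }
                return false;
            }
        };
    }

    public void put(K key, V value) {
        soft.remove(key);
        hard.put(key, value);
    }

    public V get(K key) {
        V v = hard.get(key);
        if (v != null)
            return v;
        SoftReference<V> ref = soft.remove(key);
        if (ref != null && (v = ref.get()) != null)
            hard.put(key, v); // promote back to the hard map on a hit
        return v;
    }
}
```

The point of the spillover is that a GC-pressured JVM sheds cached SQL gracefully instead of either growing without bound or losing hot entries.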
> We'll post a more complete patch in the near future.
Excellent... I'm looking forward to reading more.
> Cache SQL (or closer precursors to SQL) more aggressively
> ---------------------------------------------------------
>
> Key: OPENJPA-407
> URL: https://issues.apache.org/jira/browse/OPENJPA-407
> Project: OpenJPA
> Issue Type: Improvement
> Components: jdbc, kernel, query, sql
> Affects Versions: 0.9.0, 0.9.6, 0.9.7, 1.0.0
> Reporter: Patrick Linskey
> Fix For: 1.1.0
>
> Attachments: findBy.patch, OPENJPA-407.patch
>
>
> When data is not available in the data cache, OpenJPA dynamically creates SQL
> to look up the requested data. OpenJPA should more aggressively cache this
> SQL to accelerate pathways from a cache miss to the database.
> The generated SQL takes a number of factors into account, including the
> requested records, transaction status, currently-loaded data, and the current
> fetch configuration. Any caching would need to account for these factors as
> well.
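The caveat in the last paragraph of the issue description, that cached SQL must be keyed on every factor influencing generation, could be illustrated with a composite cache key along these lines. All names here (`SqlCacheKey`, its fields) are hypothetical, chosen only to mirror the factors the description lists; they are not OpenJPA API.

```java
import java.util.List;
import java.util.Objects;

// Hypothetical composite key (not OpenJPA API) showing that cached SQL
// must be keyed on every factor that shapes the generated statement,
// not on the candidate class alone.
public final class SqlCacheKey {
    private final Class<?> candidateType;      // the requested records
    private final boolean inTransaction;       // transaction status
    private final List<String> unloadedFields; // currently-loaded data
    private final int fetchDepth;              // fetch configuration

    public SqlCacheKey(Class<?> candidateType, boolean inTransaction,
            List<String> unloadedFields, int fetchDepth) {
        this.candidateType = candidateType;
        this.inTransaction = inTransaction;
        this.unloadedFields = unloadedFields;
        this.fetchDepth = fetchDepth;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o)
            return true;
        if (!(o instanceof SqlCacheKey))
            return false;
        SqlCacheKey k = (SqlCacheKey) o;
        return candidateType.equals(k.candidateType)
                && inTransaction == k.inTransaction
                && unloadedFields.equals(k.unloadedFields)
                && fetchDepth == k.fetchDepth;
    }

    @Override
    public int hashCode() {
        return Objects.hash(candidateType, inTransaction,
                unloadedFields, fetchDepth);
    }
}
```

Two lookups that differ in any one of these dimensions would then miss each other in the cache, which is exactly the behavior the issue calls for.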
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.