[
https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15684442#comment-15684442
]
Michael Sun commented on SOLR-9764:
-----------------------------------
Thanks [~dsmiley] for reviewing. Here are some of my thoughts.
bq. Why both clone() & cloneMe() methods?
What I'm trying to achieve is to make clone() public (it is protected by
default). It also needs to be public at the DocSet level, since DocSet is the
main interface used. Unfortunately, Java does not seem to allow this
visibility change in an interface definition (it can be changed in a class),
so the current implementation is a small workaround for that limitation.
There is some discussion online about other workarounds. One alternative is
to override clone() in DocSetBase and cast DocSet to DocSetBase wherever
clone() is used, but I thought the current implementation was the easiest to
understand. That said, it's still a workaround, and any suggestions are
welcome.
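In sketch form, the pattern looks roughly like this (simplified; the actual
patch has more detail):
{code:java}
public interface DocSet {
  DocSet cloneMe();  // public copy method available at the interface level
}

abstract class DocSetBase implements DocSet, Cloneable {
  @Override
  public DocSetBase clone() {  // widen protected Object.clone() to public
    try {
      return (DocSetBase) super.clone();
    } catch (CloneNotSupportedException e) {
      throw new AssertionError(e);  // unreachable: this class is Cloneable
    }
  }

  @Override
  public DocSet cloneMe() {
    return clone();  // interface-level entry point delegates to clone()
  }
}
{code}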
bq. What is the issue with intDocSet?
IntDocSet actually works fine. The issue is DocSetBase.equals(), which is
marked as being for test purposes only. It sometimes fails to recognize that
two equal DocSets are equal, so some work is needed in DocSetBase.equals() to
get this test to pass. I will add more details in the patch comment in the
meantime.
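To illustrate one way such a mismatch can arise (this is a guess at a
plausible cause, not necessarily the exact one here): two DocSets holding the
same doc ids can materialize backing bitsets of different lengths, and a
naive comparison then reports them unequal. Normalizing both to a common
length before comparing is one possible fix:
{code:java}
import org.apache.lucene.util.FixedBitSet;

// Hypothetical helper: copy both bitsets into the same capacity so that
// trailing zero bits cannot make logically equal sets compare unequal.
static boolean sameDocs(FixedBitSet a, FixedBitSet b) {
  int numBits = Math.max(a.length(), b.length());
  FixedBitSet na = new FixedBitSet(numBits);
  na.or(a);
  FixedBitSet nb = new FixedBitSet(numBits);
  nb.or(b);
  return na.equals(nb);
}
{code}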
> Design a memory efficient DocSet if a query returns all docs
> ------------------------------------------------------------
>
> Key: SOLR-9764
> URL: https://issues.apache.org/jira/browse/SOLR-9764
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Michael Sun
> Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch
>
>
> In some use cases, particularly time series use cases that use a
> collection alias and partition data into multiple small collections by
> timestamp, a filter query can match all documents in a collection.
> Currently a BitDocSet is used, which contains a large array of long
> integers with every bit set to 1. After querying, the resulting DocSet
> saved in the filter cache is large and becomes one of the main memory
> consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for the last 14 days
> of data, each collection holding one day. A filter query for the last week
> of data would result in at least six DocSets in the filter cache, each
> matching all documents in one of six collections.
> This issue is to design a new DocSet that is memory efficient for such a
> use case. The new DocSet removes the large array, reducing memory usage and
> GC pressure without losing the advantages of a large filter cache.
> In particular, for use cases with time series data, a collection alias,
> and data partitioned into multiple small collections by timestamp, the gain
> can be large.
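> As an illustration of the idea (class and method names here are
> hypothetical, not the actual patch): when a filter matches every document,
> only the document count needs to be stored:
> {code:java}
> // Hypothetical match-all DocSet: no backing bit array at all.
> public class MatchAllDocSet {
>   private final int numDocs;  // total number of documents matched
>
>   public MatchAllDocSet(int numDocs) {
>     this.numDocs = numDocs;
>   }
>
>   public boolean exists(int docId) {
>     return docId >= 0 && docId < numDocs;
>   }
>
>   public int size() {
>     return numDocs;
>   }
>
>   public long memSize() {
>     return 4;  // a single int, vs. roughly maxDoc/8 bytes for a BitDocSet
>   }
> }
> {code}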
> For further optimization, it may be helpful to design a DocSet with
> run-length encoding, sketched below. Thanks [~mmokhtar] for the suggestion.
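> Such a run-length-encoded DocSet might look roughly like this (again a
> hypothetical sketch, not patch code):
> {code:java}
> // Hypothetical RLE doc set: consecutive doc ids are stored as
> // (start, length) runs instead of one bit per document.
> public class RunLengthDocSet {
>   private final int[] starts;   // run start doc ids, ascending
>   private final int[] lengths;  // run lengths, parallel to starts
>
>   public RunLengthDocSet(int[] starts, int[] lengths) {
>     this.starts = starts;
>     this.lengths = lengths;
>   }
>
>   public boolean exists(int docId) {
>     int i = java.util.Arrays.binarySearch(starts, docId);
>     if (i >= 0) return true;  // docId is exactly a run start
>     i = -i - 2;               // index of the run preceding docId
>     return i >= 0 && docId < starts[i] + lengths[i];
>   }
> }
> {code}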