Hi Rahul,

This list, [email protected], is for Lucene/Solr development (related to its internals); I think your question belongs on [email protected].
I've definitely seen users consume search results and put them into a Lucene RAMDirectory. I've seen it for federated search (i.e., across multiple search providers). Your use case seems a bit different, though: it seems you would rather not pay the indexing cost at document acquisition time, and instead pay the CPU cost at search time. Of course, you then can't search/filter the documents in the first place to even return them in the search results, which seems like a non-starter, but perhaps your use case allows for this in some way. Also be careful that your first query doesn't return massive results; it could take a long time to index them all at search time. That sounds like another non-starter to me.

On Tue, Jan 23, 2018 at 11:52 PM Rahul Chhiber <[email protected]> wrote:

> Hi All,
>
> For our business requirement, once our Solr client (Java) gets the results
> of a search query from the Solr server, we need to further search across
> and also within the content of the returned documents. To accomplish this,
> I am attempting to create an in-memory Lucene index (*RAMDirectory*) on the
> client side, convert the *SolrDocument* objects into smaller Lucene
> *Document* objects, add them to the index, and then search within it.
>
> Has something like this been attempted yet? And does it sound like a
> workable idea?
>
> P.S. - The reason for this approach is that we need to search the data at
> a certain fine granularity but don't want to index it at such high
> granularity, for indexing performance reasons; i.e., we need to keep the
> total number of documents small.
>
> Appreciate any help.
>
> Regards,
> Rahul Chhiber

--
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
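[Editor's note: the client-side re-indexing approach discussed in this thread could be sketched roughly as below. This is a minimal sketch, not code from either poster; it assumes Lucene and SolrJ jars on the classpath, and the field names "id" and "content" are made up for illustration. Note that RAMDirectory has since been deprecated in Lucene in favor of ByteBuffersDirectory.]

```java
import java.io.IOException;
import java.util.List;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;
import org.apache.solr.common.SolrDocument;

public class ClientSideReindex {

    /**
     * Re-indexes a page of Solr results into an in-memory Lucene index and
     * runs a finer-grained query against it. Pays the indexing CPU cost at
     * search time, as discussed in the thread above.
     */
    public static TopDocs searchWithin(List<SolrDocument> results, String term)
            throws IOException {
        RAMDirectory dir = new RAMDirectory();  // deprecated later; ByteBuffersDirectory replaces it

        // Convert each SolrDocument into a (smaller) Lucene Document and index it.
        try (IndexWriter writer =
                new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (SolrDocument solrDoc : results) {
                Document doc = new Document();
                // "id" and "content" are illustrative field names only.
                doc.add(new StringField("id",
                        String.valueOf(solrDoc.getFieldValue("id")), Field.Store.YES));
                doc.add(new TextField("content",
                        String.valueOf(solrDoc.getFieldValue("content")), Field.Store.NO));
                writer.addDocument(doc);
            }
        }

        // Search within the freshly built in-memory index.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            return searcher.search(new TermQuery(new Term("content", term)), 10);
        }
    }
}
```

As the reply notes, the cost of this sketch grows linearly with the size of the first query's result set, so it is only plausible when that set is kept small.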
