Hi all
In my Solr 6.6 based code, I have the following line that gets the total
number of documents in a collection:
totalDocs = indexSearcher.getStatistics().get("numDocs");
where indexSearcher is an instance of "SolrIndexSearcher".
With Solr 7.2.1, 'getStatistics' is no longer available, and
Thanks, I should have mentioned that I’m doing this in a script URP.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Apr 6, 2018, at 3:06 PM, Steve Rowe wrote:
>
> Hi Walter,
>
> I’ve seen Erik Hatcher recommend using the
Thank you!
On Fri, Apr 6, 2018 at 10:34 PM, Chris Hostetter
wrote:
>
> : In my Solr 6.6 based code, I have the following line that get the total
> : number of documents in a collection:
> :
> : totalDocs=indexSearcher.getStatistics().get("numDocs"))
> ...
> :
Hi Walter,
I’ve seen Erik Hatcher recommend using the StatelessScriptUpdateProcessor for
this purpose, e.g. on slides 10-11 of
https://www.slideshare.net/erikhatcher/solr-indexing-and-analysis-tricks .
More info at https://wiki.apache.org/solr/ScriptUpdateProcessor and
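For reference, a minimal solrconfig.xml sketch of wiring up the StatelessScriptUpdateProcessor mentioned above (the chain name and the script file name are placeholders; the script file lives in the collection's conf/ directory):

```xml
<!-- Hypothetical chain name; register it via update.chain or as the default chain. -->
<updateRequestProcessorChain name="script">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <!-- Placeholder script name; any JVM scripting language works, JavaScript by default. -->
    <str name="script">update-script.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```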
Thanks.
I have moved this to the next stage:
1. Three fields are to be extracted from each raw file; the location of the raw
file is the fourth field, and all four fields become a document.
2. Only the three extracted fields will be indexed?
3. The search results should be re-formatted to include the three fields
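The steps above can be sketched roughly as follows. This is only an illustration of the document-building step; the field names and the extract function are hypothetical, not anything from Solr itself:

```python
from pathlib import Path


def build_doc(raw_path, extract):
    """Build one document from a raw file: three extracted fields plus the
    raw file's location as the fourth field (field names are made up)."""
    text = Path(raw_path).read_text()
    doc = extract(text)  # caller-supplied; returns the three extracted fields
    doc["raw_location"] = str(raw_path)  # fourth field: where the raw file lives
    return doc
```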
: In my Solr 6.6 based code, I have the following line that get the total
: number of documents in a collection:
:
: totalDocs=indexSearcher.getStatistics().get("numDocs"))
...
: With Solr 7.2.1, 'getStatistics' is no longer available, and it seems that
: it is replaced by
Is there an easy way to define an analyzer chain in schema.xml then run it in
an update request processor?
I want to run a chain ending in the minhash token filter, then take those
minhashes, convert them to hex, and put them in a string field. I’d like the
values stored.
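A toy illustration of the minhash-then-hex idea (this is not Solr's MinHashFilter; the hashing scheme here is invented purely for demonstration):

```python
import hashlib


def minhash_hex(tokens, num_hashes=4):
    """Toy minhash: for each of num_hashes seeded hash functions, keep the
    minimum hash over all tokens, then render each minimum as fixed-width hex
    (a form suitable for storing in a string field)."""
    sigs = []
    for seed in range(num_hashes):
        m = min(
            int.from_bytes(hashlib.sha1(f"{seed}:{t}".encode()).digest()[:8], "big")
            for t in tokens
        )
        sigs.append(f"{m:016x}")  # 64-bit minimum rendered as 16 hex chars
    return sigs
```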
It seems like this
Hi Mikhail et al,
I must say that this complexity question is still bugging me, and I wonder
if it is possible to get even partial answers in Big-O notation.
Say that we have N (for example 10^6) documents, each having 10 SKUs, and
each SKU in turn having 10 storages, as well as every product having
The patch has not been merged yet; it is available here:
https://github.com/apache/lucene-solr/pull/162
You can try applying the patch on the current master and see if it fixes the issue.
Please let us know if you have any questions.
Cheers,
Diego
From: solr-user@lucene.apache.org At: 04/05/18
I am using Solr for the following search need:
raw data: in FIX format (it's OK if you don't know what that is; treat it as
CSV with a special delimiter).
parsed data: parsed from the raw data into a set of JSON documents that all
share the same 100+ fields.
Example:
Raw data: the delimiter is \u0001:
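For anyone unfamiliar with FIX, a toy parser for the tag=value layout gives the flavor (FIX fields are SOH-delimited `tag=value` pairs; this sketch ignores repeating groups and checksum validation):

```python
SOH = "\u0001"  # FIX field delimiter (ASCII 0x01)


def parse_fix(msg):
    """Split a FIX message on SOH and return a {tag: value} dict."""
    return dict(
        field.split("=", 1)          # tag=value; split only on the first '='
        for field in msg.strip(SOH).split(SOH)
        if field                     # skip empty trailing fragments
    )
```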
As far as logging goes: with PKIAuthenticationPlugin,
RuleBasedAuthorizationPlugin, and HttpSolrCall set to TRACE, the following is
all that is seen in the log file of host1 for the above request:
2018-04-06 14:51:34.775 DEBUG (qtp329611835-8790) [ ]
o.a.s.s.HttpSolrCall
Lucene tends to avoid full scans where possible by leap-frogging on
skip lists. It will enumerate all *matching* docs in O(m) and rank every result
in O(log(page size)), i.e. O(m log p).
Earlier, I recall that BJQ enumerated all matching children even though most of
the time it's enough to find only one; potentially it's
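The leap-frog idea above can be sketched in Python, with binary search standing in for Lucene's skip lists (a simplification; real postings are streamed, not held in lists):

```python
from bisect import bisect_left


def advance(postings, i, target):
    """Skip forward from position i to the first doc id >= target
    (binary search plays the role of a skip list here)."""
    return bisect_left(postings, target, i)


def leapfrog_intersect(a, b):
    """Intersect two sorted posting lists by leap-frogging: each list jumps
    ahead to the other's current doc id instead of scanning every entry."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i, j = i + 1, j + 1
        elif a[i] < b[j]:
            i = advance(a, i, b[j])
        else:
            j = advance(b, j, a[i])
    return out
```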