Hi,
I’m facing an issue with the expand component when working alongside the elevate
component.
For some requests (not all), the expand component is throwing an NPE.
Below is the stack trace. Any idea why the ArrayTimSorter object is null, and any
way to avoid it?
Solr log stack trace:
On 2/20/2018 7:54 PM, Ryan Yacyshyn wrote:
I'd like to get a stream of search results using the solrj.io package but
running into a small issue.
Exception in thread "main" java.lang.NoSuchMethodError:
Hello all,
I'd like to get a stream of search results using the solrj.io package but
running into a small issue. It seems to have something to do with the
HttpClientUtil. I'm testing on SolrCloud 7.1.0, using the
sample_techproducts_configs configs, and indexed the manufacturers.xml file.
I'm
On 2/20/2018 3:22 PM, Ritesh Chaman wrote:
> May I know which filesystems are supported by Solr, e.g. ADLS, WASB, S3,
> etc.? Thanks.
Solr supports whatever your operating system supports. It will expect
file locking to be fully functional, so things like NFS don't
always work. Local
Ritesh
The filesystems you mention are used by Spark so it can stream huge quantities
of data (corrections please).
By comparison, Solr uses a more 'reasonably' sized filesystem, but needs enough
memory that all the index data can be resident. Regular Linux ext3 or ext4
is fine.
If you
Hi team
May I know which filesystems are supported by Solr, e.g. ADLS, WASB, S3,
etc.? Thanks.
Ritesh
Say there is a high load and I'd like to bring up a new machine and let it
replicate the index; if 100 GB or more can be shaved off, that will have a
significant impact on how quickly the new searcher is ready and added to
the cluster. The impact on search speed is likely minimal.
we are investigating
The rollup streaming expression rolls up aggregations on a stream that has
been sorted by the group by fields. This is basically a MapReduce reduce
operation and can work with extremely high cardinality (basically
unlimited). The rollup function is designed to rollup data produced by the
/export
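A minimal sketch of that pattern, assuming a hypothetical collection `logs` with fields `day_s` and `count_l` (all names here are illustrative, not from the thread): the inner `search` pulls a stream sorted on the group-by field from the /export handler, and `rollup` reduces it.

```
rollup(
  search(logs,
         q="*:*",
         fl="day_s,count_l",
         sort="day_s asc",
         qt="/export"),
  over="day_s",
  sum(count_l),
  count(*))
```

Because the stream arrives sorted by `day_s`, each group can be emitted as soon as the field value changes, which is why cardinality is effectively unlimited.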
Hi,
We have the below field type defined in our schema.xml to support German
compound word search. This works fine. But even when double quotes are present
in the search term, it gets split. Is there a way not to split the term when
double quotes are present in the query with this field
On 2/20/2018 4:44 AM, Alfonso Muñoz-Pomer Fuentes wrote:
We have a query that we can resolve using either facet or search with rollup.
In the Stream Source Reference section of Solr’s Reference Guide
(https://lucene.apache.org/solr/guide/7_1/stream-source-reference.html#facet)
it says “To
FYI
Thanks
Kalahasthi Satyanarayana
From: Kalahasthi Satyanarayana
Sent: Tuesday, February 20, 2018 11:57 AM
To: 'solr-user@lucene.apache.org'
Cc: Deepak Udapudi; Venkata MR; v...@delta.org; Nareshkumar P; Soma Das
Really depends on what you consider too large, and why the size is a big
issue, since most replication will go at about 100 MB/second give or take,
and replicating a 300 GB index is only an hour or two. What I do for this
purpose is store my text in a separate index altogether, and call on that
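A sketch of that "separate index for text" layout, with illustrative field names (not from the thread): the main collection indexes the large text without storing it, while a small side collection stores it and is fetched by id only when highlighting is needed.

```xml
<!-- main collection schema.xml: searchable, but not stored (keeps the index small) -->
<field name="body_txt" type="text_general" indexed="true" stored="false"/>

<!-- side collection schema.xml: same content, stored for retrieval/highlighting -->
<field name="body_txt" type="text_general" indexed="true" stored="true"/>
```

The trade-off is an extra request per result page to the side collection, in exchange for a much lighter index to replicate.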
Hello,
We have a use case of a very large index (master-slave; for unrelated
reasons the search cannot work in cloud mode). One of the fields is a
very large text field, stored mostly for highlighting. To cut down the index size
(for purposes of replication/scaling) I thought I could try to save
Dear Apache Enthusiast,
(You’re receiving this message because you’re subscribed to a user@ or
dev@ list of one or more Apache Software Foundation projects.)
We’re pleased to announce the upcoming ApacheCon [1] in Montréal,
September 24-27. This event is all about you — the Apache project
Hi,
We have a query that we can resolve using either facet or search with rollup.
In the Stream Source Reference section of Solr’s Reference Guide
(https://lucene.apache.org/solr/guide/7_1/stream-source-reference.html#facet)
it says “To support high cardinality aggregations see the rollup
Hi,
For those who have a Sitecore website app with multiple sites (but only one Sitecore
code base): have you separated the index for each site? How were you able to
manage it? Also, do you have archiving, since analytics data keeps growing?
Thanks
Best Regards,
Jeck
It was not clear at the beginning, but if I understood correctly you could:
*Index Time Analysis*
Use whatever charFilter you need, the keyword tokenizer[1], and then the token
filters you like (such as the lowercase filter, synonyms, etc.)
*Query Time Analysis*
You can use a tokenizer you like (that
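The index-time side of that advice might look like this in schema.xml (the fieldType name and the choice of query-time tokenizer are illustrative; the factory classes are standard Solr ones):

```xml
<fieldType name="exact_lower" class="solr.TextField">
  <!-- index time: keep the whole value as one token, lowercased -->
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- query time: tokenize as usual, then apply matching filters -->
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```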
Were you able to get a solution to this issue ?
Aaron Daubman wrote
> On a Solr server running 4.10.2 with three cores, two return the expected
> info from /solr/admin/cores?wt=json but the third is missing userData and
> lastModified.
>
> The first (artists) and third (tracks) cores from the
On 2/20/2018 1:18 AM, LOPEZ-CORTES Mariano-ext wrote:
We return a facet list of values in "motifPresence" field (person status).
Status:
[ ] status1
[x] status2
[x] status3
The user then selects 1 or multiple status (It's this step that we called "facet
On 2/19/2018 3:33 PM, Roy Lim wrote:
6 x Solr (3 primary shards, 3 secondary)
3 x ZK
The client is indexing over 16 million documents using 8 threads. Auto-soft
commit is 3 minutes, auto-commit is 10 minutes.
I would probably reduce the autoCommit time to 1 minute, as long as
openSearcher is
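In solrconfig.xml that suggestion would look roughly like the following (the times are the ones discussed in the thread; `openSearcher=false` keeps the hard commit from opening a new searcher):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flush to disk every minute, no new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: make documents visible every 3 minutes -->
  <autoSoftCommit>
    <maxTime>180000</maxTime>
  </autoSoftCommit>
</updateHandler>
```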
Our query looks like this:
...facet=true&facet.field=motifPresence
We return a facet list of values in "motifPresence" field (person status).
Status:
[ ] status1
[x] status2
[x] status3
The user then selects 1 or multiple status (It's this step that we called
"facet
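For this kind of checkbox UI, a common pattern is tag/exclude multi-select faceting, where the filter is tagged and excluded from the facet count so unselected statuses keep their counts (the tag name `mp` is illustrative):

```
q=*:*
&fq={!tag=mp}motifPresence:(status2 OR status3)
&facet=true
&facet.field={!ex=mp}motifPresence
```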