If you don't mind, my question is: what are you trying to do in the first place?
And please don't describe it in terms of the technical approach you're already
using (or at least trying to use), but rather in basic/business terms.
-Stefan
On Jun 8, 2017 3:03 AM, "arik" wrote:
> Thanks Erick, indeed your hunch i
What does "doesn't work" mean? No documents get indexed? Are you doing a
full import or a delta import? If the latter, the timestamp not having
changed is probably causing the doc to be skipped. What does the Solr log
say?
Best,
Erick
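
If this is a delta import, DIH decides what to re-fetch from the deltaQuery, which typically compares a timestamp column against ${dataimporter.last_index_time}; a row whose timestamp hasn't changed is never selected, so the document is skipped. A minimal sketch of that pattern (the table and column names here are assumptions, not taken from the thread):

```
<entity name="doc"
        query="SELECT id, title FROM docs"
        deltaQuery="SELECT id FROM docs
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, title FROM docs
                          WHERE id = '${dih.delta.id}'"/>
```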
On Wed, Jun 7, 2017 at 9:46 AM, Miller, William K - Norman, OK
bq: it is indeed returning documents with only either one of the two query terms
Uhm, this should not be true. What's the output of adding debug=query?
And are you totally sure the above is true and you're just not seeing
the other term in the return? Or that you have a synonyms file that is
someh
If you require that the facets show both the folded and non-folded
versions, then you have no choice except to index both somehow.
But I think you're saying that you expect "néd" and "ned" to be
counted in one bucket. Then, indeed, you have to somehow pre-apply the
relevant filters. You can do tha
Thanks Erick, indeed your hunch is correct: it's the analyzing filters that
facet.prefix seems to bypass, and getting rid of my
ASCIIFoldingFilterFactory and MappingCharFilterFactory makes it work OK.
The problem is I need those filters... otherwise how should I create facets
which match against bo
I'm sorry, there was a mistake.
I previously wrote:
> However, these are returning only those documents which have both the terms
> 'tv promotion' in them (there are a few). It's not returning any
> document which have only 'tv' or only 'promotion' in them.
That's not true at all; it is indeed r
Thanks.
Both of these are working in my case:
name:"tv promotion" --> name:"tv promotion"
name:tv AND name:promotion --> name:tv AND name:promotion
(Although I'm assuming the first might not have worked if my document had
been, say, 'promotion tv' or 'tv xyz promotion')
However, these are return
Sorry, I meant debug=query, where you would get output like this:
"debug": {
"rawquerystring": "name:tv promotion",
"querystring": "name:tv promotion",
"parsedquery": "+name:tv +text:promotion",
On Wed, Jun 7, 2017 at 4:41 PM, David Hastings wrote:
> well, short answer, use the analyzer to see what's happening.
Well, short answer: use the analyzer to see what's happening.
Long answer:
there's a difference between
name:tv promotion --> name:tv default_field:promotion
name:"tv promotion" --> name:"tv promotion"
name:tv AND name:promotion --> name:tv AND name:promotion
since your default field most lik
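
The difference David describes comes from how the Lucene query parser scopes fields: a field prefix applies only to the term immediately following it, so the second bare term falls back to the default field. A rough simulation in Python (not the real parser, just the scoping rule for simple term queries; the default field name "text" is an assumption):

```python
def scope_terms(query: str, default_field: str = "text") -> list[str]:
    """Assign each whitespace-separated term to its explicit field,
    or to the default field when no 'field:' prefix is present."""
    clauses = []
    for token in query.split():
        if ":" in token:
            clauses.append(token)          # explicitly fielded term
        else:
            clauses.append(f"{default_field}:{token}")
    return clauses

print(scope_terms("name:tv promotion"))
# -> ['name:tv', 'text:promotion']
```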
Hello,
I have what I would think to be a fairly simple problem to solve, however
I'm not sure how it's done in Solr and couldn't find an answer on Google.
Say I have two documents, "TV" and "TV promotion". If the search query is
"TV promotion", then, obviously, I would like the document "TV prom
Hello, I am new to this mailing list and I am having a problem with
re-indexing. I will run an index on an xml file using the DataImportHandler
and it will index the file. Then I delete the index using the
&lt;delete&gt;&lt;query&gt;*:*&lt;/query&gt;&lt;/delete&gt;, &lt;commit/&gt;, and &lt;optimize/&gt; commands. Then
I attempt to re-index the same file with the same configura
tlogs live on Solr, not ZooKeeper. ZooKeeper is not involved in individual
Solr operations (indexing, querying, and the like); it just keeps the
state of the nodes.
While recovery is happening, updates are still forwarded to the node
that is recovering. They're written to the local tlog then replayed
I'll bet your field definition has one of the folding filters in it.
I'm pretty sure that the facet.prefix parameter doesn't send the value
through your analysis chain, it uses it "as is". So my guess (without
looking at the code) is that the facet.prefix value franç is not _in_
your index, rather
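
Since facet.prefix is taken "as is", one workaround is to fold the prefix on the client so it matches the folded terms actually stored in the index. A sketch in Python (this unicodedata-based fold only approximates what ASCIIFoldingFilterFactory does, and assumes the index side used ASCII folding):

```python
import unicodedata

def ascii_fold(term: str) -> str:
    """Approximate ASCII folding: NFKD-decompose, then strip
    combining marks, e.g. 'franç' -> 'franc'."""
    decomposed = unicodedata.normalize("NFKD", term)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# Fold the value before putting it in facet.prefix.
print(ascii_fold("franç"))    # -> franc
print(ascii_fold("Nédélec"))  # -> Nedelec
```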
bq: All setting are set to default in solrconfig with a change of auto
commit off .
Did you take your 4x solrconfig and just use it? I'd strongly recommend you
take the 6x configs and use those as a base, moving any customizations
over. Secondly, be sure you specify classic schema rather than dat
Thanks Alexandre! Gmail sent it out as HTML formatted (and looked fine on
Gmail), but the mailing list bot must've messed it up trying to convert it
to text only.
On Wed, Jun 7, 2017 at 8:27 PM, Alexandre Rafalovitch
wrote:
> That did not format well. At least for me.
>
> http://www.so
6 June 2017, Apache Solr 6.6.0 available
Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search and analytics, rich document
parsing, geospatial search, extensive RE
Hi Sir/Madam,
Please help me get out of this problem.
Thanks
That did not format well. At least for me.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 7 June 2017 at 10:40, Ishan Chattopadhyaya wrote:
> 6 June 2017, Apache Solr 6.6.0 availableSolr is the popular, blazing
> fast, open source NoSQL search platform fro
I'm finding that the facet.prefix query parameter does not seem to support
international characters, regardless of url encoding. All the other
parameters work fine, but that one seems unique in this respect.
For example with this data:
François Nédélec
*These queries produce relevant facets:*
Actually, I found the answer by opening the script file!
On Mon, Jun 5, 2017 at 3:59 PM, Nawab Zada Asad Iqbal
wrote:
> Hi solr community
>
> What are the steps for taking solr to production if Solr installation
> script does not support my environment. Is there a list of all the steps
> done
6 June 2017, Apache Solr 6.6.0 availableSolr is the popular, blazing
fast, open source NoSQL search platform from the Apache Lucene
project. Its major features include powerful full-text search, hit
highlighting, faceted search and analytics, rich document parsing,
geospatial search, extensive REST
We recently had a deployment where we had a ton of errors when calling Solr
and the collections API.
We saw nodes going into recovery mode and generally things were hosed.
Restarting solr didn't help, but restarting zookeeper did.
In our environment Zookeeper and Solr are on google cloud servers, o
Also use solr-user mailing list for general issues / queries / questions
and please subscribe and repost this to solr-user@lucene.apache.org
Refer http://lucene.apache.org/solr/community.html
Thanks,
Susheel
On Wed, Jun 7, 2017 at 9:29 AM, Susheel Kumar wrote:
> Please provide more detail on t
Is 50K the batch size you are using to ingest into Solr? If so, it may be
too high; you may want to start with a batch size of 100-1000, depending
on your document size, and gradually increase it until performance starts
to degrade.
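
Client-side, trying a smaller batch size is just a matter of chunking the documents before sending each batch. A minimal chunking sketch in Python (the batch size of 500 is an arbitrary starting point, not a recommendation from the thread):

```python
from typing import Iterator

def batches(docs: list, size: int = 500) -> Iterator[list]:
    """Yield successive fixed-size chunks of the document list;
    each chunk would be sent to Solr as one update request."""
    for start in range(0, len(docs), size):
        yield docs[start:start + size]

docs = [{"id": str(i)} for i in range(1200)]
sizes = [len(b) for b in batches(docs)]
print(sizes)  # -> [500, 500, 200]
```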
On Wed, Jun 7, 2017 at 5:51 AM, Isart Montane
Thanks for checking Shawn.
So a rolling ZK restart is bad, and ZK nodes with different configs are bad.
Guess this could still work if:
* All ZK config changes are done by stopping ALL ZK nodes
* All config changes are done in a controlled, manual way so DC1 doesn't
come up by surprise with old config
PS: I
Hi,
The cluster is running on EC2 using 5x r3.xlarge instances and disks are
1TB gp2 EBS.
I will try to get the logs that Susheel requested but it's not an easy task.
When indexing there's very few IO.
Solr is started with the following flags:
```
/usr/lib/jvm/java-8-oracle/bin/java
-server
On Tue, 2017-06-06 at 10:51 +0200, Isart Montane wrote:
> We are using SolrCloud with 5 nodes, 2 collections, 2 shards each.
> The problem we are seeing is a huge drop on writes when the number of
> replicas increase.
>
> When we index (using DIH and batches) a collection with no replicas,
> we ar