Hi Srinivasa,
Thanks for your reply.
I think I need to investigate more.
Regards,
Prasann.
-----Original Message-----
From: Srinivasa Meenavali [mailto:smeenav...@professionalaccess.com]
Sent: Thursday, August 11, 2016 6:21 PM
To: solr-user@lucene.apache.org
Subject: RE: using variables in
Thanks!
To answer your questions, while I digest the rest of that information...
I'm using the hon-lucene-synonyms.5.0.4.jar from here:
https://github.com/healthonnet/hon-lucene-synonyms
The config looks like this - and IIRC, is simply a copy of the
recommended config on the site mentioned
: First let me say that this is very possibly the "x - y problem" so let me
: state up front what my ultimate need is -- then I'll ask about the thing I
: imagine might help... which, of course, is heavily biased in the direction
: of my experience coding Java and writing SQL...
Thank you so
You have a stemming filter in your analysis chain. Go to the analysis
tab, select the 'text' field, and put "Roche" into both boxes. Click
analyse. I bet you will see Roch, not Roche, because of your
stemming filter shown below.
That's what Ahmet shrewdly identified above.
Upayavira
On Thu,
Hello,
I am trying to set up a local Solr core so that I can perform spatial
searches on it. I am using version 5.2.1. I have updated my schema.xml file
to include the location-rpt fieldType:
And I have defined my field to use this type:
I also added the jts-1.4.0.jar file to
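The stripped schema snippets were presumably along the lines of the reference-guide example for the RPT spatial type; the type and field names here are illustrative, not copied from the sender's schema:

```xml
<!-- RPT spatial type; the JTS context factory is only needed for polygon support -->
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
           distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>

<!-- hypothetical field using the type -->
<field name="geo" type="location_rpt" indexed="true" stored="true" multiValued="true"/>
```

Note that on Solr 5.x the spatial4j classes still live under the com.spatial4j.core package.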
Hi Ahmet,
Many thanks for your reply. I had a look at the URL you pointed out but,
honestly, I have to admit that I did not fully understand you.
Let's be a bit more concrete. Here is the schema snippet for the
corresponding field:
...
First let me say that this is very possibly the "x - y problem" so let me
state up front what my ultimate need is -- then I'll ask about the thing I
imagine might help... which, of course, is heavily biased in the direction
of my experience coding Java and writing SQL...
I have a piece of a
Hi Alexandre,
You can use the CLUSTERSTATUS Collections API (
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api18)
to get a list of live nodes.
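A request sketch, assuming a default local install:

```
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json
```

The live nodes appear under cluster → live_nodes in the JSON response.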
-Anshum
On Thu, Aug 11, 2016 at 10:16 AM Alexandre Drouin <
alexandre.dro...@orckestra.com> wrote:
> Hi,
>
> What
This isn’t really a question, although some validation would be nice. It’s more
of a warning.
Tldr is that the insert order of documents in my collection appears to have had
a huge effect on my query speed.
I have a very large (sharded) SolrCloud 5.4 index. One aspect of this index is
a
Actually try this:
select a from b where _query_='a:b'
*This produces the query:*
(_query_:"a:b")
which should run.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Aug 11, 2016 at 1:04 PM, Joel Bernstein wrote:
> There are no test cases for this but you can try
Hi,
What would be the best/easiest way to create a collection (only one shard)
using the Collection API and have a replica created on all live nodes?
Using the 'create collection' API, I can use the 'replicationFactor' parameter
and specify the number of replicas I want for my collection. So
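For reference, a single-shard create call has this shape (collection and config names are illustrative):

```
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&replicationFactor=3&collection.configName=myconf
```

replicationFactor is fixed at creation time and does not follow nodes that join later; a common workaround is calling ADDREPLICA once per live node.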
There are no test cases for this but you can try this syntax:
select a from b where _query_=(a:c AND d:f)
This should get translated to:
_query_:(a:c AND d:f)
This link describes the behavior of _query_
https://lucidworks.com/blog/2009/03/31/nested-queries-in-solr/
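The pattern that post describes, roughly, is combining independently parsed sub-queries in one request (parsers and field names here are illustrative):

```
q=_query_:"{!dismax qf=title}solr rocks" AND _query_:"{!edismax qf=body}lucene"
```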
Just not positive how the
Joel, one more thing.
Is there any way to use both the SQL and the Lucene query syntax? The thing
is that my business application is tightly coupled to the Lucene query
syntax, so I need a way to use both the SQL features (without the where
clause) and the query syntax of Lucene.
Thanks.
bq: we post json documents through the curl it takes the time (same time i
would like to say that we are not hard committing ). that curl takes time
i.e. 1.3 sec.
OK, I'm really confused. _what_ is taking 1.3 seconds? When you said
commit, I was thinking of Solr's commit operation, which is
Yes the AnalyticsQuery is being called twice in the logs, which is not a
good thing. Originally I believe this was not the case, but changes in the
QueryComponent in later releases have caused this to happen. The test cases
aren't broken by this so it didn't get caught.
The actual merge of the
Excellent!
Thanks Joel
2016-08-11 11:19 GMT-03:00 Joel Bernstein :
> There are two ways to do this with SolrJ:
>
> 1) Use the JDBC driver.
>
> 2) Use the SolrStream to send the request and then read() the Tuples. This
> is what the JDBC driver does under the covers. The
OK, some more info ... it's not aggregating because the doc values it's using
for grouping are the unique ID field's. There are some big differences in
the whole flow between searches against a single shard collection, and
searches against a multi-shard collection. In a single shard collection the
There are two ways to do this with SolrJ:
1) Use the JDBC driver.
2) Use the SolrStream to send the request and then read() the Tuples. This
is what the JDBC driver does under the covers. The sample code can be found
here:
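A minimal sketch of the JDBC route; the ZooKeeper address, collection and field names are assumptions, and it needs solr-solrj on the classpath and a running SolrCloud to execute:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrSqlJdbc {
    public static void main(String[] args) throws Exception {
        // Solr's JDBC URL points at the ZooKeeper ensemble of the SolrCloud
        // cluster; host, port and collection name here are illustrative.
        String url = "jdbc:solr://localhost:9983?collection=mycollection";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("select a from b limit 10")) {
            while (rs.next()) {
                // read each Tuple's field by column name
                System.out.println(rs.getString("a"));
            }
        }
    }
}
```

The SolrStream route does the same work without the JDBC layer: open the stream, loop on read() until the EOF tuple, then close it.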
Hey,
I'm trying to get the response of solr via QueryResponse using
QueryResponse queryResponse = client.query(solrParams); (where client is a
CloudSolrClient)
The error it throws is:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at
Hi Christian,
The query r?che may not return at least as many matches as roche,
depending on your analysis chain.
The difference is that roche is analyzed but r?che is not. Wildcard queries
are executed against the indexed/analyzed terms.
For example, if roche is indexed/analyzed as roch, the
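A text type with a stemming filter, like this illustrative one, produces exactly that roche → roch behavior in the index:

```xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- the stemmer indexes "roche" as "roch", so the unanalyzed
         wildcard query r?che finds no matching term -->
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>
```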
Hi,
What could be the reasons for wildcard searches with the Lucene Query Parser
NOT working?
We are using Solr 5.4.1 and, from the admin console, I am running searches
with the term 'roche' in a specific core. Everything is fine, I am getting
two matches, for instance. I would
Hi All,
We are running Solr 6.0 on an AWS EC2 instance (Windows Server 2012). Based on
the below URL, I found that we would be able to authenticate Solr in
standalone mode using "Kerberos Authentication".
But since we are running this in AWS, we don't have control over the
domain and
Hi Prasanna,
You can use Request Parameters in Solr 5.5 but not in your version.
"these parameters can be passed to the full-import command or defined in the
section in solrconfig.xml. This example shows the parameters with the
full-import command:
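The quoted doc amounts to referencing the parameter in data-config.xml and supplying it on the import command; the parameter and table names here are illustrative:

```xml
<!-- data-config.xml: pick the table from a request parameter -->
<entity name="docs" query="select * from ${dataimporter.request.tableName}"/>
```

The parameter is then passed on the command, e.g. /dataimport?command=full-import&tableName=abcd, or given a default value in the handler's defaults section in solrconfig.xml.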
Hi Midas,
1. How many indexing threads?
2. Do you batch documents and what is your batch size?
3. How frequently do you commit?
I would recommend:
1. Move commits to Solr (set auto soft commit to max allowed time)
2. Use batches (bulks)
3. Tune bulk size and number of threads to achieve max
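Recommendation 1 corresponds to a solrconfig.xml setup along these lines; the times are placeholders to tune:

```xml
<autoCommit>
  <!-- hard commit for durability; does not open a new searcher -->
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit controls when new documents become visible -->
  <maxTime>30000</maxTime>
</autoSoftCommit>
```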
Hi,
I have 7 cores.
In each data-config.xml, I have
There are similar structures on the production, testing and partner instances.
So if I have to make changes, I have to make them in all data-config files.
I am looking for a mechanism where some variables
Like
dbname=abcd
Emir,
other queries:
a) Solr cloud : NO
b)
c)
d)
e) we are using multi threaded system.
On Thu, Aug 11, 2016 at 11:48 AM, Midas A wrote:
> Emir,
>
> we post json documents through the curl it takes the time (same time i
> would like to say that we are not hard
Emir,
we post JSON documents through curl and it takes time (at the same time I
would like to say that we are not hard committing). That curl request takes
time, i.e. 1.3 sec.
On Wed, Aug 10, 2016 at 2:29 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
> Hi Midas,
>
> According to your