Hi All,
It turned out that the external file, which was being replicated every
10 minutes, was reloading the core as well. This was increasing the query
time.
Thanks
Kamal Kishore
On Thu, Jul 3, 2014 at 12:48 PM, Kamal Kishore Aggarwal
kkroyal@gmail.com wrote:
With the above
More memory or faster disks will make a much bigger improvement than a forced
merge.
What are you measuring? If it is average query time, that is not a good
measure. Look at 90th or 95th percentile. Test with queries from logs.
No user can see a 10% or 20% difference. If your managers are
Hi Walter,
I wonder why you think SolrCloud isn't necessary if you're indexing once
per week. Isn't the automatic failover and auto-sharding still useful? One
can also do custom sharding with SolrCloud if necessary.
On Wed, Jul 9, 2014 at 11:38 AM, Walter Underwood wun...@wunderwood.org
wrote:
Good point. I will see if I can get the necessary access rights on this
machine to run tcpdump.
Thanks for the suggestion,
Harald.
On 09.07.2014 00:32, Steve McKay wrote:
Sure sounds like a socket bug, doesn't it? I turn to tcpdump when Solr starts
behaving strangely in a socket-related way.
Hi All,
Thanks for your kind suggestions and inputs.
We have been going the optimize route and it has helped. Testing and
benchmarking around memory and performance have already been done.
While optimizing, we see scope for improvement by doing it in
parallel, so kindly suggest in what
Hi,
I am getting an OutOfMemory error (java.lang.OutOfMemoryError: Java heap
space) often in production because a particular TreeMap is taking up too much
memory in the JVM.
When I looked into the config files, I found an entity called
UserQryDocument where I am fetching the data from certain
Right. Without atomic updates, the client needs to fetch the document (or
rebuild it from the system of record), apply changes, and send the entire
document to Solr, including fields that haven't changed. With atomic updates,
the client sends a list of changes to Solr and the server handles the
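As a sketch of what that list of changes looks like (the document id and field names here are invented for illustration), an atomic-update request body sent to /update in JSON might be:

```json
[
  {
    "id": "doc1",
    "price": {"set": 99.9},
    "views": {"inc": 1},
    "tags": {"add": "new-tag"}
  }
]
```

Only the id and the changed fields go over the wire; Solr rebuilds the rest of the document server-side, which is why atomic updates require the other fields to be stored.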
Thanks, Elaine!
It worked for the GET method!
I will test soon with the PUT method :)
One strange thing is that it works with a real Solr instance but not with
an EmbeddedSolrServer ...
Probably it's a matter of dependencies, I'll let you know...
Many thanks
Cheers
2014-07-08 21:59 GMT+01:00
Hi Modassar,
Have you tried hitting the cores for each replica directly (instead of
using the collection)? i.e. if you had col_shard1_replica1 on node1,
then send the optimize command to that core URL directly:
curl -i -v "http://host:port/solr/col_shard1_replica1/update" -H
mmm wondering how to pass the payload for the PUT using that structure with
SolrQuery...
2014-07-09 15:42 GMT+01:00 Alessandro Benedetti benedetti.ale...@gmail.com:
Thanks, Elaine!
It worked for the GET method!
I will test soon with the PUT method :)
One strange thing is that it works
Hi Zane,
re 1: as an alternative to shard splitting, you can just overshard the
collection from the start and then migrate existing shards to new
hardware as needed. The migrate can happen online, see collection API
ADDREPLICA. Once the new replica is online on the new hardware, you
can unload
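For illustration, an ADDREPLICA call (available in recent 4.x releases; the collection, shard, and node names below are made up) looks roughly like:

```
http://host:port/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard2&node=newhost:8983_solr
```

Once the new replica reports active, the replica on the old hardware can be unloaded.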
1. Stop all indexing.
2. Stop Solr on M1.
3. Delete M1's data directory.
4. Temporarily make M1 a slave of M2 and wait for it to sync.
5. Make M1 a master again.
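Step 4 amounts to temporarily adding a slave section to M1's replication handler in solrconfig.xml, pointed at M2 (the host and core names below are placeholders):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://M2:8983/solr/core1/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

Remove the slave section again (step 5) once the index has synced.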
But really, this isn't a very good setup. You're wasting a machine that
you could be using. What I'd do is set up a single master and 3
Here's a blog on the topic of creating cores on particular nodes.
http://heliosearch.org/solrcloud-assigning-nodes-machines/
Himanshu:
What you wrote works perfectly well. FYI, this can also be done
with the Collections API. The Collections API is evolving,
though, so what commands are available
=== Short-version ===
Is there a way to join on the complement of a query? I want only the
Solr documents for which the nested join query does not match.
=== Longer-version ===
Query-time joins with {!join} are great at modeling the SQL equivalent of
patterns like this:
SELECT book_name FROM
On 7/9/2014 6:02 AM, yuvaraj ponnuswamy wrote:
Hi,
I am getting an OutOfMemory error (java.lang.OutOfMemoryError: Java heap
space) often in production because a particular TreeMap is taking up too
much memory in the JVM.
When I looked into the config files, I found an entity called
On 7/9/2014 8:49 AM, Timothy Potter wrote:
Hi Modassar,
Have you tried hitting the cores for each replica directly (instead of
using the collection)? i.e. if you had col_shard1_replica1 on node1,
then send the optimize command to that core URL directly:
curl -i -v
Maybe something like q=*:* AND NOT {!join … } would do the trick? (it’ll
depend on your version of Solr for support of the {!…} more natural nested
queries)
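Filled in with hypothetical field names (the from/to fields and the inner query below are invented, not from the original schema), the idea would look something like:

```
q=*:* AND NOT {!join from=book_id to=id v='chapter_text:dragons'}
```

i.e. match everything, then subtract the documents the join query matches.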
Erik
On Jul 9, 2014, at 11:24 AM, Bruce Johnson hbrucejohn...@gmail.com wrote:
=== Short-version ===
Is there a way to join
I have a situation here: when I search for BALANCER the results are
different compared to Balancer, and the ordering is different. When I search
for BALANCER, the documents with upper case come first in the list, and for
Balancer the order is different.
I am confused by this behavior. Can some
On 7/9/2014 10:17 AM, EXTERNAL Taminidi Ravi (ETI,
Automotive-Service-Solutions) wrote:
I have a situation here: when I search for BALANCER the results are
different compared to Balancer, and the ordering is different. When I search
for BALANCER, the documents with upper case come first in the
Thank you so much for the quick reply, Erik. And wow: I didn't realize you
could use join that fluidly. Very nice.
Is there some trove of Solr doc that I'm missing where this natural syntax
is explained? I wouldn't have asked such a basic question except that I
found no evidence that this was
Hi,
The Analysis admin page will tell you the truth. Just a guess: the porter stem
filter could be case sensitive, and that may cause the difference.
I am pretty sure porter stemming algorithms are designed to work on lowercase input.
By the way, you have two lowercase filters defined in the index analyzer.
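A sketch of an analyzer chain with a single lowercase filter placed ahead of the stemmer (a minimal example, not the poster's actual schema):

```xml
<analyzer type="index">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.PorterStemFilterFactory"/>
</analyzer>
```

With this ordering the stemmer only ever sees lowercase tokens, so BALANCER and Balancer stem identically.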
Hi Shawn,
Thanks for your valuable inputs.
For your information we are using SQL Server.
Also, we will try to use the JOIN instead of Cache Entity and check it.
Regards
P.Yuvaraj Kumar
On Wed, 9/7/14, Shawn Heisey s...@elyograg.org wrote:
Subject:
I solved the PUT issue using a POST (with the adding of more than one field):
ContentStreamUpdateRequest contentStreamUpdateRequest =
    new ContentStreamUpdateRequest(SCHEMA_SOLR_FIELDS_ENDPOINT);
SolrServer solrServer = this.getCore(core);
I think that’s pretty much a search time param, though it might end being used
on the update side as well. In any case, I know it doesn’t affect commit or
optimize.
Also, to my knowledge, SolrCloud optimize support was never explicitly added or
tested.
--
Mark Miller
about.me/markrmiller
Colleagues,
So far you can either vote or contribute to
https://issues.apache.org/jira/browse/SOLR-5743
Walter,
Usually, index-time tricks lose relationship information, which leads to
wrong counts.
On Tue, Jul 8, 2014 at 2:40 PM, Walter Liguori walter.ligu...@gmail.com
wrote:
Yes, also I've
Here is the schema part.
<field name="Name" type="text_general" indexed="true" stored="true" />
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday, July 09, 2014 12:24 PM
To: solr-user@lucene.apache.org
Subject: Re: Lower/UpperCase Issue
On 7/9/2014 10:17 AM,
Do I need to use a different algorithm instead of porter stemming? Can you
suggest anything you have in mind?
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID]
Sent: Wednesday, July 09, 2014 12:26 PM
To: solr-user@lucene.apache.org
Subject: Re: Lower/UpperCase Issue
Ahmet is correct: the porter stemmer assumes that your input is lower case,
so be sure to place the lower case filter before stemming.
BTW, this is the kind of detail that I have in my e-book:
Dear all,
I would like some recommendation of companies who work with enterprise
technical support for Solr in Brazil. Could someone help me?
Thanks!
Jefferson Olyntho Neto
jefferson.olyn...@unimedbh.com.br
Hi,
Please have look at the below part taken from solr.log file.
INFO - 2014-07-09 15:30:56.243;
org.apache.solr.update.processor.LogUpdateProcessor;
[collection1] webapp=/solr path=/update/extract params={literal.deny_token_
document=DEAD_AUTHORITY&literal.DocIcon=docx&resource.name=Anarchism-
Hi,
The field name sent with literal is Modified. In your screenshot, it is
last_modified. Do you use an f.map setting in solrconfig.xml?
I think it is better to send the solrconfig.xml file where the Solr Cell
handler is defined.
On Thursday, July 10, 2014 12:18 AM, Ameya Aware ameya.aw...@gmail.com
On 07/08/2014 03:17 AM, Poornima Jay wrote:
I'm using the google library which I mentioned in my first mail, saying I'm
using http://code.google.com/p/language-detection/. I have downloaded the jar
file from the below url
https://www.versioneye.com/java/org.apache.solr:solr-langid/3.6.1
I run a small solr cloud cluster (4.5) of 3 nodes, 3 collections with 3
shards each. Total index size per node is about 20GB with about 70M
documents.
In regular traffic (27-50 rpm) the performance is ok and response time
ranges from 100 to 500ms.
But when I start loading (overwriting) 70M
On 7/9/2014 2:02 PM, EXTERNAL Taminidi Ravi (ETI,
Automotive-Service-Solutions) wrote:
Here is the schema part.
<field name="Name" type="text_general" indexed="true" stored="true" />
Your query is *:*, which is a constant score query. You also have a
filter, which does not affect scoring.
Since there
I think any sub-clause can use a local syntax and branch off into
different query parsers. I could not find any examples of it either
but really need to do an advanced search and came up with this:
<str name="q">
  {!switch case='*:*' default=$q_lastName v=$lastName}
  AND {!switch case='*:*'
:
: Somebody (with more knowledge) should write up an in-depth article on
: this issue and whether the parent parser has to be default (lucene) or
: whatever.
It's a feature of Solr's standard query parser...
https://cwiki.apache.org/confluence/display/solr/Query+Syntax+and+Parsing
OK, so it cannot be eDisMax at the top.
However, the point I really am trying to make does not seem to be in
those links. All the examples of local parameters I have seen use them
at the start of the query as a standalone component. I haven't seen
examples where a query string contains several of
Hello,
Sematext would be happy to help. Please see signature.
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr Elasticsearch Support * http://sematext.com/
On Jul 9, 2014, at 4:15 PM, Jefferson Olyntho Neto (STI)
jefferson.olyn...@unimedbh.com.br wrote:
Dear all,
I've received a request from our business area to take a look at
emphasising ~0 phrase matches over ~1 (and greater) more than they are
already. I can't see any doco on the subject, and I'd like to ask if anyone
else has played in this area? Or at least is willing to sanity-check my
reasoning
I'm pretty much lost, please add some details:
1. 27-50 rpm. Queries? Updates?
2. What kinds of updates are happening if 1 is queries?
3. The various mail systems often strip screenshots; I don't see it.
4. What are you measuring anyway? QTime? Time for the response to
come back?
5. Are your logs
I think this is the Jira that implemented that feature:
SOLR-4093 - localParams syntax for standard query parser
https://issues.apache.org/jira/browse/SOLR-4093
Yeah, I don't think this is fully documented anywhere, other than the Jira
and the patch itself.
I think I had finished my query
Side note: putting LowercaseFilter in front of
WordDelimiterFilterFactory is usually a poor
choice. One of the purposes of WDFF is that
it breaks lower-upper case transitions into
separate tokens.
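That is, with a chain roughly like the following (a sketch, not the poster's actual schema), WDFF still sees the original case and can split on it:

```xml
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.WordDelimiterFilterFactory" splitOnCaseChange="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
```

Lowercasing first would turn e.g. "PowerShot" into "powershot" before WDFF ever gets the chance to split it.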
NOTE: This is _not_ germane to your problem
IMO.
But it _is_ an indication that you might want to
From the Solr 4.1 release notes:
* Solr QParsers may now be directly invoked in the lucene query syntax
via localParams and without the _query_ magic field hack.
Example: foo AND {!term f=myfield v=$qq}
-- Jack Krupansky
-Original Message-
From: Jack Krupansky
Sent: Thursday, July
Well, even JIRA and the release notes concentrate on the replacement of
_query_ with {!}, but not on having multiple of them. Was it
possible to have multiple _query_ segments in one 'q' query? I was not
aware of that either.
Basically, I am suggesting that somebody who knows this in depth
Hi,
Is there any way to run multiple queries at the same time?
The situation is:
1. a query comes in
2. check synonyms
3. get search results for all synonym queries and the original query
Even if I can get the search results by looping the searcher, as you know,
that is time consuming.
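One way to avoid looping the searcher is to OR the original term and its synonyms into a single query (the terms below are invented examples):

```
q=television OR tv OR telly
```

Alternatively, a query-time SynonymFilterFactory in the field's analyzer expands the synonyms inside one search, with no client-side loop at all.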
Thanks,
Chunki.