What I'm doing is simulating a host-crash situation.
Consider this: a host is not connected to the cluster.
So, if a host crashed, I cannot delete the down replicas by using
onlyIfDown='true'.
But in the Solr admin UI, it shows "down" for these replicas.
And without "onlyIfDown", it still shows a
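(For reference, a deletion attempt like the one described goes through the Collections API; the collection, shard, and replica names below are hypothetical:)

```python
from urllib.parse import urlencode

# Collections API DELETEREPLICA call; onlyIfDown=true refuses to delete
# a replica unless ZooKeeper reports its state as "down".
params = {
    "action": "DELETEREPLICA",
    "collection": "mycollection",  # hypothetical collection name
    "shard": "shard1",
    "replica": "core_node3",       # hypothetical replica name
    "onlyIfDown": "true",
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```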
On 7/17/2016 8:13 AM, Sarit Weber wrote:
> We noticed that indexing is much faster without SSL, but we can not
> remove it from distributed search.
Solr doesn't handle the networking. That's Jetty. Jetty sets up one
listening port, and that port either uses SSL or it doesn't. All
requests for
: Yes on both counts. Although it takes a bit of practice, if you add
: debug=query to the query you'll see a section of the
: response showing you exactly what the resulting query is after
: all the rules are applied.
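(For reference, a minimal sketch of adding that debug parameter to a request; the host and core name are assumptions:)

```python
from urllib.parse import urlencode

# debug=query asks Solr to include a "debug" section in the response
# showing the parsed form of the query after all rules are applied.
params = {"q": "title:shaver", "debug": "query"}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
print(url)
```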
In addition, something else about edismax that you might find useful (but
isn't
I strongly suspect you're not getting "real" searches, but
are hitting your query result cache or perhaps some other
cache. 1.24ms response times are quite unusual.
So checking the Solr queryResultCache hit ratio, whether any
fronting HTTP cache is being hit, and the like would be
my first step.
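(One place to inspect that hit ratio is the core's mbeans endpoint; the host and core name below are assumptions:)

```python
from urllib.parse import urlencode

# /admin/mbeans with stats=true reports cache statistics, including the
# queryResultCache hit ratio, in the response.
params = {"stats": "true", "cat": "CACHE", "wt": "json"}
url = "http://localhost:8983/solr/mycore/admin/mbeans?" + urlencode(params)
print(url)
```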
Most important information
solr-spec 5.4.1
solr-impl 5.4.1 1725212 - jpountz - 2016-01-18 11:51:45
lucene-spec 5.4.1
lucene-impl 5.4.1 1725212 - jpountz - 2016-01-18 11:44:59
java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.12.04.1)
OpenJDK 64-Bit Server
Hi All,
I have a Solr server on hardware with 60GB RAM, split as 50GB for Solr and the
rest for the OS. The search index is 120GB and built offline. There are no
updates to this index. I have 2 cores set up; they are completely identical,
except that they are on 2 different disk drives.
The test run with the same 3
Thanks for taking the time for the detailed response. I completely get what
you are saying. Makes sense.
On Tue, Jul 19, 2016 at 10:56 AM Erick Erickson
wrote:
> Justin:
>
> Well, "kill -9" just makes it harder. The original question
> was whether a replica being
Justin:
Well, "kill -9" just makes it harder. The original question
was whether a replica being "active" was a bug, and it's
not when you kill -9; the Solr node has no chance to
tell Zookeeper it's going away. ZK does modify
the live_nodes by itself, thus there are checks as
necessary when a
Just for the record:
After realizing that with „defType=dismax“ I really do get the expected output
I’ve found out what I need to change in my edismax configuration:
false
Then this will work:
> q=Braun Series 9 9095CC Men's Electric Shaver Wet/Dry with Clean and Renew
> Charger
> // edismax
It sounds like the node-local version of the ZK clusterstate has diverged from
the ZK cluster state. You should check the contents of zookeeper and verify the
state there looks sane. I’ve had issues (v5.4) on a few occasions where leader
election got screwed up to the point where I had to
Pardon me for hijacking the thread, but I'm curious about something you
said, Erick. I always thought that the point (in part) of going through
the pain of using zookeeper and creating replicas was so that the system
could seamlessly recover from catastrophic failures. Wouldn't an OOM
condition
On the nodes that have the replica in a recovering state we now see:
19-07-2016 16:18:28 ERROR RecoveryStrategy:159 - Error while trying to
recover. core=lookups_shard1_replica8:org.apache.solr.common.SolrException:
No registered leader was found after waiting for 4000ms , collection:
lookups
There are 11 collections, each only has one shard, and each node has
10 replicas (9 collections are on every node, 2 are just on one node).
We're not seeing any OOM errors on restart.
I think we're being patient waiting for the leader election to occur.
We stopped the troublesome "leader that is
First of all, killing with -9 is A Very Bad Idea. You can
leave write lock files lying around. You can leave
the state in an "interesting" place. You haven't given
Solr a chance to tell Zookeeper that it's going away.
(which would set the state to "down"). In short
when you do this you have to
CLASSIFICATION: UNCLASSIFIED
SOLR 6.1.0 + Nutch 2.3.1 works?
Thanks,
Kris
~~
Kris T. Musshorn
FileMaker Developer - Contractor - Catapult Technology Inc.
US Army Research Lab
Aberdeen Proving Ground
Application Management & Development Branch
410-278-7251
The fact that your index is 200G is meaningless,
assuming you're talking about disk size. Please
just measure before you make assumptions about
what will work, it'll save you a world of hurt. I'm not
claiming that just using EBS will satisfy your
need, but if you're swapping your search speed will
15M docs may still comfortably fit in a single shard!
I've seen up to 300M docs fit on a shard. Then
again I've seen 10M docs make things unacceptably
slow.
You simply cannot extrapolate from 10K to
5M reliably. Put all 5M docs on the stand-alone
servers and test _that_. Whenever I see numbers
You may want to utilise the document routing (_route_) option to have your
queries served faster, but above you are trying to compare apples with oranges,
meaning your performance test numbers have to be based on either your
actual numbers, like 3-5 million docs per shard, or enough to see
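(With the default compositeId router, routing is driven by a "!"-separated prefix in the document id; a minimal sketch, with hypothetical key and id values:)

```python
def route_id(shard_key: str, doc_id: str) -> str:
    # Documents sharing the same prefix before "!" hash to the same shard,
    # and queries can then target that shard with _route_=<shard_key>!
    return "{}!{}".format(shard_key, doc_id)

print(route_id("tenantA", "42"))  # tenantA!42
```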
How many replicas per Solr JVM? And do you
see any OOM errors when you bounce a server?
And how patient are you being, because it can
take 3 minutes for a leaderless shard to decide
it needs to elect a leader.
See SOLR-7280 and SOLR-7191 for the case
where lots of replicas are in the same JVM,
There is a lot of activity in the ParallelSQL world, all being done
by a very few people, so it's a matter of priorities. Can you
consider submitting a patch?
Best,
Erick
On Tue, Jul 19, 2016 at 8:12 AM, Pablo Anzorena wrote:
> Hey,
>
> Is anyone willing to add the
Could somebody help me to figure out my problem described below.
http://stackoverflow.com/questions/37946150/could-not-chain-dataimporthandler-and-schemaless-to-add-unknown-fields/37970809#37970809
Hey,
Is anyone willing to add the WHERE EXISTS and IN clauses into Parallel SQL?
Thanks.
Hi all - problem with a SolrCloud 5.5.0, we have a node that has most
of the collections on it marked as "Recovering" or "Recovery Failed".
It attempts to recover from the leader, but the leader responds with:
Error while trying to recover.
Hi Tomás!
Many thanks for responding - I agree, I'd say
https://issues.apache.org/jira/browse/SOLR-7495 is definitely the same issue.
I am working around that issue by using a StrField and copyField.
Thanks again,
Sebastian
-Original Message-
From: Tomás Fernández Löbbe
On 7/19/2016 4:56 AM, kostali hassan wrote:
> I am looking to display for each user: l'utilisateur est créé le
> $date à $time ("the user was created on $date at $time"), not
> $document->name est créé le $document->created
You'll have to do that in your application that queries Solr, splitting
the date and time information that it
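(As a sketch of that client-side split; the timestamp format assumes Solr's default UTC ISO-8601 dates, and the date/time display formats are arbitrary choices:)

```python
from datetime import datetime

def format_created(solr_timestamp: str) -> str:
    # Solr returns dates like "2016-07-19T10:56:00Z"; split them into the
    # date and time pieces the user wants to display.
    dt = datetime.strptime(solr_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    return "l'utilisateur est créé le {} à {}".format(
        dt.strftime("%d/%m/%Y"), dt.strftime("%H:%M"))

print(format_created("2016-07-19T10:56:00Z"))
# → l'utilisateur est créé le 19/07/2016 à 10:56
```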
Hi again,
Do you think it's possible to do that with server that will be dedicate to
indexing and server that will be dedicate to search but will work on the
same collections?
Thanks,
Sarit Weber
Guardium Software Developer
IBM Israel Software Labs, Jerusalem
Phone: +972-2-649-1712
email:
Hi Sebastian,
This looks like https://issues.apache.org/jira/browse/SOLR-7495
On Jul 19, 2016 3:46 AM, "Sebastian Riemer" wrote:
> May I respectfully refer again to a question I posted last week?
>
> Thank you very much and a nice day to you all!
>
> Sebastian
>
I am looking to display for each user:
l'utilisateur est créé le $date à $time ("the user was created on $date at $time")
not
$document->name est créé le $document->created
2016-07-18 16:48 GMT+01:00 Erick Erickson :
> I don't see how that relates to the original
> question.
>
> bq: when I display the field
I want to introduce MoreLikeThis to get similar documents for each query.
I have indexed rich data (PDF and MS Word); I guess the field to use for
similarity is CONTENT, which is also used for highlighting document content.
In my case, what is the best way to build MLT: MoreLikeThisHandler
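(One way to start is with the MoreLikeThis parameters on a normal search request; the host, core, and document id below are assumptions, and mlt.fl points at the CONTENT field mentioned above:)

```python
from urllib.parse import urlencode

# Ask Solr for documents similar to the one matched by q, judging
# similarity on the CONTENT field.
params = {
    "q": "id:doc1",       # hypothetical document id
    "mlt": "true",
    "mlt.fl": "CONTENT",
    "mlt.mindf": 1,
    "mlt.mintf": 1,
}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
print(url)
```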
This is just for performance testing; we have taken 10K records per shard. In
the live scenario it would be 30-50 lakh (3-5 million) per shard. I want to
search documents from all shards; it will slow down and take too long.
I know that in the case of SolrCloud, it will query all shard nodes and then
return the result. Is
Hi Mahmoud,
What you can do is use a local SSD disk as a cache for EBS. You can try
lvmcache or bcache. It will boost your performance while the data
remains on EBS.
Thanks,
Emir
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support *
Hi,
Any update on this, please?
Thanks.
On Sun, Jul 17, 2016 at 9:25 PM, Rajesh Kapur
wrote:
> Hi,
>
> Thanks for the reply.
>
> Yes, I tried to search by setting CFQ=abc\-def and also as "abc-def", but
> no luck.
>
> Thanks.
>
> On Sun, Jul 17, 2016 at 9:19 PM, Erick
Sure, here it is:
_id
Hi all,
Here's the situation.
I'm using Solr 5.3 in cloud mode.
I have 4 nodes.
After using "kill -9 pid-solr-node" to kill 2 nodes,
the replicas on those two nodes are still "ACTIVE" in ZooKeeper's
state.json.
The problem is, when I try to delete these down replicas with
parameter
May I respectfully refer again to a question I posted last week?
Thank you very much and a nice day to you all!
Sebastian
-
Hi all,
Tested on Solr 6.1.0 (as well as 5.4.0 and 5.5.0) using the "techproducts"
example the following