Absolutely. You haven't said which version of Solr you're using,
but there are several possibilities:
1> create the collection with replicationFactor=1, then use the
ADDREPLICA command to specify exactly what node the replicas
for each shard are created on with the 'node' parameter.
2> For recent
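The first option can be sketched as follows (the collection, shard, and node names here are assumptions, not from the thread; the commented curl lines would run against a live cluster):

```shell
# Sketch of option 1>: create with replicationFactor=1, then place each
# replica explicitly via ADDREPLICA's 'node' parameter.
# Collection/config/node names are made up for illustration.
BASE="http://localhost:8983/solr/admin/collections"
CREATE_URL="${BASE}?action=CREATE&name=mycoll&numShards=2&replicationFactor=1&collection.configName=myconf"
ADD_URL="${BASE}?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host2:8983_solr"
echo "$CREATE_URL"
echo "$ADD_URL"
# curl "$CREATE_URL" && curl "$ADD_URL"   # run against a running cluster
```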
Do not, repeat NOT, try to "cure" the "Overlapping onDeckSearchers"
warning by bumping this limit! What it means is that your commits
(either hard commits with openSearcher=true or soft commits) are
happening far too frequently, and your Solr instance is trying to do
all sorts of work that is immediately
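For reference, searcher-opening frequency is governed by the commit settings in solrconfig.xml. A sketch (the intervals below are illustrative assumptions, not recommendations):

```xml
<!-- solrconfig.xml sketch: intervals are illustrative only -->
<autoCommit>
  <maxTime>60000</maxTime>            <!-- hard commit every 60s -->
  <openSearcher>false</openSearcher>  <!-- don't open a new searcher on hard commit -->
</autoCommit>
<autoSoftCommit>
  <maxTime>30000</maxTime>            <!-- soft commit (opens a searcher) every 30s -->
</autoSoftCommit>
```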
Good to meet you!
It looks like you've tried to start Solr a time or two. When you start
up the "cloud" example
it creates
/opt/solr-5.5.0/example/cloud
and puts your SolrCloud stuff under there. It also automatically
uploads your configuration sets to ZooKeeper. When I get this kind of
thing, I
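A sketch of resetting the "cloud" example from scratch (the install path is taken from the message above; the commented commands assume the stock bin/solr script):

```shell
# Sketch: reset the "cloud" example. Path from the message; bin/solr
# commands are commented out since they need a Solr install to run.
SOLR_HOME=/opt/solr-5.5.0
echo "$SOLR_HOME/example/cloud"
# "$SOLR_HOME/bin/solr" stop -all
# rm -rf "$SOLR_HOME/example/cloud"
# "$SOLR_HOME/bin/solr" start -e cloud
```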
Hi Toke
The number of collections is just 10. One collection has 43 shards, and
each shard has two replicas. We continue importing data from Oracle all
the time while our systems provide the search service.
There are "Overlapping onDeckSearchers" in my solr.logs. What is the
meaning about the
Hi Jarus,
Have you tried stopping the solr process and restarting the cluster again?
Thanks
Shyam
On Tue, Mar 29, 2016 at 8:36 PM, Jarus Bosman wrote:
> Hi,
>
> Introductions first (as I was taught): My name is Jarus Bosman, I am a
> software developer from South Africa,
bq: where I see that the number of deleted documents just
keeps on growing and growing, but they never seem to be deleted
This shouldn't be happening. The default TieredMergePolicy weights
segments to be merged (which happens automatically) heavily as per
the percentage of deleted docs. Here's a
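If deleted documents really must be reclaimed sooner, one option is an expungeDeletes commit, which merges away segments with many deletes (the collection name is an assumption; the commented curl needs a live cluster, and this can trigger heavy merge I/O):

```shell
# Sketch: merge away deleted docs explicitly. Collection name is assumed.
URL="http://localhost:8983/solr/mycoll/update?commit=true&expungeDeletes=true"
echo "$URL"
# curl "$URL"   # run against a live cluster; use sparingly
```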
I'm trying to index some pages from Medium, but I get a 403 error. I
believe it is because Medium does not accept the Solr user-agent. Has
anyone ever experienced this? Do you know how to change it?
I appreciate any help
500
94
Server returned HTTP response code: 403 for URL:
Medium switches from http to https, so you would need the logic for dealing
with https security handshakes.
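A quick way to test whether the 403 is user-agent based is to fetch the page over https with a browser-like User-Agent (the UA string and URL below are illustrative assumptions):

```shell
# Sketch: check if a browser-like User-Agent avoids the 403.
UA="Mozilla/5.0 (compatible; MyCrawler/1.0)"
echo "$UA"
# curl -sL -A "$UA" "https://medium.com/some-post" -o /dev/null -w "%{http_code}\n"
```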
-- Jack Krupansky
On Tue, Mar 29, 2016 at 7:54 PM, Jeferson dos Anjos <
jefersonan...@packdocs.com> wrote:
> I'm trying to index some pages of the medium. But I get error 403. I
> believe
I thought I had sent this reply over the weekend. I had it all ready to
go, but it's still here waiting in my Drafts folder, so I'll send it now.
On 3/25/2016 11:05 AM, Victor D'agostino wrote:
> I am trying to set up a Solr Cloud environment of two Solr 5.4.1 nodes
> but the data are always
Hi Max,
Why not implement org.apache.lucene.analysis.util.ResourceLoaderAware?
Existing implementations all load/read text files.
Ahmet
On Wednesday, March 30, 2016 12:14 AM, Max Bridgewater
wrote:
HI,
I am facing the exact issue described here:
Alright, based on https://issues.apache.org/jira/browse/SOLR-5743 I can
assume that limit and mincount for the BlockJoin part will remain an open
issue for some time ...
Therefore, the answer is no as of Solr 5.5.0.
Thanks to Mikhail Khludnev for working on the subject.
>Tuesday, March 29, 2016,
HI,
I am facing the exact issue described here:
http://stackoverflow.com/questions/25623797/solr-plugin-classloader.
Basically I'm writing a Solr plugin by extending the SearchComponent class. My
new class is part of the a.jar archive. My class also depends on another jar,
b.jar. I placed both jars in my own
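One common way to make both jars visible to the plugin classloader is a pair of `<lib>` directives in solrconfig.xml (the directory path below is an assumption):

```xml
<!-- solrconfig.xml sketch: directory path is an assumption -->
<lib dir="/opt/solr/myplugins" regex="a\.jar" />
<lib dir="/opt/solr/myplugins" regex="b\.jar" />
```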
On 3/29/2016 1:58 AM, Victor D'agostino wrote:
> Thanks for your help, here is what I've done.
>
> 1. I deleted zookeepers and Solr installations.
> 2. I setup zookeepers on my two servers.
> 3. I successfully setup Solr Cloud node 1 with the same API call (1
> collection named db and two cores) :
Mikhail,
I totally see the point: the corresponding wiki page (
https://cwiki.apache.org/confluence/display/solr/BlockJoin+Faceting ) does not
mention it and says it's an experimental feature.
Is it correct that no additional options (limit, mincount, etc.) can be set
at all?
Or more
Alisa,
There is no such thing as child.facet.limit, etc.
On Tue, Mar 29, 2016 at 6:27 PM, Alisa Z. wrote:
> So the first issue was eventually solved by adding facet: {top_terms_by_doc:
> "unique(_root_)"} AND sorting the outer facet buckets by this facet:
>
> curl
So the first issue was eventually solved by adding facet: {top_terms_by_doc:
"unique(_root_)"} AND sorting the outer facet buckets by this facet:
curl http://localhost:8985/solr/enron_path_w_ts/query -d
'q={!parent%20which="type_s:doc"}type_s:doc.userData%20%2BSubject_t:california=0&
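The general shape of that facet (the field name below is an assumption; the host and collection are taken from the message, and the full query parameters were truncated in the archive) looks roughly like:

```shell
# Sketch of the JSON Facet API pattern: sort term buckets by a
# unique(_root_) count. The field name is an assumption.
FACET='{ top_terms: { type: terms, field: subject_t,
  sort: "top_terms_by_doc desc",
  facet: { top_terms_by_doc: "unique(_root_)" } } }'
echo "$FACET"
# curl http://localhost:8985/solr/enron_path_w_ts/query \
#   -d "q=*:*&rows=0&json.facet=$FACET"   # against a live cluster
```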
Hello everyone,
I apologise beforehand if this is a question that has been visited
numerous times on this list, but after hours spent on Google and
talking to SOLR savvy people on #solr @ Freenode I'm still a bit at a
loss about SOLR and deleted documents.
I have quite a few indexes in both
Hi,
Introductions first (as I was taught): My name is Jarus Bosman, I am a
software developer from South Africa, doing development in Java, PHP and
Delphi. I have been programming for 19 years and find out more every day
that I don't actually know anything about programming ;).
My problem:
We
On Tue, 2016-03-29 at 20:12 +0800, YouPeng Yang wrote:
> Our system still goes down as time goes on. We found lots of threads are
> WAITING. Here is the thread dump that I copied from the web page, and 4
> pictures of it.
> Is there any relationship with my problem?
That is a lot of
Hi
Our system still goes down as time goes on. We found lots of threads are
WAITING. Here is the thread dump that I copied from the web page, and 4
pictures of it.
Is there any relationship with my problem?
https://www.dropbox.com/s/h3wyez091oouwck/threaddump?dl=0
Hi,
I believe the default behavior of creating collections distributed across
shards through the following command
http://[solrlocation]:8983/solr/admin/collections?action=CREATE&name=[collection_name]&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=[configuration_name]
is that Solr will create the collection as follows
*shard1: *leader
Hi guys
It seems I tried to add two additional shards on an existing Solr
ensemble and this is not supported (or I didn't find out how).
So after setting up ZooKeeper I first set up my node n°2 and then set up
my node n°1 with
wget --no-proxy
Moreover, I created those new collections as a workaround, as my past
collections were not coming up after a complete restart of the machines
hosting ZooKeeper and Solr. I would be interested to know the proper
procedure for bringing old collections up after a restart of ZooKeeper
Thanks Reth for your response. It did work.
Regards,
Salman
On Mon, Mar 28, 2016 at 8:01 PM, Reth RM wrote:
> I think it should be "zkcli.bat" (all in lower case) that is shipped with
> solr not zkCli.cmd(that is shipped with zookeeper)
>
>
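For reference, the configset upload that zkcli is typically used for looks roughly like this (the config directory and configset name are assumptions; on Windows, use zkcli.bat as noted above):

```shell
# Sketch: upload a configset with Solr's bundled zkcli script.
# Paths and the configset name are assumptions.
CMD='server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/conf -confname myconf'
echo "$CMD"
# Windows: server\scripts\cloud-scripts\zkcli.bat with the same arguments
```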
Hi Erick
Thanks for your help, here is what I've done.
1. I deleted zookeepers and Solr installations.
2. I setup zookeepers on my two servers.
3. I successfully setup Solr Cloud node 1 with the same API call (1
collection named db and two cores) :
wget --no-proxy