> > I have found JMX in Solr to be crazy
> > unreliable but the admin/metrics web endpoint is pretty good.
> >
> > I have some (crappy) python for parsing it for datadog:
> > https://github.com/msporleder/dd-solrcloud you might be able to ship
> > something similar to in
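For what it's worth, the admin/metrics JSON is easy to flatten into the dotted metric names an agent like Datadog expects. A minimal sketch (the sample payload is hypothetical, trimmed to a couple of JVM fields):

```python
def flatten_metrics(payload):
    """Flatten an admin/metrics JSON payload into dotted metric-name -> value pairs."""
    flat = {}
    for registry, metrics in payload.get("metrics", {}).items():
        for name, value in metrics.items():
            if isinstance(value, dict):
                # Compound metrics (timers, GC collectors) expand one level deeper.
                for sub, v in value.items():
                    flat[f"{registry}.{name}.{sub}"] = v
            else:
                flat[f"{registry}.{name}"] = value
    return flat

# Hypothetical payload shaped like a /solr/admin/metrics?wt=json response.
sample = {"metrics": {"solr.jvm": {
    "memory.heap.used": 1234567,
    "gc.G1-Young-Generation": {"count": 7, "time": 42}}}}
print(flatten_metrics(sample))
```

The flattened dict can then be shipped to whatever gauge/counter API your agent provides.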
Hi
We use New Relic to monitor our SolrCloud 7.2.1 cluster. We would like to
get alerted on any cluster state change, for example a degraded shard or a
replica down. New Relic can monitor any JMX metrics.
Can you suggest JMX metrics that will help monitor a degraded cluster,
a recovering replica, shard
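Aside from JMX, the Collections API CLUSTERSTATUS response exposes replica state directly, which may be easier to alert on. A minimal sketch of walking it (the payload shape follows the Collections API docs; the sample here is hypothetical and trimmed):

```python
def degraded_replicas(cluster_status):
    """Walk a CLUSTERSTATUS response and list replicas that are not 'active'."""
    bad = []
    collections = cluster_status.get("cluster", {}).get("collections", {})
    for coll, cinfo in collections.items():
        for shard, sinfo in cinfo.get("shards", {}).items():
            for name, rinfo in sinfo.get("replicas", {}).items():
                if rinfo.get("state") != "active":
                    bad.append((coll, shard, name, rinfo.get("state")))
    return bad

# Hypothetical trimmed-down CLUSTERSTATUS payload.
status = {"cluster": {"collections": {"col1": {"shards": {
    "shard1": {"replicas": {
        "core_node1": {"state": "active"},
        "core_node2": {"state": "recovering"}}}}}}}}
print(degraded_replicas(status))  # -> [('col1', 'shard1', 'core_node2', 'recovering')]
```

A cron job could fetch `/solr/admin/collections?action=CLUSTERSTATUS&wt=json` and alert whenever this list is non-empty.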
We use Terraform on EC2 to create infrastructure as code for the SolrCloud
and ZooKeeper quorum (using a 3-node auto scaling target group Terraform
module), and Solr as well with an n-node auto scaling group module. The
auto scaling target group is just to make it easy to create the cluster
infrastructure. We need
As detailed below, the collection where we have issues has 16 shards with
2 replicas each.
On Sun, May 10, 2020, 9:10 PM matthew sporleder
wrote:
> Why so many shards?
>
> > On May 10, 2020, at 9:09 PM, Ganesh Sethuraman
> wrote:
> >
> > We are using ded
have high response times; it is noticeable when the test duration is short,
and the response times improve when the test runs for a longer duration.
Hope this information helps.
Regards
Ganesh
On Sun, May 10, 2020, 8:14 PM Shawn Heisey wrote:
> On 5/10/2020 4:48 PM, Ganesh Sethuraman wr
your Solr logs to ensure
> that the difference isn’t outside of Solr.
>
> While I can’t say what the exact problem is, I’m 99% sure that the number
> of deleted docs isn’t the culprit.
>
> Best,
> Erick
>
> > On May 9, 2020, at 6:22 PM, Ganesh Sethuraman
> wro
Hi Solr Users,
We use SolrCloud 7.2.1 with 2 Solr nodes in AWS. The shard size for these
collections does not exceed 5 GB. They have approximately 16 shards
with 2 replicas. We do deletes (by query) as well as large updates in some
of these Solr collections. We are seeing slower filter
Any other JVM settings change possible?
On Tue, Apr 28, 2020, 10:15 PM Sethuraman, Ganesh
wrote:
> Hi
>
> We are using SolrCloud 7.2.1 with a 3-node Zookeeper ensemble. We have 92
> collections, each on avg. having 8 shards and 2 replicas, with 2 EC2 nodes,
> with a JVM size of 18GB (G1 GC). We need
Hi Solr Users,
We are using Solr 7.2.1 with 2 nodes (245GB RAM each) and a 3-node ZK
cluster in production. We are using Java 8 with default GC settings (with
NewRatio=3) and a 15GB heap, changed to 16GB after the performance issue
mentioned below.
We have about 90 collections in this (~8 shards
We are using SolrCloud 7.2.1. Is there a leader-aware Python client (like
SolrJ for Java), which can send updates to the leader, and is it highly
available?
I see the PySolr project, https://pypi.org/project/pysolr/, but I am not
able to find any documentation on whether it supports leader-aware updates.
Regards
Regarding the SolrJ option, it certainly seems to be the best option. Do we
have a Python Solr client that can be Solr leader aware, like how it is
done in the SolrJ (Java) client?
Regards,
Ganesh
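Whether or not PySolr routes to leaders, the leader-lookup step such a client needs is straightforward from a CLUSTERSTATUS response. A minimal sketch (field names per the Collections API; the payload and host below are hypothetical):

```python
def leader_base_url(cluster_status, collection, shard):
    """Return the base_url of the active leader replica for a shard, or None."""
    shards = cluster_status["cluster"]["collections"][collection]["shards"]
    for rinfo in shards[shard]["replicas"].values():
        # Leaders are flagged with leader="true" in the cluster state.
        if rinfo.get("leader") == "true" and rinfo.get("state") == "active":
            return rinfo.get("base_url")
    return None

# Hypothetical trimmed CLUSTERSTATUS payload.
status = {"cluster": {"collections": {"col1": {"shards": {
    "shard1": {"replicas": {
        "core_node1": {"state": "active", "leader": "true",
                       "base_url": "http://10.0.0.1:8983/solr"},
        "core_node2": {"state": "active"}}}}}}}}
print(leader_base_url(status, "col1", "shard1"))
```

A full leader-aware client would also need the document-to-shard routing (composite id hashing), which is not shown here.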
On Mon, Feb 25, 2019 at 3:00 PM Shawn Heisey wrote:
> On 2/25/2019 11:15 AM, Ganesh Sethura
Hi
We are using SolrCloud 7.2.1. We are using the Solr CSV update handler to
do bulk updates (several millions of docs) into multiple collections. When
we make a call to the CSV update handler using the curl command line (as
below), we are pointing to a single server in Solr. During the problem time,
when
Any help on this is much appreciated.
On Wed, Feb 6, 2019 at 11:10 AM Ganesh Sethuraman
wrote:
> Hi
>
> We are using SolrCloud 7.2.1. We are using the backup and restore
> features, and we are finding that the restore is not working as expected.
> The restore is successful, but
Hi
We are using SolrCloud 7.2.1. We are using the backup and restore features,
and we are finding that the restore is not working as expected. The restore
is successful, but does not create the desired replicas even though
"replicationFactor" is set to 2 during the restore.
1. Create a "test"
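For reference, the RESTORE call we make is built roughly as follows; a sketch (host, backup name, and location are placeholders):

```python
from urllib.parse import urlencode

def restore_url(base, backup_name, collection, location, replication_factor=2):
    """Build a Collections API RESTORE call asking for 2 replicas per shard."""
    params = {"action": "RESTORE", "name": backup_name,
              "collection": collection, "location": location,
              "replicationFactor": replication_factor, "wt": "json"}
    return f"{base}/solr/admin/collections?{urlencode(params)}"

url = restore_url("http://localhost:8983", "test_backup", "test", "/backups")
print(url)
```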
We are using Solr 7.2.1 in SolrCloud mode, with embedded Zookeeper for test
purposes. We enabled SSL and authentication, and we are able to see the
admin UI working fine with authentication. But queries through the UI or
otherwise are failing with the following error. Request your help to
resolve the
Hi
We are using SolrCloud with Solr 7.2.1 with an external 3-node Zookeeper
quorum. I had a look at Basic AuthN, role-based authorization, and enabling
SSL. I have a few questions at the architectural level and would like your
suggestions.
1. Is it possible to expose the read Solr queries and real
eshmailbox/har/master/SolrLogFile
Here is the GC Log:
http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTgvMTAvMy8tLTAxX3NvbHJfZ2MubG9nLjUtLTIxLTE5LTU3
Thanks
Ganesh
On Fri, Oct 5, 2018 at 10:13 AM Shawn Heisey wrote:
> On 10/5/2018 5:15 AM, Ganesh Sethuraman wrote:
> > 1. Does GC and Solr Logs
for reads?
3. Is there any way to get notified/emailed if any replica on the servers
gets into recovery mode?
On Wed, Oct 3, 2018 at 5:26 PM Ganesh Sethuraman
wrote:
>
>
>
> On Tue, Oct 2, 2018 at 11:46 PM Shawn Heisey wrote:
>
>> On 10/2/2018 8:55 PM, Ganesh Sethuraman wrote
On Tue, Oct 2, 2018 at 11:46 PM Shawn Heisey wrote:
> On 10/2/2018 8:55 PM, Ganesh Sethuraman wrote:
> > We are using 2 node SolrCloud 7.2.1 cluster with external 3 node ZK
> > ensemble in AWS. There are about 60 collections at any point in time. We
> > have per JVM max h
Hi
We are using a 2-node SolrCloud 7.2.1 cluster with an external 3-node ZK
ensemble in AWS. There are about 60 collections at any point in time. We
have a per-JVM max heap of 8GB.
The problem is: we are seeing a few replicas of a few collections in the
"recovering" state and a few in the "down" state. Since we have 2
. But BACKUP is changing this equation now. Are there any best practices
for doing the backup during off-peak hours, and for doing some kind of
warm-up (if so, how to warm up)?
On Tue, Sep 18, 2018 at 5:48 PM Ganesh Sethuraman
wrote:
> Thanks for the information. I thought backup is going to be m
was
better without BACKUP, possibly because there was minimal disk access; but
with the backup running, reads (GET) are probably doing a disk read for
every request.
Thanks,
Ganesh
On Tue, Sep 18, 2018 at 3:43 PM Shawn Heisey wrote:
> On 9/18/2018 11:00 AM, Ganesh Sethuraman wrote:
> > We are u
Hi
We are using Solr 7.2.1 with SolrCloud with 35 collections and a 1-node ZK
ensemble (in a lower environment; we will have a 3-node ensemble) in AWS.
We are testing to see if we can have an async SolrCloud backup (
https://lucene.apache.org/solr/guide/7_2/collections-api.html#backup) done
every time we
Yes, I have the same problem too. A field with docValues=true and
stored=false does not come out in CSV. It works fine in JSON format.
On Fri, Jul 6, 2018 at 6:37 PM, Karthikeyan Shanmugasundaram <
ksundaram...@gmail.com> wrote:
> Yea i just tried again and below is the info,
>
> - Solr/Lucene
I would like to implement the slow query logging feature (
https://lucene.apache.org/solr/guide/6_6/configuring-logging.html#ConfiguringLogging-LoggingSlowQueries)
across multiple collections without changing solrconfig.xml in each and
every collection. Is that possible? I am using Solr 7.2.1.
If
Is there a way, using Linux commands, to print the top slow-performing
queries from the Solr 7 logs (/get handler or /select handler)? Reverse
sorted order across log files would be very useful and handy for
troubleshooting.
Regards
Ganesh
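As a sketch of the idea behind such a one-liner: Solr request-log lines end with a QTime=<millis> field, so you can extract and reverse-sort on it. The log lines below are hypothetical but follow the default request-log shape:

```python
import re

QTIME = re.compile(r"QTime=(\d+)")
PARAMS = re.compile(r"params=\{([^}]*)\}")

def top_slow_queries(lines, n=10):
    """Return the n slowest (qtime_ms, params) pairs from Solr request log lines."""
    hits = []
    for line in lines:
        m = QTIME.search(line)
        if not m:
            continue  # not a request line
        p = PARAMS.search(line)
        hits.append((int(m.group(1)), p.group(1) if p else line.strip()))
    return sorted(hits, reverse=True)[:n]

# Hypothetical request-log lines.
logs = [
    "INFO o.a.s.c.S.Request [col1] webapp=/solr path=/select params={q=*:*} hits=10 status=0 QTime=812",
    "INFO o.a.s.c.S.Request [col1] webapp=/solr path=/get params={id=42} status=0 QTime=3",
]
print(top_slow_queries(logs, n=1))  # -> [(812, 'q=*:*')]
```

The same effect is roughly a `grep -o 'QTime=[0-9]*' solr.log* | sort -t= -k2 -rn | head` pipeline, minus the params context.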
A step before creating the schema: the config needs to be uploaded to
ZooKeeper. You might want to upconfig:
bin/solr zk upconfig -n -d
more details here...
https://lucene.apache.org/solr/guide/6_6/using-zookeeper-to-manage-configuration-files.html
Regards
Ganesh
On Sun, Apr 29, 2018 at
Maybe you can check in the Admin UI --> Cloud --> Tree --> /live_nodes to
see the list of live nodes before running. If it is fewer than what you
expected, check the ZooKeeper logs, or make sure there is connectivity
between the shards and ZooKeeper.
On Thu, Mar 29, 2018 at 10:25 AM, Abhi Basu
egardless of the name of
> the unique key.
>
> The request should be in your case: http://:8080/api/
> collections/col1/get?id=69749398
>
> See: https://lucene.apache.org/solr/guide/7_2/realtime-get.html
>
> Sent from my iPhone
>
> > On Feb 18, 2018, at 9:28 PM, Ganesh Se
omas Fernandez Lobbe <tflo...@apple.com>
wrote:
> I think real-time get should be directed to the correct shard. Try:
> [COLLECTION]/get?id=[YOUR_ID]
>
> Sent from my iPhone
>
> > On Feb 18, 2018, at 3:17 PM, Ganesh Sethuraman <ganeshmail...@gmail.com>
> wrote:
Hi
I am using Solr 7.2.1. I have 8 shards in two nodes (two different
machines) using SolrCloud. The data was indexed with a unique key (the
default composite id) using the CSV update handler (batch indexing). Note
that I do NOT have while indexing. Then when I try to query the
collection col1 based
I am using Solr 7.2.1. I would like to perform a full outer join (emit
documents from both left and right and, if there are common ones, combine
them) with Solr streaming decorators on two collections and "update" it to
a new destination collection. I see a "merge" decorator option exists, but
this seems to
I also want to add that I am trying to do this on Solr 7.2.1.
On Tue, Feb 13, 2018 at 1:38 PM, Ganesh Sethuraman <ganeshmail...@gmail.com>
wrote:
>
> I would like to perform a full outer join (emit documents from both left
> and right and, if there are common ones, combine them) with Solr strea
One typo in the above streaming expression's sort: it is "id asc" in the
collection col1.
On Tue, Feb 13, 2018 at 1:38 PM, Ganesh Sethuraman <ganeshmail...@gmail.com>
wrote:
>
> I would like to perform a full outer join (emit documents from both left
> and right and if ther
I would like to perform a full outer join (emit documents from both left
and right and, if there are common ones, combine them) with Solr streaming
decorators on two collections and "update" it to a new destination
collection. I see a "merge" decorator option exists, but this seems to
return two JSON documents for
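For context, the expression I am trying is roughly the following (collection and field names are placeholders). As far as I can tell, merge() only interleaves the two sorted streams on the common sort key; it does not combine the fields of documents with the same id, which matches what I am seeing:

```
update(dest,
  batchSize=500,
  merge(
    search(col1, q="*:*", fl="id,f1", sort="id asc"),
    search(col2, q="*:*", fl="id,f2", sort="id asc"),
    on="id asc"))
```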