Re: Solrcloud load balancing / failover

2020-12-14 Thread Shalin Shekhar Mangar
No, the load balancing is based on random selection of replicas and
CPU is not consulted. There are limited ways to influence the replica
selection, see 
https://lucene.apache.org/solr/guide/8_4/distributed-requests.html#shards-preference-parameter
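For example (the collection name and host below are placeholders, not from the
original question), a request that prefers PULL replicas and otherwise local
ones might look like:

  curl "http://localhost:8983/solr/mycollection/select?q=*:*&shards.preference=replica.type:PULL,replica.location:local"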

If a replica fails then the query fails and an error is returned. I
think (but I am not sure) that SolrJ retries the request on some
specific errors in which case a different replica may be selected and
the request may succeed.

IMO, these are two weak areas of Solr right now. Suggestions/patches
are welcome :-)

On 12/11/20, Dominique Bejean  wrote:
> Hi,
>
> Is there any load balancing in SolrCloud based on CPU load on the Solr nodes?
>
> If a replica fails to handle a query for a shard, is the query sent to
> another replica so that it can still be completed?
>
> Regards
>
> Dominique
>


-- 
Regards,
Shalin Shekhar Mangar.


Re: solrcloud with EKS kubernetes

2020-12-14 Thread Shalin Shekhar Mangar
FWIW, I have seen Solr exhaust the IOPS burst quota on AWS causing
slow replication and high latency for search and indexing operations.
You may want to dig into CloudWatch metrics and see if you are
running into a similar issue. The baseline IOPS on gp2 is very
low (3 IOPS per GiB provisioned, with a minimum of 100).
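If it helps, here is a rough sketch of pulling the gp2 BurstBalance metric with
the AWS CLI (volume id, region, and time range below are placeholders); a
balance dropping toward 0% means the volume has exhausted its IOPS burst
credits:

  aws cloudwatch get-metric-statistics \
    --namespace AWS/EBS \
    --metric-name BurstBalance \
    --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
    --start-time 2020-12-13T00:00:00Z --end-time 2020-12-14T00:00:00Z \
    --period 300 --statistics Minimum \
    --region us-east-1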

Another thing to check is whether you have DNS TTLs configured for both
positive and negative lookups. When nodes go down and come back up in
Kubernetes, the pod's hostname remains the same but its IP can change, and
the JVM caches DNS lookups. This can cause timeouts.
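As a sketch (the values below are just examples, not recommendations), the JVM
DNS cache TTLs can be lowered via system properties in solr.in.sh, or via the
networkaddress.cache.ttl / networkaddress.cache.negative.ttl security
properties in the JDK's java.security file:

  # cache successful lookups for 30s and failed lookups for 10s
  SOLR_OPTS="$SOLR_OPTS -Dsun.net.inetaddr.ttl=30 -Dsun.net.inetaddr.negative.ttl=10"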

On 12/14/20, Abhishek Mishra  wrote:
> Hi Houston,
> Sorry for the late reply. Each shard is around 9 GB in size.
> Yeah, we are providing enough resources to the pods. We are currently
> using c5.4xlarge.
> Xms and Xmx are 16 GB. The machine has 32 GB of RAM and 16 cores.
> No, I haven't run it outside Kubernetes. But I do have colleagues who did
> the same on 7.2 and didn't face any issue regarding it.
> Storage volume is gp2, 50 GB.
> It's not the search queries where we are facing inconsistencies or timeouts;
> it seems some internal admin APIs sometimes have issues. For example, adding
> a new replica to the cluster sometimes results in inconsistencies, like
> recovery taking more than an hour.
>
> Regards,
> Abhishek
>
> On Thu, Dec 10, 2020 at 10:23 AM Houston Putman 
> wrote:
>
>> Hello Abhishek,
>>
>> It's really hard to provide any advice without knowing any information
>> about your setup/usage.
>>
>> Are you giving your Solr pods enough resources on EKS?
>> Have you run Solr in the same configuration outside of kubernetes in the
>> past without timeouts?
>> What type of storage volumes are you using to store your data?
>> Are you using headless services to connect your Solr Nodes, or ingresses?
>>
>> If this is the first time that you are using this data + Solr
>> configuration, maybe it's just that your data within Solr isn't optimized
>> for the type of queries that you are doing.
>> If you have run it successfully in the past outside of Kubernetes, then I
>> would look at the resources that you are giving your pods and the storage
>> volumes that you are using.
>> If you are using Ingresses, that might be causing slow connections
>> between
>> nodes, or between your client and Solr.
>>
>> - Houston
>>
>> On Wed, Dec 9, 2020 at 3:24 PM Abhishek Mishra 
>> wrote:
>>
>> > Hello guys,
>> > We are facing some issues (like timeouts, etc.) which are very
>> > inconsistent. By any chance could they be related to EKS? We are using
>> > Solr 7.7 and ZooKeeper 3.4.13. Should we move to ECS?
>> >
>> > Regards,
>> > Abhishek
>> >
>>
>


-- 
Regards,
Shalin Shekhar Mangar.


Solr Collection Reload

2020-12-14 Thread Moulay Hicham
Hi,

I have an issue with the collection reload API. The reload seems to be
hanging. It's been in the running state for many days.

Can you please suggest any documentation that explains the under-the-hood
steps of the reload task?

FYI, I am using Solr 8.1.
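In case it matters, this is roughly the shape of the calls involved (the
collection name and request id below are placeholders), assuming the reload
was submitted asynchronously:

  # submit the reload with an async id
  curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection&async=reload-1"

  # poll the status of the async request (this is what reports running/completed/failed)
  curl "http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=reload-1"

  # inspect the Overseer queue and stats to see whether the task is stuck
  curl "http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS"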

Thanks,

Moulay


Re: SolrException: Can't determine a Sort Order with Solr 6.6

2020-12-14 Thread fereira
I've run into the same issue with a Rails application that uses the RSolr gem
to make calls to Solr.  I will have to check whether the issue is in RSolr or
in my application, but changing the %2B (+ sign) to a %20 (space character) in
the request URL fixes the issue.
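For reference, the difference is roughly this (host, collection, and field
names below are placeholders): %2B decodes to a literal '+', so the sort value
arrives as "title_s+asc" with no space in it, while %20 (or a raw '+') decodes
to the space that separates the field from the direction:

  # fails with "Can't determine a Sort Order"
  curl "http://localhost:8983/solr/mycollection/select?q=*:*&sort=title_s%2Basc"

  # works
  curl "http://localhost:8983/solr/mycollection/select?q=*:*&sort=title_s%20asc"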

I also just wanted to say hello to wunder.  I used to work as a systems
administrator at HP when we were both at Pacific Technology Park in
Sunnyvale.  I moved to NY in 1994 and have been working as a programmer at
Cornell University ever since.





Re: 8.6.1 configuring ssl on centos 7

2020-12-14 Thread Bogdan C.
Thanks for replying Shawn. Yes, /etc/default/solr.in.sh is updated for 8984,
and there's no modification to /etc/init.d/solr. There are no SSL-related
errors in the logs on startup; the entry below confuses me even more:

2020-12-14 13:24:50.811 INFO  (main) [   ] o.e.j.s.AbstractConnector Started 
ServerConnector@13fd2ccd{SSL, (ssl, http/1.1)}{0.0.0.0:8983}

Thanks,
Bogdan


On Sunday, December 13, 2020, 2:26:51 p.m. EST, Shawn Heisey 
 wrote:  
 
 On 12/13/2020 7:21 AM, Bogdan C. wrote:
> Solr is installed and working on http (8983). I (think I) have the keystore
> configured properly and solr.in.sh modified for the SOLR_SSL_* config
> settings.
> Not sure how to modify the service startup to listen on 8984 for SSL. The
> Solr documentation says to start it using bin/solr -p 8984, but it's
> configured to start as a service so I'm not sure that applies here... I
> modified solr.in.sh with SOLR_PORT=8984 but it still starts up on 8983.

If you installed Solr as a service, then you'll need to edit 
/etc/default/solr.in.sh ... the one that's in the bin directory is ignored.
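For reference, a sketch of the relevant settings in /etc/default/solr.in.sh
(the keystore paths and passwords below are placeholders):

  SOLR_PORT=8984

  SOLR_SSL_ENABLED=true
  SOLR_SSL_KEY_STORE=/etc/solr/solr-ssl.keystore.p12
  SOLR_SSL_KEY_STORE_PASSWORD=secret
  SOLR_SSL_TRUST_STORE=/etc/solr/solr-ssl.keystore.p12
  SOLR_SSL_TRUST_STORE_PASSWORD=secret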

If that's the one you did edit, then I do not know why it isn't working 
... unless maybe /etc/init.d/solr has also been modified.  If that has 
happened, you would need to consult with whoever modified it.

Thanks,
Shawn

  

Re: Function Query Optimization

2020-12-14 Thread Jae Joo
Should SubQuery be faster than FunctionQuery?
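
(For context, a minimal sketch of the [subquery] transformer request shape
from the page linked below; the collection, field names, and the xyz.q
parameters are placeholders, not a verified drop-in for the concat/if
expression:)

  curl 'http://localhost:8983/solr/mycollection/select' \
    --data-urlencode 'q=*:*' \
    --data-urlencode 'fl=*,xyz:[subquery]' \
    --data-urlencode 'xyz.q={!terms f=field1 v=$row.field1}' \
    --data-urlencode 'xyz.fl=field1' \
    --data-urlencode 'xyz.rows=3'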

On Sat, Dec 12, 2020 at 10:24 AM Vincenzo D'Amore 
wrote:

> Hi, looking at this sample it seems you have just one document for '12345',
> one for '23456', and so on. If this is true, why not just try
> with a subquery:
>
> https://lucene.apache.org/solr/guide/6_6/transforming-result-documents.html#TransformingResultDocuments-_subquery_
>
> On Fri, Dec 11, 2020 at 3:31 PM Jae Joo  wrote:
>
> > I have a requirement to create a field - xyz - to be returned based on
> > the matched result.
> > Here is the code:
> >
> > XYZ:concat(
> >
> > if(exists(query({!v='field1:12345'})), '12345', ''),
> >
> > if(exists(query({!v='field1:23456'})), '23456', ''),
> >
> > if(exists(query({!v='field1:34567'})), '34567', ''),
> >
> > if(exists(query({!v='field:45678'})), '45678','')
> > ),
> >
> > This feels very complex, so I am looking for smarter and
> > faster ideas.
> >
> > Thanks,
> >
> > Jae
> >
>
>
> --
> Vincenzo D'Amore
>


Re: [SOLR-8.5.2] Commit through curl command is causing delay in issuing commit

2020-12-14 Thread raj.yadav
Hi All,


As I mentioned in my previous post, reloading/refreshing of the external
file is consuming most of the time during a commit operation.
To rule out the impact of external files, I deleted the external files from
all the shards and issued a commit through the curl command. The commit
operation completed in 3 seconds. Individual shards took 1.5 seconds to
complete the commit, but there was an additional delay of around 1.5 seconds
on the shard whose hostname was used to issue the commit, hence the overall
commit time of 3 seconds.
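For reference, a sketch of the commit as issued against the collection, and of
issuing it against a single core to see which node adds the delay (host,
collection, and core names below are placeholders):

  # distributed commit through one node's hostname
  curl "http://solr-host1:8983/solr/my_collection/update?commit=true&waitSearcher=true"

  # commit against an individual core, one shard at a time
  curl "http://solr-host1:8983/solr/my_collection_shard1_replica_n1/update?commit=true&waitSearcher=true"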

During this operation, there was no timeout or any other kind of error
(except the `external file not found` error, which is expected). I'm not able
to figure out what might be causing the delay on the shard behind the hostname
used for the request. Is there any setting that impacts the curl operation
which we might have accidentally changed?

I have been trying to solve this issue for the last 15 days; can someone
please help resolve it?
Let me know in case any information/logs are missing.

Regards,
Raj 


