UpdateProcessorChains -cdcr processor along with ignore commit processor

2020-07-15 Thread Natarajan, Rajeswari
Resending this as I still could not make this work. I would like to know 
whether it is even possible to have
both solr.CdcrUpdateProcessorFactory and 
solr.IgnoreCommitOptimizeUpdateProcessorFactory in solrconfig.xml and get both 
functionalities to work.
Please let me know.

Thank you,
Rajeswari
 
On 7/14/20, 12:40 PM, "Natarajan, Rajeswari"  
wrote:

Hi ,

I would like to have these two processors (cdcr and ignorecommit) in solrconfig.xml.

But CDCR fails with the below error, with either the cdcr-processor-chain or the 
ignore-commit-from-client chain:

version conflict for 60d35f0850afac66 expected=1671629672447737856 actual=-1, retry=0 commError=false errorCode=409



 

  

[solrconfig.xml snippet lost: the XML elements were stripped by the mail archive. The surviving text shows an updateRequestProcessorChain named "cdcr-processor-chain" and, in a second chain, the value 200 (apparently the statusCode of the ignore-commit processor).]





Also tried as below. Getting an error complaining about the custom processor:



  

[second solrconfig.xml attempt lost: the XML elements were again stripped by the archive; only a chain name "custom" and the value 200 survive.]



Is there a way these two processors can be applied together?

Thanks,
Rajeswari
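
For reference, a minimal sketch of a single chain that applies both factories, since only one update chain runs per request. This is a reconstruction, not the poster's exact config (which the archive stripped); the chain name is illustrative, the class names are the stock Solr factories, and the statusCode value is taken from the surviving text above:

    <updateRequestProcessorChain name="cdcr-ignore-commit-chain" default="true">
      <!-- turn commits/optimizes sent by clients into a 200 response instead of executing them -->
      <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
        <int name="statusCode">200</int>
      </processor>
      <!-- CDCR-aware distributed update processing; takes the place of the implicit DistributedUpdateProcessorFactory -->
      <processor class="solr.CdcrUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

Whether this ordering avoids the 409 version conflict above would still need to be verified against a live CDCR setup.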



SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-15 Thread Natarajan, Rajeswari
Yes, that's correct. I did that and the exception is gone.

But I see the below exception; not sure what the reason for this NPE is.

2020-07-15 10:28:14.453 INFO  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
again: IOException occurred when talking to server at: 
http://10-169-50-16.search-solrcloud-solrcloud.service:8983/solr
2020-07-15 10:28:14.956 INFO  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
again: IOException occurred when talking to server at: 
http://10-169-50-16.search-solrcloud-solrcloud.service:8983/solr
2020-07-15 10:28:15.459 INFO  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
again: IOException occurred when talking to server at: 
http://10-169-50-16.search-solrcloud-solrcloud.service:8983/solr
2020-07-15 10:28:15.960 WARN  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
10-169-50-16.search-solrcloud-solrcloud.service:8983_solr => 
java.lang.NullPointerException
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
java.lang.NullPointerException: null
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at java.util.HashMap.forEach(HashMap.java:1289) ~[?:1.8.0_211]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:225)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:271)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:506)
 ~[solr-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 
2020-04-08 09:01:41]
at 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:378)
 ~[solr-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 
2020-04-08 09:01:41]
at 
org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:235)
 ~[solr-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 
2020-04-08 09:01:41]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_211]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
~[?:1.8.0_211]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_211]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_211]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_211]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_211]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_211]

Thanks,
Rajeswari
On 7/15/20, 6:29 AM, "Kevin Risden"  wrote:

You need to remove the references to SOLR_SSL_CLIENT_KEY_STORE and 
"-Djavax.net.ssl.keyStore" from bin/solr or bin/solr.cmd. This is different
from solr.in.sh.

The way the bin/solr script is written, it falls back to whatever is
provided as SOLR_SSL_KEY_STORE for the client keystore, which is causing
the issues.

Kevin Risden



On Wed, Jul 15, 2020 at 3:45 AM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Thank you for your reply. I looked at solr.in.sh I see that
> SOLR_SSL_CLIENT_KEY_STORE  is already commented out by default. But you 
are
> right I looked at the running solr,  I see the option
> 

SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-15 Thread Natarajan, Rajeswari
Here is the result. I removed the else block and tested. You are correct, the 
previous exception which I saw went away.

But I see the below exception; not sure what the reason for this NPE is.

2020-07-15 10:28:14.453 INFO  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
again: IOException occurred when talking to server at: 
http://10-169-50-16.search-solrcloud-solrcloud.service:8983/solr
2020-07-15 10:28:14.956 INFO  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
again: IOException occurred when talking to server at: 
http://10-169-50-16.search-solrcloud-solrcloud.service:8983/solr
2020-07-15 10:28:15.459 INFO  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
again: IOException occurred when talking to server at: 
http://10-169-50-16.search-solrcloud-solrcloud.service:8983/solr
2020-07-15 10:28:15.960 WARN  (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
10-169-50-16.search-solrcloud-solrcloud.service:8983_solr => 
java.lang.NullPointerException
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
java.lang.NullPointerException: null
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at java.util.HashMap.forEach(HashMap.java:1289) ~[?:1.8.0_211]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:225)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:271)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128)
 ~[solr-solrj-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera 
- 2020-04-08 09:01:44]
at 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:506)
 ~[solr-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 
2020-04-08 09:01:41]
at 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:378)
 ~[solr-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 
2020-04-08 09:01:41]
at 
org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:235)
 ~[solr-core-8.5.1.jar:8.5.1 edb9fc409398f2c3446883f9f80595c884d245d0 - ivera - 
2020-04-08 09:01:41]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_211]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
~[?:1.8.0_211]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_211]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_211]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_211]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_211]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_211]

Thanks,
Rajeswari

On 7/15/20, 2:53 AM, "Natarajan, Rajeswari"  
wrote:

Ok, I looked at the solr script; here is the logic. Even if 
SOLR_SSL_CLIENT_KEY_STORE is not set, it sets 
-Djavax.net.ssl.keyStore=$SOLR_SSL_KEY_STORE. Let me remove the else part and 
test it. Thanks again for the pointer.

if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE"

if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD" ]; then
  export 
SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
fi
if [ -n "$SOLR_SSL_C

SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-15 Thread Natarajan, Rajeswari
Ok, I looked at the solr script; here is the logic. Even if 
SOLR_SSL_CLIENT_KEY_STORE is not set, it sets 
-Djavax.net.ssl.keyStore=$SOLR_SSL_KEY_STORE. Let me remove the else part and 
test it. Thanks again for the pointer.

if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE"

if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD" ]; then
  export 
SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
fi
if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_TYPE" ]; then
  SOLR_SSL_OPTS+=" 
-Djavax.net.ssl.keyStoreType=$SOLR_SSL_CLIENT_KEY_STORE_TYPE"
fi
  else
if [ -n "$SOLR_SSL_KEY_STORE" ]; then
  SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_KEY_STORE"
fi
if [ -n "$SOLR_SSL_KEY_STORE_TYPE" ]; then
  SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStoreType=$SOLR_SSL_KEY_STORE_TYPE"
fi
  fi

Thanks,
Rajeswari
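
For reference, a sketch of what the edited block in bin/solr looks like once the else fallback is removed, so -Djavax.net.ssl.keyStore is only set when a client keystore is explicitly configured. This is based on the 8.5.1 script quoted above; surrounding lines may differ in other releases:

    if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
      SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE"

      if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD" ]; then
        export SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
      fi
      if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_TYPE" ]; then
        SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStoreType=$SOLR_SSL_CLIENT_KEY_STORE_TYPE"
      fi
    fi
    # the former else-branch, which copied SOLR_SSL_KEY_STORE into
    # javax.net.ssl.keyStore, is gone, so the server keystore is no longer
    # reused as a client keystore by default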

On 7/15/20, 1:49 AM, "Natarajan, Rajeswari"  
wrote:

From the /bin directory I did a grep for SOLR_SSL_CLIENT_KEY_STORE; 
this is what I see. But somehow the option -Djavax.net.ssl.keyStore 
is added:
grep SOLR_SSL_CLIENT_KEY_STORE *
grep: init.d: Is a directory
solr:  if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
solr:SOLR_SSL_OPTS+=" 
-Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE"
solr:if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD" ]; then
solr:  export 
SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
solr:if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_TYPE" ]; then
solr:  SOLR_SSL_OPTS+=" 
-Djavax.net.ssl.keyStoreType=$SOLR_SSL_CLIENT_KEY_STORE_TYPE"
solr.cmd:  IF DEFINED SOLR_SSL_CLIENT_KEY_STORE (
solr.cmd:set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! 
-Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE%"
solr.cmd:IF DEFINED SOLR_SSL_CLIENT_KEY_STORE_TYPE (
solr.cmd:  set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! 
-Djavax.net.ssl.keyStoreType=%SOLR_SSL_CLIENT_KEY_STORE_TYPE%"
solr.in.cmd:REM set SOLR_SSL_CLIENT_KEY_STORE=
solr.in.cmd:REM set SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
solr.in.cmd:REM set SOLR_SSL_CLIENT_KEY_STORE_TYPE=
solr.in.sh:#SOLR_SSL_CLIENT_KEY_STORE=
solr.in.sh:#SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
solr.in.sh:#SOLR_SSL_CLIENT_KEY_STORE_TYPE=

Thanks,
Rajeswari
On 7/15/20, 12:46 AM, "Natarajan, Rajeswari"  
wrote:

Thank you for your reply. I looked at solr.in.sh and see that 
SOLR_SSL_CLIENT_KEY_STORE is already commented out by default. But you are 
right: looking at the running Solr, I see the option -Djavax.net.ssl.keyStore 
pointing to solr-ssl.keystore.p12, and I am not sure how it is getting that value. Let 
me dig more. Thanks for the pointer. Also, if you have a pointer on how it gets 
populated other than by the SOLR_SSL_CLIENT_KEY_STORE config in solr.in.sh, please 
let me know.

#SOLR_SSL_CLIENT_KEY_STORE=
#SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
#SOLR_SSL_CLIENT_KEY_STORE_TYPE=
#SOLR_SSL_CLIENT_TRUST_STORE=
#SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD=
#SOLR_SSL_CLIENT_TRUST_STORE_TYPE=

Yes we are not using Solr client auth.

Thanks,
Rajeswari

On 7/14/20, 5:55 PM, "Kevin Risden"  wrote:

Hmmm so I looked closer - it looks like a side effect of the default
passthrough of the keystore being passed to the client keystore.

https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L229

Can you remove or comment out the entire SOLR_SSL_CLIENT_KEY_STORE 
section from
bin/solr or bin/solr.cmd, depending on which version you are using? 
The key
is to make sure "-Djavax.net.ssl.keyStore" is not set.

This assumes that you aren't using Solr client auth (which based on 
your
config you aren't) and you aren't trying to use Solr to connect to 
anything
that is secured via clientAuth (most likely you aren't).

If you can try this and report back that would be awesome. I think 
this
    will fix the issue and it would be possible to make client auth opt 
in
instead of default fall back.
Kevin Risden



On Tue, Jul 14, 2020 at 1:46 AM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Thank you so much for the response.  Below are the configs I have 
in
> solr.in.sh and I followed
> https://lucene.apache.org/solr/guide/8_5/enabling-ssl.html 
documentation
>
> # Enables HTTPS. It is implicitly true if you set 
SOLR_SSL_KEY_STORE. Use
> this config
> # to enable ht

Re: [CAUTION] Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-15 Thread Natarajan, Rajeswari
From the /bin directory I did a grep for SOLR_SSL_CLIENT_KEY_STORE; 
this is what I see. But somehow the option -Djavax.net.ssl.keyStore is 
added:
grep SOLR_SSL_CLIENT_KEY_STORE *
grep: init.d: Is a directory
solr:  if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
solr:SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE"
solr:if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD" ]; then
solr:  export 
SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
solr:if [ -n "$SOLR_SSL_CLIENT_KEY_STORE_TYPE" ]; then
solr:  SOLR_SSL_OPTS+=" 
-Djavax.net.ssl.keyStoreType=$SOLR_SSL_CLIENT_KEY_STORE_TYPE"
solr.cmd:  IF DEFINED SOLR_SSL_CLIENT_KEY_STORE (
solr.cmd:set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! 
-Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE%"
solr.cmd:IF DEFINED SOLR_SSL_CLIENT_KEY_STORE_TYPE (
solr.cmd:  set "SOLR_SSL_OPTS=!SOLR_SSL_OPTS! 
-Djavax.net.ssl.keyStoreType=%SOLR_SSL_CLIENT_KEY_STORE_TYPE%"
solr.in.cmd:REM set SOLR_SSL_CLIENT_KEY_STORE=
solr.in.cmd:REM set SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
solr.in.cmd:REM set SOLR_SSL_CLIENT_KEY_STORE_TYPE=
solr.in.sh:#SOLR_SSL_CLIENT_KEY_STORE=
solr.in.sh:#SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
solr.in.sh:#SOLR_SSL_CLIENT_KEY_STORE_TYPE=

Thanks,
Rajeswari
On 7/15/20, 12:46 AM, "Natarajan, Rajeswari"  
wrote:

Thank you for your reply. I looked at solr.in.sh and see that 
SOLR_SSL_CLIENT_KEY_STORE is already commented out by default. But you are 
right: looking at the running Solr, I see the option -Djavax.net.ssl.keyStore 
pointing to solr-ssl.keystore.p12, and I am not sure how it is getting that value. Let 
me dig more. Thanks for the pointer. Also, if you have a pointer on how it gets 
populated other than by the SOLR_SSL_CLIENT_KEY_STORE config in solr.in.sh, please 
let me know.

#SOLR_SSL_CLIENT_KEY_STORE=
#SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
#SOLR_SSL_CLIENT_KEY_STORE_TYPE=
#SOLR_SSL_CLIENT_TRUST_STORE=
#SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD=
#SOLR_SSL_CLIENT_TRUST_STORE_TYPE=

Yes we are not using Solr client auth.

Thanks,
Rajeswari

On 7/14/20, 5:55 PM, "Kevin Risden"  wrote:

Hmmm so I looked closer - it looks like a side effect of the default
passthrough of the keystore being passed to the client keystore.

https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L229

Can you remove or comment out the entire SOLR_SSL_CLIENT_KEY_STORE section 
from
bin/solr or bin/solr.cmd, depending on which version you are using? The 
key
is to make sure "-Djavax.net.ssl.keyStore" is not set.

This assumes that you aren't using Solr client auth (which based on your
config you aren't) and you aren't trying to use Solr to connect to 
anything
that is secured via clientAuth (most likely you aren't).

If you can try this and report back that would be awesome. I think this
will fix the issue and it would be possible to make client auth opt in
    instead of default fall back.
Kevin Risden



On Tue, Jul 14, 2020 at 1:46 AM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Thank you so much for the response.  Below are the configs I have in
> solr.in.sh and I followed
> https://lucene.apache.org/solr/guide/8_5/enabling-ssl.html 
documentation
>
> # Enables HTTPS. It is implicitly true if you set SOLR_SSL_KEY_STORE. 
Use
> this config
> # to enable https module with custom jetty configuration.
> SOLR_SSL_ENABLED=true
> # Uncomment to set SSL-related system properties
> # Be sure to update the paths to the correct keystore for your 
environment
> SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.p12
> SOLR_SSL_KEY_STORE_PASSWORD=secret
> SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.p12
> SOLR_SSL_TRUST_STORE_PASSWORD=secret
> # Require clients to authenticate
> SOLR_SSL_NEED_CLIENT_AUTH=false
> # Enable clients to authenticate (but not require)
> SOLR_SSL_WANT_CLIENT_AUTH=false
> # SSL Certificates contain host/ip "peer name" information that is
> validated by default. Setting
> # this to false can be useful to disable these checks when re-using a
> certificate on many hosts
> SOLR_SSL_CHECK_PEER_NAME=true
>
> In local , with the below certificate it works
> ---
>
> keytool -list -keystore solr-ssl.keystore.p12
> Enter keystore password:
> Keystore type: PKCS12
> Keystore provider: SUN
>
> Your keystore contains 1 entry
>
> 

Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-15 Thread Natarajan, Rajeswari
Thank you for your reply. I looked at solr.in.sh and see that 
SOLR_SSL_CLIENT_KEY_STORE is already commented out by default. But you are 
right: looking at the running Solr, I see the option -Djavax.net.ssl.keyStore 
pointing to solr-ssl.keystore.p12, and I am not sure how it is getting that value. Let 
me dig more. Thanks for the pointer. Also, if you have a pointer on how it gets 
populated other than by the SOLR_SSL_CLIENT_KEY_STORE config in solr.in.sh, please 
let me know.

#SOLR_SSL_CLIENT_KEY_STORE=
#SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=
#SOLR_SSL_CLIENT_KEY_STORE_TYPE=
#SOLR_SSL_CLIENT_TRUST_STORE=
#SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD=
#SOLR_SSL_CLIENT_TRUST_STORE_TYPE=

Yes we are not using Solr client auth.

Thanks,
Rajeswari

On 7/14/20, 5:55 PM, "Kevin Risden"  wrote:

Hmmm so I looked closer - it looks like a side effect of the default
passthrough of the keystore being passed to the client keystore.

https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L229

Can you remove or comment out the entire SOLR_SSL_CLIENT_KEY_STORE section from
bin/solr or bin/solr.cmd, depending on which version you are using? The key
is to make sure "-Djavax.net.ssl.keyStore" is not set.

This assumes that you aren't using Solr client auth (which based on your
config you aren't) and you aren't trying to use Solr to connect to anything
that is secured via clientAuth (most likely you aren't).

If you can try this and report back that would be awesome. I think this
will fix the issue and it would be possible to make client auth opt in
instead of default fall back.
Kevin Risden



On Tue, Jul 14, 2020 at 1:46 AM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Thank you so much for the response.  Below are the configs I have in
> solr.in.sh and I followed
> https://lucene.apache.org/solr/guide/8_5/enabling-ssl.html documentation
>
> # Enables HTTPS. It is implicitly true if you set SOLR_SSL_KEY_STORE. Use
> this config
> # to enable https module with custom jetty configuration.
> SOLR_SSL_ENABLED=true
> # Uncomment to set SSL-related system properties
> # Be sure to update the paths to the correct keystore for your environment
> SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.p12
> SOLR_SSL_KEY_STORE_PASSWORD=secret
> SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.p12
> SOLR_SSL_TRUST_STORE_PASSWORD=secret
> # Require clients to authenticate
> SOLR_SSL_NEED_CLIENT_AUTH=false
> # Enable clients to authenticate (but not require)
> SOLR_SSL_WANT_CLIENT_AUTH=false
> # SSL Certificates contain host/ip "peer name" information that is
> validated by default. Setting
> # this to false can be useful to disable these checks when re-using a
> certificate on many hosts
> SOLR_SSL_CHECK_PEER_NAME=true
>
> In local , with the below certificate it works
> ---
>
> keytool -list -keystore solr-ssl.keystore.p12
> Enter keystore password:
> Keystore type: PKCS12
> Keystore provider: SUN
>
> Your keystore contains 1 entry
>
> solr-18, Jun 26, 2020, PrivateKeyEntry,
> Certificate fingerprint (SHA1):
> AB:F2:C8:84:E8:E7:A2:BF:2D:0D:2F:D3:95:4A:98:5B:2A:88:81:50
> C02W48C6HTD6:solr-8.5.1 i843100$ keytool -list -v -keystore
> solr-ssl.keystore.p12
> Enter keystore password:
> Keystore type: PKCS12
> Keystore provider: SUN
>
> Your keystore contains 1 entry
>
> Alias name: solr-18
> Creation date: Jun 26, 2020
> Entry type: PrivateKeyEntry
> Certificate chain length: 1
> Certificate[1]:
> Owner: CN=localhost, OU=Organizational Unit, O=Organization, L=Location,
> ST=State, C=Country
> Issuer: CN=localhost, OU=Organizational Unit, O=Organization, L=Location,
> ST=State, C=Country
> Serial number: 45a822c8
> Valid from: Fri Jun 26 00:13:03 PDT 2020 until: Sun Nov 10 23:13:03 PST
> 2047
> Certificate fingerprints:
>  MD5:  0B:80:54:89:44:65:93:07:1F:81:88:8D:EC:BD:38:41
>  SHA1: AB:F2:C8:84:E8:E7:A2:BF:2D:0D:2F:D3:95:4A:98:5B:2A:88:81:50
>  SHA256:
> 
9D:65:A6:55:D7:22:B2:72:C2:20:55:66:F8:0C:9C:48:B1:F6:48:40:A4:FB:CB:26:77:DE:C4:97:34:69:25:42
> Signature algorithm name: SHA256withRSA
> Subject Public Key Algorithm: 2048-bit RSA key
> Version: 3
>
> Extensions:
>
> #1: ObjectId: 2.5.29.17 Criticality=false
> SubjectAlternativeName [
>   DNSName: localhost
>

UpdateProcessorChains -cdcr processor along with ignore commit processor

2020-07-14 Thread Natarajan, Rajeswari
Hi ,

I would like to have these two processors (cdcr and ignorecommit) in solrconfig.xml.

But CDCR fails with the below error, with either the cdcr-processor-chain or the 
ignore-commit-from-client chain:

version conflict for 60d35f0850afac66 expected=1671629672447737856 actual=-1, retry=0 commError=false errorCode=409



 

  

[solrconfig.xml snippet lost: the XML elements were stripped by the mail archive. The surviving text shows an updateRequestProcessorChain named "cdcr-processor-chain" and, in a second chain, the value 200 (apparently the statusCode of the ignore-commit processor).]





Also tried as below. Getting an error complaining about the custom processor:



  

[second solrconfig.xml attempt lost: the XML elements were again stripped by the archive; only a chain name "custom" and the value 200 survive.]



Is there a way these two processors can be applied together?

Thanks,
Rajeswari


Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Natarajan, Rajeswari
ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:false
  PathLen: undefined
]

#4: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  serverAuth
]

#5: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  DigitalSignature
  Key_Encipherment
]

#6: ObjectId: 2.16.840.1.113730.1.1 Criticality=false
NetscapeCertType [
   SSL server
]

#7: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: search-solrcloud-solrcloud.service
  DNSName: search-solrcloud-solrcloud.service.mu.aws.ariba.com
  DNSName: *.query.mu.aws.ariba.com
  DNSName: *.query
  DNSName: *.service
  DNSName: 
e046469b-1bb0-55f6-913f-bd6d52b238a8.search-solrcloud-solrcloud.service.mu.aws.ariba.com
  DNSName: 
e046469b-1bb0-55f6-913f-bd6d52b238a8.search-solrcloud-solrcloud.service
  DNSName: *.service.mu.aws.ariba.com
  DNSName: 1.search-solrcloud-solrcloud.service.mu.aws.ariba.com
  DNSName: 1.search-solrcloud-solrcloud.service
  DNSName: localhost
  IPAddress: 10.1.56.9
  IPAddress: 10.169.50.16
  IPAddress: 127.0.0.1
]

#8: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
: 3F 9D 3D 24 48 1E 61 3C   BD C0 A4 07 8B 64 51 0D  ?.=$H.a<.dQ.
0010: A2 B2 FE 89
]
]

Certificate[2]:
Owner: CN=SAP Ariba Cobalt Sidecar Intermediate CA, OU=COBALT, O=SAP Ariba, 
ST=CA, C=US
Issuer: CN=SAP Ariba Cobalt CA, OU=ES, O=SAP Ariba, L=Palo Alto, ST=CA, C=US
Serial number: 1001
Valid from: Thu Apr 16 07:18:55 GMT 2020 until: Sun Apr 14 07:18:55 GMT 2030
Certificate fingerprints:
 MD5:  FA:70:2F:DB:63:36:66:71:A6:7B:0F:46:F3:52:0B:3C
 SHA1: 4F:27:D3:E3:12:24:64:18:B5:97:D0:BF:94:37:2D:5C:33:EA:1E:40
 SHA256: 
15:28:F4:DB:B3:D5:2E:21:6A:2E:56:47:E3:6B:D3:16:96:18:06:96:DA:5D:28:6B:34:CB:6D:FA:E8:FA:85:13
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 4096-bit RSA key
Version: 3

Extensions: 

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
: D8 A1 D1 11 50 8C 1C 2A   67 69 82 40 DF B5 68 6A  P..*g...@..hj
0010: E4 97 6E 32..n2
]
]

#2: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:0
]

#3: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  DigitalSignature
  Key_CertSign
  Crl_Sign
]

#4: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
: E9 5C 42 72 5E 70 D9 02   05 AA 11 BA 0D 4D 8D 0D  .\Br^p...M..
0010: F3 37 2C 95.7,.
]
]


Thanks,
Rajeswari

On 7/13/20, 2:16 PM, "Kevin Risden"  wrote:

>
> In local with just certificate and one domain name  the SSL communication
> worked. With multiple DNS and 2 certificates SSL fails with below 
exception.
>

A client keystore by definition can only have a single certificate. A
server keystore can have multiple certificates. The reason is that a
client can only be identified by a single certificate.

Can you share more details about specifically what your solr.in.sh configs
look like related to keystore/truststore and which files? Specifically
highlight which files have multiple certificates in them.

It looks like for the Solr internal http client, the client keystore has
more than one certificate in it and the error is correct. This is more
strict with recent versions of Jetty 9.4.x. Previously this would silently
fail, but was still incorrect. Now the error is bubbled up so that there are
no silent misconfigurations.

Kevin Risden


    On Mon, Jul 13, 2020 at 4:54 PM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> I looked at the patch mentioned in the JIRA
> https://issues.apache.org/jira/browse/SOLR-14105  reporting the below
> issue. I looked at the solr 8.5.1 code base , I see the patch is applied.
> But still seeing the same  exception with different stack trace. The
> initial exception stack trace was at
>
> at
> 
org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>
>
> Now the exception we encounter is at httpsolrclient creation
>
>
> Caused by: java.lang.RuntimeException:
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only
> supported on Server
>   at
> 
org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:223)
>
> I commented the JIRA also. Let me know if this is still an issue.
>
> Thanks,
> Rajeswari
>
> On 7/13/20, 2:03 AM, "Natarajan, Rajeswari" 
> wrote:
>
> Re-sending to see if anyone encountered  had this combination and
> encountered this issue. In local with just certificate and one domain name
> the SSL communication

Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Natarajan, Rajeswari
I looked at the patch mentioned in the JIRA 
https://issues.apache.org/jira/browse/SOLR-14105 reporting the below issue. I 
looked at the Solr 8.5.1 code base and see the patch is applied. But I am still 
seeing the same exception with a different stack trace. The initial exception 
stack trace was at:

at 
org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)


Now the exception we encounter is at httpsolrclient creation


Caused by: java.lang.RuntimeException: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:223)

I commented the JIRA also. Let me know if this is still an issue.

Thanks,
Rajeswari

On 7/13/20, 2:03 AM, "Natarajan, Rajeswari"  
wrote:

Re-sending to see if anyone has had this combination and encountered this 
issue. Locally, with just one certificate and one domain name, the SSL 
communication worked. With multiple DNS names and 2 certificates, SSL fails with 
the below exception. The JIRA below says it is fixed for Http2SolrClient; wondering 
if this is also fixed for the http1 Solr client, as we pass -Dsolr.http1=true.

Thanks,
Rajeswari

https://issues.apache.org/jira/browse/SOLR-14105
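
For context, a minimal sketch of the relevant solr.in.sh settings for the setup described here (Java 8, SSL, forcing HTTP/1.1). The SOLR_SSL_* values mirror the ones quoted elsewhere in this thread and are illustrative; SOLR_OPTS is the usual place for extra system properties such as the http1 flag:

    SOLR_SSL_ENABLED=true
    SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.p12
    SOLR_SSL_KEY_STORE_PASSWORD=secret
    SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.p12
    SOLR_SSL_TRUST_STORE_PASSWORD=secret
    SOLR_SSL_NEED_CLIENT_AUTH=false
    SOLR_SSL_WANT_CLIENT_AUTH=false
    # keep the internal HTTP clients on HTTP/1.1 on Java 8, as described above
    SOLR_OPTS="$SOLR_OPTS -Dsolr.http1=true"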

On 7/6/20, 10:02 PM, "Natarajan, Rajeswari"  
wrote:

Hi,

We are using Solr 8.5.1 in cloud mode with Java 8. We are enabling 
TLS with http1 (as we get a warning that with Java 8 + Solr 8.5, SSL can't be enabled) 
and we get the below exception:



2020-07-07 03:58:53.078 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
  at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
  at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
  at 
org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
  at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
  at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
  at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
  at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
  at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
  at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
  at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
  at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
  at 
org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
  at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
  at 
org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
  at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
  at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(Abstr

SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Natarajan, Rajeswari
Re-sending to see if anyone has had this combination and encountered 
this issue. Locally, with just one certificate and one domain name, the SSL 
communication worked. With multiple DNS names and 2 certificates, SSL fails with the below 
exception. The JIRA below says it is fixed for Http2SolrClient; wondering if this 
is also fixed for the http1 Solr client, as we pass -Dsolr.http1=true.

Thanks,
Rajeswari

https://issues.apache.org/jira/browse/SOLR-14105

On 7/6/20, 10:02 PM, "Natarajan, Rajeswari"  
wrote:

Hi,

We are using Solr 8.5.1 in cloud mode with Java 8. We are enabling TLS 
with http1 (as we get a warning that with Java 8 + Solr 8.5, SSL can't be enabled) and we 
get the below exception:



2020-07-07 03:58:53.078 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
  at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
  at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
  at 
org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
  at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
  at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
  at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
  at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
  at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
  at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
  at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
  at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
  at 
org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
  at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
  at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
  at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
  at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:599)
  at 
org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:249)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.server.Server.start(Server.java:407)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
  at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:100)
  at org.eclipse.jetty.server.Server.doStart(Server.java:371)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.xml.XmlConfiguration.lambda$main$0(XmlConfiguration.java:1888)
  at java.security.AccessController.doPrivileged(Nat

SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-06 Thread Natarajan, Rajeswari
Hi,

We are using Solr 8.5.1 in cloud mode with Java 8. We are enabling TLS with 
http1 (as we get a warning that with Java 8 + Solr 8.5, SSL can't be enabled) and we get 
the below exception:



2020-07-07 03:58:53.078 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
  at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
  at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
  at 
org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
  at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
  at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
  at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
  at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
  at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
  at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
  at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
  at 
org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
  at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
  at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
  at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
  at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:599)
  at 
org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:249)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.server.Server.start(Server.java:407)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
  at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:100)
  at org.eclipse.jetty.server.Server.doStart(Server.java:371)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.xml.XmlConfiguration.lambda$main$0(XmlConfiguration.java:1888)
  at java.security.AccessController.doPrivileged(Native Method)
  at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1837)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.eclipse.jetty.start.Main.invokeMain(Main.java:218)
  at org.eclipse.jetty.start.Main.start(Main.java:491)
  at org.eclipse.jetty.start.Main.main(Main.java:77)
Caused by: java.lang.RuntimeException: java.lang.UnsupportedOperationException: 
X509ExtendedKeyManager only supported on Server
  at 

Re: How upgrade to Solr 8 impact performance

2020-04-28 Thread Natarajan, Rajeswari
Sorry about the late reply.

The faceted search and free text search query performance degraded for us.

Thanks,
Rajeswari

On 4/23/20, 12:21 AM, "Srinivas Kashyap"  
wrote:

Can you share with details, what performance was degraded?

Thanks,
srinivas
From: Natarajan, Rajeswari 
Sent: 23 April 2020 12:41
To: solr-user@lucene.apache.org
Subject: Re: How upgrade to Solr 8 impact performance

With the same hardware and configuration we also saw performance 
degradation from 7.6 to 8.4.1, which is why we are checking here to see if 
anyone else saw this behavior.

-Rajeswari

On 4/22/20, 7:16 AM, "Paras Lehana" 
mailto:paras.leh...@indiamart.com>> wrote:

Hi Rajeswari,

I can only share my experience of moving from Solr 6 to Solr 8. I suggest
you to move and then reevaluate your performance metrics. To recall another
experience, we moved from Java 8 to 11 for Solr 8.

Please note experiences can differ! :)

On Wed, 22 Apr 2020 at 00:50, Natarajan, Rajeswari <
rajeswari.natara...@sap.com<mailto:rajeswari.natara...@sap.com>> wrote:

> Any other experience from solr 7 to sol8 upgrade performance .Please
> share.
>
> Thanks,
> Rajeswari
>
> On 4/15/20, 4:00 PM, "Paras Lehana" 
mailto:paras.leh...@indiamart.com>> wrote:
>
> In January, we upgraded Solr from version 6 to 8 skipping all versions
> in
> between.
>
> The hardware and Solr configurations were kept the same but we still
> faced
> degradation in response time by 30-50%. We had exceptional Query times
> around 25 ms with Solr 6 and now we are hovering around 36 ms.
>
> Since response times under 50 ms are very good even for Auto-Suggest,
> we
> have not tried any changes regarding this. Nevertheless, you can try
> using
> Caffeine Cache. Looking forward to read community inputs as well.
>
>
>
> On Thu, 16 Apr 2020 at 01:34, ChienHuaWang 
mailto:chien-hua.w...@sap.com>>
> wrote:
>
> > Do anyone have experience to upgrade the application with Solr 7.X
> to 8.X?
> > How's the query performance?
> > Found out a little slower response time from application with Solr8
> based
> > on
> > current measurement, still looking into more detail it.
> > But wondering is any one have similar experience? is that something
> we
> > should expect for Solr 8.X?
> >
> > Please kindly share, thanks.
> >
> > Regards,
> > ChienHua
> >
> >
> >
> > --
> > Sent from:
> 
https://lucene.472066.n3.nabble.com/Solr-User-f472068.html<https://lucene.472066.n3.nabble.com/Solr-User-f472068.html>
> >
>
>
> --
> --
> Regards,
>
> *Paras Lehana* [65871]
> Development Engineer, *Auto-Suggest*,
> IndiaMART InterMESH Ltd,
>
> 11th Floor, Tower 2, Assotech Business Cresterra,
> Plot No. 22, Sector 135, Noida, Uttar Pradesh, India 201305
>
> Mob.: +91-9560911996
> Work: 0120-4056700 | Extn:
> *1196*
>
> --
> *
> *
>
> 
<https://www.facebook.com/IndiaMART/videos/578196442936091/<https://www.facebook.com/IndiaMART/videos/578196442936091>>
>
>

--
--
Regards,

*Paras Lehana* [65871]
Development Engineer, *Auto-Suggest*,
IndiaMART InterMESH Ltd,

11th Floor, Tower 2, Assotech Business Cresterra,
Plot No. 22, Sector 135, Noida, Uttar Pradesh, India 201305

Mob.: +91-9560911996
Work: 0120-4056700 | Extn:
*1196*

--
*
*


<https://www.facebook.com/IndiaMART/videos/578196442936091/<https://www.facebook.com/IndiaMART/videos/578196442936091/>>




Re: How upgrade to Solr 8 impact performance

2020-04-23 Thread Natarajan, Rajeswari
With the same hardware and configuration we also saw performance degradation 
from 7.6 to 8.4.1, which is why we are checking here to see if anyone else saw 
this behavior.

-Rajeswari

On 4/22/20, 7:16 AM, "Paras Lehana"  wrote:

Hi Rajeswari,

I can only share my experience of moving from Solr 6 to Solr 8. I suggest
you to move and then reevaluate your performance metrics. To recall another
experience, we moved from Java 8 to 11 for Solr 8.

Please note experiences can differ! :)

On Wed, 22 Apr 2020 at 00:50, Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Any other experience from solr 7 to sol8 upgrade performance  .Please
> share.
>
> Thanks,
> Rajeswari
>
> On 4/15/20, 4:00 PM, "Paras Lehana"  wrote:
>
> In January, we upgraded Solr from version 6 to 8 skipping all versions
> in
> between.
>
> The hardware and Solr configurations were kept the same but we still
> faced
> degradation in response time by 30-50%. We had exceptional Query times
> around 25 ms with Solr 6 and now we are hovering around 36 ms.
>
> Since response times under 50 ms are very good even for Auto-Suggest,
> we
> have not tried any changes regarding this. Nevertheless, you can try
> using
> Caffeine Cache. Looking forward to read community inputs as well.
>
>
>
> On Thu, 16 Apr 2020 at 01:34, ChienHuaWang 
> wrote:
>
> > Do anyone have experience to upgrade the application with Solr 7.X
> to 8.X?
> > How's the query performance?
> > Found out a little slower response time from application with Solr8
> based
> > on
> > current measurement, still looking into more detail it.
> > But wondering is any one have similar experience? is that something
> we
> > should expect for Solr 8.X?
> >
> > Please kindly share, thanks.
> >
> > Regards,
> > ChienHua
> >
> >
> >
> > --
> > Sent from:
> https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> >
>
>
> --
> --
> Regards,
>
> *Paras Lehana* [65871]
> Development Engineer, *Auto-Suggest*,
> IndiaMART InterMESH Ltd,
>
> 11th Floor, Tower 2, Assotech Business Cresterra,
> Plot No. 22, Sector 135, Noida, Uttar Pradesh, India 201305
>
> Mob.: +91-9560911996
> Work: 0120-4056700 | Extn:
> *1196*
>
> --
> *
> *
>
>  <https://www.facebook.com/IndiaMART/videos/578196442936091/>
>
>

-- 
-- 
Regards,

*Paras Lehana* [65871]
Development Engineer, *Auto-Suggest*,
IndiaMART InterMESH Ltd,

11th Floor, Tower 2, Assotech Business Cresterra,
Plot No. 22, Sector 135, Noida, Uttar Pradesh, India 201305

Mob.: +91-9560911996
Work: 0120-4056700 | Extn:
*1196*

-- 
*
*

 <https://www.facebook.com/IndiaMART/videos/578196442936091/>



Re: How upgrade to Solr 8 impact performance

2020-04-21 Thread Natarajan, Rajeswari
Any other experience with the Solr 7 to Solr 8 upgrade performance? Please share.

Thanks,
Rajeswari

On 4/15/20, 4:00 PM, "Paras Lehana"  wrote:

In January, we upgraded Solr from version 6 to 8 skipping all versions in
between.

The hardware and Solr configurations were kept the same but we still faced
degradation in response time by 30-50%. We had exceptional Query times
around 25 ms with Solr 6 and now we are hovering around 36 ms.

Since response times under 50 ms are very good even for Auto-Suggest, we
have not tried any changes regarding this. Nevertheless, you can try using
Caffeine Cache. Looking forward to read community inputs as well.
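
A minimal sketch of what switching the caches to Caffeine looks like in solrconfig.xml, assuming a Solr 8.x release where solr.CaffeineCache is available; the sizes are illustrative rather than tuned values:

    <query>
      <filterCache class="solr.CaffeineCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>
      <queryResultCache class="solr.CaffeineCache"
                        size="512"
                        initialSize="512"
                        autowarmCount="0"/>
    </query>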



On Thu, 16 Apr 2020 at 01:34, ChienHuaWang  wrote:

> Do anyone have experience to upgrade the application with Solr 7.X to 8.X?
> How's the query performance?
> Found out a little slower response time from application with Solr8 based
> on
> current measurement, still looking into more detail it.
> But wondering is any one have similar experience? is that something we
> should expect for Solr 8.X?
>
> Please kindly share, thanks.
>
> Regards,
> ChienHua
>
>
>
> --
> Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


-- 
-- 
Regards,

*Paras Lehana* [65871]
Development Engineer, *Auto-Suggest*,
IndiaMART InterMESH Ltd,

11th Floor, Tower 2, Assotech Business Cresterra,
Plot No. 22, Sector 135, Noida, Uttar Pradesh, India 201305

Mob.: +91-9560911996
Work: 0120-4056700 | Extn:
*1196*

-- 
*
*

 



Re: Can I create 1000 cores in SOLR CLOUD

2020-01-29 Thread Natarajan, Rajeswari
Good to know Shawn.

Thanks,
Rajeswari

On 1/29/20, 12:52 PM, "Shawn Heisey"  wrote:

On 1/27/2020 4:59 AM, Vignan Malyala wrote:
> We are currently using solr without cloud with 500 cores. It works good.
> 
> Now we are planning to expand it using solr cloud with 1000 cores, (2 
cores
> for each of my client with different domain data).

SolrCloud starts having scalability issues once you reach a few hundred 
collections, regardless of how many servers are in the cloud.

I explored this in SOLR-7191.  You might notice that the issue is in a 
"Resolved/Fixed" state ... but there were no changes committed, and when 
I tested it again with a later version, I saw evidence that the 
situation has gotten worse, not better.

https://issues.apache.org/jira/browse/SOLR-7191

If you already have mechanisms in place to handle high availability, you 
would be far better off NOT using SolrCloud mode.

Thanks,
Shawn




Re: [CAUTION] Converting graph query to stream graph query

2019-10-15 Thread Natarajan, Rajeswari
I need to gather all the children of docid 1. The root item has parent as null. 
(Sample data below.)

Tried as below

nodes(graphtest,
  walk="1->parent",
  gather="docid",
  scatter="branches, leaves")
  
Response :
{
  "result-set": {
"docs": [
  {
"node": "1",
"collection": "graphtest,",
"field": "node",
"level": 0
  },
  {
"EOF": true,
"RESPONSE_TIME": 5
  }
]
  }
}

The query just gets the root item and not its children. It looks like I am missing 
something obvious. Any pointers, please?

As I said earlier, the below graph query gets all the children of docid 1:

fq={!graph from=parent to=docid}docid:"1" 

Thanks,
Rajeswari
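
A sketch of the equivalent streaming traversal, assuming the collection is really named graphtest (with no trailing comma) and the docid/parent fields from the sample data. Unlike the {!graph} filter query, which walks to arbitrary depth, nodes() walks one level per call, so each extra level of children needs another nested nodes():

    First-level children of docid 1:

      nodes(graphtest,
            walk="1->parent",
            gather="docid")

    First- and second-level children, emitting both branches and leaves:

      nodes(graphtest,
            nodes(graphtest,
                  walk="1->parent",
                  gather="docid"),
            walk="node->parent",
            gather="docid",
            scatter="branches, leaves")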



On 10/15/19, 12:04 PM, "Natarajan, Rajeswari"  
wrote:

Hi,


curl -XPOST -H 'Content-Type: application/json' 
'http://localhost:8983/solr/ggg/update' --data-binary '{
"add" : { "doc" : { "id" : "a", "docid" : "1", "name" : "Root document one" 
} },
"add" : { "doc" : { "id" : "b", "docid" : "2", "name" : "Root document two" 
} },
"add" : { "doc" : {  "id" : "c", "docid" : "3", "name" : "Root document 
three" } },
"add" : { "doc" : {  "id" : "d", "docid" : "11", "parent" : "1", "name" : 
"First level document 1, child one" } },
"add" : { "doc" : {  "id" : "e", "docid" : "12", "parent" : "1", "name" : 
"First level document 1, child two" } },
"add" : { "doc" : {  "id" : "f", "docid" : "13", "parent" : "1", "name" : 
"First level document 1, child three" } },
"add" : { "doc" : {  "id" : "g", "docid" : "21", "parent" : "2", "name" : 
"First level document 2, child one" } },
"add" : { "doc" : {  "id" : "h", "docid" : "22", "parent" : "2", "name" : 
"First level document 2, child two" } },
"add" : { "doc" : {  "id" : "j", "docid" : "121", "parent" : "12", "name" : 
"Second level document 12, child one" } },
"add" : { "doc" : {  "id" : "k", "docid" : "122", "parent" : "12", "name" : 
"Second level document 12, child two" } },
"add" : { "doc" : {  "id" : "l", "docid" : "131", "parent" : "13", "name" : 
"Second level document 13, child three" } },
"commit" : {}
}'


For the above data, the below query gets all the children of the document with 
docid 1:

http://localhost:8983/solr/graphtest/select?q=*:*&fq={!graph%20from=parent%20to=docid}docid:"1"


How can I convert this query into streaming graph query with nodes 
expression.

Thanks,
Rajeswari





Converting graph query to stream graph query

2019-10-15 Thread Natarajan, Rajeswari
Hi,


curl -XPOST -H 'Content-Type: application/json' 
'http://localhost:8983/solr/ggg/update' --data-binary '{
"add" : { "doc" : { "id" : "a", "docid" : "1", "name" : "Root document one" } },
"add" : { "doc" : { "id" : "b", "docid" : "2", "name" : "Root document two" } },
"add" : { "doc" : {  "id" : "c", "docid" : "3", "name" : "Root document three" 
} },
"add" : { "doc" : {  "id" : "d", "docid" : "11", "parent" : "1", "name" : 
"First level document 1, child one" } },
"add" : { "doc" : {  "id" : "e", "docid" : "12", "parent" : "1", "name" : 
"First level document 1, child two" } },
"add" : { "doc" : {  "id" : "f", "docid" : "13", "parent" : "1", "name" : 
"First level document 1, child three" } },
"add" : { "doc" : {  "id" : "g", "docid" : "21", "parent" : "2", "name" : 
"First level document 2, child one" } },
"add" : { "doc" : {  "id" : "h", "docid" : "22", "parent" : "2", "name" : 
"First level document 2, child two" } },
"add" : { "doc" : {  "id" : "j", "docid" : "121", "parent" : "12", "name" : 
"Second level document 12, child one" } },
"add" : { "doc" : {  "id" : "k", "docid" : "122", "parent" : "12", "name" : 
"Second level document 12, child two" } },
"add" : { "doc" : {  "id" : "l", "docid" : "131", "parent" : "13", "name" : 
"Second level document 13, child three" } },
"commit" : {}
}'


For the above data, the below query gets all the children of the document with 
docid 1:

http://localhost:8983/solr/graphtest/select?q=*:*&fq={!graph%20from=parent%20to=docid}docid:"1"


How can I convert this query into a streaming graph query using a nodes expression?

Thanks,
Rajeswari



Re: [CAUTION] Re: Solr 7.7 restore issue

2019-10-08 Thread Natarajan, Rajeswari
It looks like the rule created before was wrong.

From the solr documentation below
https://lucene.apache.org/solr/guide/7_6/rule-based-replica-placement.html

For a given shard, keep less than 2 replicas on any node
For this rule, we use the shard condition to define any shard, the replica 
condition with operators for "less than 2", and finally a pre-defined tag named 
node to define nodes with any name.

shard:*,replica:<2,node:*

The above rule works fine with the restore.
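
For reference, a sketch of how such a rule is attached when creating a collection with the rule-based replica placement syntax (collection and configset names are placeholders; whether RESTORE accepts the same rule parameter should be verified against your version):

curl -G 'http://localhost:8983/solr/admin/collections' \
     --data-urlencode 'action=CREATE' \
     --data-urlencode 'name=test' \
     --data-urlencode 'collection.configName=test' \
     --data-urlencode 'numShards=2' \
     --data-urlencode 'replicationFactor=2' \
     --data-urlencode 'rule=shard:*,replica:<2,node:*'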

Thanks,
Rajeswari

On 10/8/19, 9:34 PM, "Natarajan, Rajeswari"  
wrote:

I am also facing the same issue. With Solr 7.6 restore fails with below 
rule. Would like to place one replica per node by below rule

 with the rule to place one replica per node
"set-cluster-policy": [{
"replica": "<2",
"shard": "#EACH",
"node": "#ANY"
}]

Without the rule the restore works. But we need this rule. Any suggestions 
to overcome this issue. 

Thanks,
Rajeswari

On 7/12/19, 11:00 AM, "Mark Thill"  wrote:

I have a 4 node cluster.  My goal is to have 2 shards with two replicas
each and only allowing 1 core on each node.  I have a cluster policy 
set to:

[{"replica":"2", "shard": "#EACH", "collection":"test",
"port":"8983"},{"cores":"1", "node":"#ANY"}]

I then manually create a collection with:

name: test
config set: test
numShards: 2
replicationFact: 2

This works and I get a collection that looks like what I expect.  I then
backup this collection.  But when I try to restore the collection it 
fails
and says

"Error getting replica locations : No node can satisfy the rules"
[{"replica":"2", "shard": "#EACH", "collection":"test",
"port":"8983"},{"cores":"1", "node":"#ANY"}]

If I set my cluster-policy rules back to [] and try to restore it then
successfully restores my collection exactly how I expect it to be.  It
appears that having any cluster-policy rules in place is affecting my
restore, but the "error getting replica locations" is strange.

Any suggestions?

mark 






Re: Solr 7.7 restore issue

2019-10-08 Thread Natarajan, Rajeswari
I am also facing the same issue. With Solr 7.6, restore fails with the below rule, 
which is intended to place one replica per node:
"set-cluster-policy": [{
"replica": "<2",
"shard": "#EACH",
"node": "#ANY"
}]

Without the rule the restore works. But we need this rule. Any suggestions to 
overcome this issue. 
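
For readers following along, the policy above is normally applied through the Autoscaling API; a minimal sketch (v1 endpoint per the 7.x Ref Guide, localhost URL is a placeholder):

curl -X POST -H 'Content-Type: application/json' \
     'http://localhost:8983/solr/admin/autoscaling' \
     --data-binary '{
       "set-cluster-policy": [
         {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
       ]
     }'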

Thanks,
Rajeswari

On 7/12/19, 11:00 AM, "Mark Thill"  wrote:

I have a 4 node cluster.  My goal is to have 2 shards with two replicas
each and only allowing 1 core on each node.  I have a cluster policy set to:

[{"replica":"2", "shard": "#EACH", "collection":"test",
"port":"8983"},{"cores":"1", "node":"#ANY"}]

I then manually create a collection with:

name: test
config set: test
numShards: 2
replicationFact: 2

This works and I get a collection that looks like what I expect.  I then
backup this collection.  But when I try to restore the collection it fails
and says

"Error getting replica locations : No node can satisfy the rules"
[{"replica":"2", "shard": "#EACH", "collection":"test",
"port":"8983"},{"cores":"1", "node":"#ANY"}]

If I set my cluster-policy rules back to [] and try to restore it then
successfully restores my collection exactly how I expect it to be.  It
appears that having any cluster-policy rules in place is affecting my
restore, but the "error getting replica locations" is strange.

Any suggestions?

mark 




Re: [CAUTION] Re: [CAUTION] Re: CDCR Queues API invocation with CloudSolrclient

2019-07-25 Thread Natarajan, Rajeswari
I tried Shawn's suggestion to use a SolrQuery object instead of the qt parameter, 
but it is still the same issue.
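
For readers following along, a minimal sketch of what that suggestion looks like in SolrJ (the collection name is a placeholder and the action value is passed as the literal string "QUEUES" rather than CdcrParams.QUEUES; as noted above, this did not change the behavior here):

import java.io.IOException;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CommonParams;

public class CdcrQueuesQuery {
  // Ask the /cdcr handler for QUEUES statistics via setRequestHandler instead of qt.
  static QueryResponse getCdcrQueues(CloudSolrClient client, String collection)
      throws SolrServerException, IOException {
    SolrQuery query = new SolrQuery();
    query.setRequestHandler("/cdcr");
    query.set(CommonParams.ACTION, "QUEUES");
    return client.query(collection, query);
  }
}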

Regards,
Rajeswari

On 7/24/19, 4:54 PM, "Natarajan, Rajeswari"  
wrote:

Please look at the below test  which tests CDCR OPS Api. This has 
"BadApple" annotation (meaning the test fails intermittently)

https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrOpsAndBoundariesTest.java#L73
This also  is because of  sometimes the Cloudsolrclient gets the value and 
sometimes not. This OPS api also needs to talk to core. OK indeed this issue 
looks like a bug

Thanks,
Rajeswari

On 7/24/19, 4:18 PM, "Natarajan, Rajeswari"  
wrote:

Btw , the code is copied from solr 7.6 source code.

Thanks,
Rajeswari

On 7/24/19, 4:12 PM, "Natarajan, Rajeswari" 
 wrote:

Thanks Shawn for the reply. I am not saying it is bug. I just would 
like to know how to get the "lastTimestamp" by invoking CluodSolrClient 
reliabily.

Regards,
Rajeswari

On 7/24/19, 3:14 PM, "Shawn Heisey"  wrote:
    
On 7/24/2019 3:50 PM, Natarajan, Rajeswari wrote:
> Hi,
> 
> With the below API , the QueryResponse , sometimes have the 
"lastTimestamp" , sometimes not.
> protected static QueryResponse getCdcrQueue(CloudSolrClient 
client) throws SolrServerException, IOException {
>  ModifiableSolrParams params = new ModifiableSolrParams();
>  params.set(CommonParams.QT, "/cdcr");
>  params.set(CommonParams.ACTION, CdcrParams.QUEUES);
>  return client.query(params);
>}

Side note:  Setting the handler path with the qt parameter was 
deprecated in Solr 3.6, which was released seven years ago.  
I'm 
surprised it even still works.

Use a SolrQuery object instead of ModifiableSolrParams, and 
call its 
setRequestHandler method to set the request handler.

> Invoking 
http://:/solr//cdcr?action=QUEUES  has the same 
issue
> 
> But if invoked as 
http://:/solr//cdcr?action=QUEUES always gets the " 
lastTimestamp" value. Would like to know
> How to get the cdcr queues always return " lastTimestamp" 
value reliabily by CloudSolrClient.

This part I really have no idea about.  The API documentation 
does say 
that monitoring actions are done at the core level and control 
actions 
are done at the collection level, so this might not be 
considered a bug. 
  Someone who knows CDCR really well will need to comment.

https://lucene.apache.org/solr/guide/8_1/cdcr-api.html

Thanks,
Shawn










Re: [CAUTION] Re: CDCR Queues API invocation with CloudSolrclient

2019-07-24 Thread Natarajan, Rajeswari
Please look at the below test, which exercises the CDCR OPS API. It carries the "BadApple" 
annotation (meaning the test fails intermittently):
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrOpsAndBoundariesTest.java#L73
This is also because the CloudSolrClient sometimes gets the value and sometimes 
does not. The OPS API also needs to talk to a core. So indeed this looks like a bug.

Thanks,
Rajeswari

On 7/24/19, 4:18 PM, "Natarajan, Rajeswari"  
wrote:

Btw , the code is copied from solr 7.6 source code.

Thanks,
Rajeswari

On 7/24/19, 4:12 PM, "Natarajan, Rajeswari"  
wrote:

Thanks Shawn for the reply. I am not saying it is bug. I just would 
like to know how to get the "lastTimestamp" by invoking CluodSolrClient 
reliabily.

Regards,
Rajeswari

On 7/24/19, 3:14 PM, "Shawn Heisey"  wrote:
    
On 7/24/2019 3:50 PM, Natarajan, Rajeswari wrote:
> Hi,
> 
> With the below API , the QueryResponse , sometimes have the 
"lastTimestamp" , sometimes not.
> protected static QueryResponse getCdcrQueue(CloudSolrClient 
client) throws SolrServerException, IOException {
>  ModifiableSolrParams params = new ModifiableSolrParams();
>  params.set(CommonParams.QT, "/cdcr");
>  params.set(CommonParams.ACTION, CdcrParams.QUEUES);
>  return client.query(params);
>}

Side note:  Setting the handler path with the qt parameter was 
deprecated in Solr 3.6, which was released seven years ago.  I'm 
surprised it even still works.

Use a SolrQuery object instead of ModifiableSolrParams, and call 
its 
setRequestHandler method to set the request handler.

> Invoking 
http://:/solr//cdcr?action=QUEUES  has the same 
issue
> 
> But if invoked as 
http://:/solr//cdcr?action=QUEUES always gets the " 
lastTimestamp" value. Would like to know
> How to get the cdcr queues always return " lastTimestamp" value 
reliabily by CloudSolrClient.

This part I really have no idea about.  The API documentation does 
say 
that monitoring actions are done at the core level and control 
actions 
are done at the collection level, so this might not be considered a 
bug. 
  Someone who knows CDCR really well will need to comment.

https://lucene.apache.org/solr/guide/8_1/cdcr-api.html

Thanks,
Shawn








Re: CDCR Queues API invocation with CloudSolrclient

2019-07-24 Thread Natarajan, Rajeswari
Btw, the code is copied from the Solr 7.6 source code.

Thanks,
Rajeswari

On 7/24/19, 4:12 PM, "Natarajan, Rajeswari"  
wrote:

Thanks Shawn for the reply. I am not saying it is bug. I just would like to 
know how to get the "lastTimestamp" by invoking CluodSolrClient reliabily.

Regards,
Rajeswari

On 7/24/19, 3:14 PM, "Shawn Heisey"  wrote:

On 7/24/2019 3:50 PM, Natarajan, Rajeswari wrote:
> Hi,
> 
> With the below API , the QueryResponse , sometimes have the 
"lastTimestamp" , sometimes not.
> protected static QueryResponse getCdcrQueue(CloudSolrClient client) 
throws SolrServerException, IOException {
>  ModifiableSolrParams params = new ModifiableSolrParams();
>  params.set(CommonParams.QT, "/cdcr");
>  params.set(CommonParams.ACTION, CdcrParams.QUEUES);
>  return client.query(params);
>}

Side note:  Setting the handler path with the qt parameter was 
deprecated in Solr 3.6, which was released seven years ago.  I'm 
surprised it even still works.

Use a SolrQuery object instead of ModifiableSolrParams, and call its 
setRequestHandler method to set the request handler.

> Invoking 
http://:/solr//cdcr?action=QUEUES  has the same 
issue
> 
> But if invoked as 
http://:/solr//cdcr?action=QUEUES always gets the " 
lastTimestamp" value. Would like to know
> How to get the cdcr queues always return " lastTimestamp" value 
reliabily by CloudSolrClient.

This part I really have no idea about.  The API documentation does say 
that monitoring actions are done at the core level and control actions 
are done at the collection level, so this might not be considered a 
bug. 
  Someone who knows CDCR really well will need to comment.

https://lucene.apache.org/solr/guide/8_1/cdcr-api.html

Thanks,
Shawn






Re: CDCR Queues API invocation with CloudSolrclient

2019-07-24 Thread Natarajan, Rajeswari
Thanks, Shawn, for the reply. I am not saying it is a bug. I just would like to 
know how to reliably get the "lastTimestamp" when invoking CloudSolrClient.

Regards,
Rajeswari

On 7/24/19, 3:14 PM, "Shawn Heisey"  wrote:

On 7/24/2019 3:50 PM, Natarajan, Rajeswari wrote:
> Hi,
> 
> With the below API , the QueryResponse , sometimes have the 
"lastTimestamp" , sometimes not.
> protected static QueryResponse getCdcrQueue(CloudSolrClient client) 
throws SolrServerException, IOException {
>  ModifiableSolrParams params = new ModifiableSolrParams();
>  params.set(CommonParams.QT, "/cdcr");
>  params.set(CommonParams.ACTION, CdcrParams.QUEUES);
>  return client.query(params);
>}

Side note:  Setting the handler path with the qt parameter was 
deprecated in Solr 3.6, which was released seven years ago.  I'm 
surprised it even still works.

Use a SolrQuery object instead of ModifiableSolrParams, and call its 
setRequestHandler method to set the request handler.

> Invoking 
http://:/solr//cdcr?action=QUEUES  has the same 
issue
> 
> But if invoked as 
http://:/solr//cdcr?action=QUEUES always gets the " 
lastTimestamp" value. Would like to know
> How to get the cdcr queues always return " lastTimestamp" value reliabily 
by CloudSolrClient.

This part I really have no idea about.  The API documentation does say 
that monitoring actions are done at the core level and control actions 
are done at the collection level, so this might not be considered a bug. 
  Someone who knows CDCR really well will need to comment.

https://lucene.apache.org/solr/guide/8_1/cdcr-api.html

Thanks,
Shawn




CDCR Queues API invocation with CloudSolrclient

2019-07-24 Thread Natarajan, Rajeswari
Hi,

With the below API, the QueryResponse sometimes has the "lastTimestamp" and 
sometimes not.
protected static QueryResponse getCdcrQueue(CloudSolrClient client) throws 
SolrServerException, IOException {
ModifiableSolrParams params = new ModifiableSolrParams();
params.set(CommonParams.QT, "/cdcr");
params.set(CommonParams.ACTION, CdcrParams.QUEUES);
return client.query(params);
  }

Invoking http://<host>:<port>/solr/<collection>/cdcr?action=QUEUES has 
the same issue.

But if invoked as http://<host>:<port>/solr/<core>/cdcr?action=QUEUES, it 
always gets the "lastTimestamp" value. I would like to know
how to get the CDCR QUEUES call to always return the "lastTimestamp" value reliably via 
CloudSolrClient.

Thank you,
Rajeswari
 



Re: [CAUTION] CDCR Monitoring - To figure out the latency between source and target replication delay

2019-06-18 Thread Natarajan, Rajeswari
I see the below in the CDCR Queues API documentation:

The output is composed of a list “queues” which contains a list of (ZooKeeper) 
Target hosts, themselves containing a list of Target collections. For each 
collection, the current size of the queue and the timestamp of the last update 
operation successfully processed is provided. The timestamp of the update 
operation is the original timestamp, i.e., the time this operation was 
processed on the Source SolrCloud. This allows an estimate of the latency of the 
replication process.

Given only the timestamp of the update operation on the source SolrCloud, how 
does it help to figure out the latency of replication? Can someone please 
explain; am I missing something obvious? We want to generate an alert if there 
is a large latency and are looking to see how this can be done.
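
One reading of the documentation quoted above: since lastTimestamp is the source-side time of the last update the target has processed, the lag can be approximated as "now minus lastTimestamp". A minimal sketch, assuming the ISO-8601 timestamp format shown in the Ref Guide example output (caveat: with no new updates flowing, this difference keeps growing even though the target is fully caught up):

import java.time.Duration;
import java.time.Instant;

public class CdcrLag {
  // Approximate replication lag from the QUEUES "lastTimestamp" value.
  static Duration estimateLag(String lastTimestamp) {
    return Duration.between(Instant.parse(lastTimestamp), Instant.now());
  }

  public static void main(String[] args) {
    Duration lag = estimateLag("2019-06-18T10:32:15.879Z"); // example value only
    System.out.println("Approximate CDCR lag: " + lag.getSeconds() + "s");
    // An alert could fire when lag.getSeconds() exceeds a chosen threshold.
  }
}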

Thank you.
Rajeswari

On 5/30/19, 9:47 AM, "Natarajan, Rajeswari"  
wrote:

Hi,

Is there a way to  monitor the replication delay between Primary/Secondary 
Cluster for CDCR  and raise alerts ,if it exceeds above some threshold.

I see below API’s for monitoring.

· core/cdcr?action=QUEUES: Fetches statistics about the queue <https://lucene.apache.org/solr/guide/7_6/cdcr-api.html#queues> for each replica and about the update logs.
· core/cdcr?action=OPS: Fetches statistics about the replication performance <https://lucene.apache.org/solr/guide/7_6/cdcr-api.html#ops> (operations per second) for each replica.
· core/cdcr?action=ERRORS: Fetches statistics and other information about replication errors <https://lucene.apache.org/solr/guide/7_6/cdcr-api.html#errors> for each replica.

These report the stats, performance and errors.
Thanks,
Rajeswari





Re: bi-directional CDCR

2019-06-18 Thread Natarajan, Rajeswari
We are using bidirectional CDCR with Solr 7.6 and it works for us. Did you look 
at the logs to see if there are any errors?

"Both Cluster 1 and Cluster 2 can act as Source and Target at any given
point of time but a cluster cannot be both Source and Target at the same
time."

The above means the publishing can take place on one cluster only at any point. 
Publishing cannot happen simultaneously on both clusters.

Hope this helps
Rajeswari

On 6/11/19, 7:13 PM, "Susheel Kumar"  wrote:

Hello,

What does that mean by below.  How do we set which cluster will act as
source or target at a time?

Both Cluster 1 and Cluster 2 can act as Source and Target at any given
point of time but a cluster cannot be both Source and Target at the same
time.
Also following the directions mentioned in this page doesn't make cdcr
works. No data flows from cluster 1  to cluster 2. The Solr 7.7.1.  Is
there something missing.

https://lucene.apache.org/solr/guide/7_7/cdcr-config.html#bi-directional-updates




CDCR Monitoring

2019-05-30 Thread Natarajan, Rajeswari
Hi,

Is there a way to monitor the replication delay between the primary/secondary 
clusters for CDCR and raise alerts if it exceeds some threshold?

I see below API’s for monitoring.

· core/cdcr?action=QUEUES: Fetches statistics about the queue for each replica and about the update logs.
· core/cdcr?action=OPS: Fetches statistics about the replication performance (operations per second) for each replica.
· core/cdcr?action=ERRORS: Fetches statistics and other information about replication errors for each replica.

These report the stats, performance and errors.
Thanks,
Rajeswari



Re: Unable to run solr | SolrCore Initialization Failures {{Core}}: {{error}}

2019-05-23 Thread Natarajan, Rajeswari
Please check whether ZooKeeper is installed before installing SolrCloud, in case 
you are not running the embedded ZooKeeper.
Hope it helps.

Regards,
Rajeswari

From: Karthic Viswanathan 
Reply-To: "solr-user@lucene.apache.org" 
Date: Wednesday, May 22, 2019 at 10:37 PM
To: "solr-user@lucene.apache.org" 
Subject: Unable to run solr | SolrCore Initialization Failures {{Core}}: 
{{error}}


Hi,
I am trying to install Solr on my Windows Server 2016 Standard edition. 
While the installation of Solr itself succeeds, I am not able to get it running.
Every time, after installation and starting the service, I get:
 “SolrCore Initialization Failures {{Core}}: {{error}}”

I am not sure what the error is since it is not very clear. Also, the log files 
are all empty. It has just a few warnings.  I have attached them for reference. 
Solr is a requirement for installing Sitecore CMS and I am not able to proceed 
any further.  Any help on this would be greatly appreciated.


I have this same error with solr 7.2.1, 6.6.2.
I tried running this with both nssm 2.4 and nssm 2.24 pre.
I have jre 1.8.0_211 installed.

--
Regards,
Karthic Viswanathan


[solr.png]



[log.png]




Re: [CDCR]Unable to locate core

2019-05-19 Thread Natarajan, Rajeswari
Thanks Amrith. Created a bug
https://issues.apache.org/jira/browse/SOLR-13481

Regards,
Rajeswari

On 5/19/19, 3:44 PM, "Amrit Sarkar"  wrote:

Sounds legit to me.

Can you create a Jira and list down the problem statement and design
solution there. I am confident it will attract committers' attention and
they can review the design and provide feedback.

Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
Medium: https://medium.com/@sarkaramrit2


On Mon, May 20, 2019 at 3:59 AM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Thanks Amrith for creating a patch. But the code in the
> LBHttpSolrClient.java needs to be fixed too, if the for loop  to work as
> intended.
> Regards
> Rajeswari
>
> public Rsp request(Req req) throws SolrServerException, IOException {
> Rsp rsp = new Rsp();
> Exception ex = null;
> boolean isNonRetryable = req.request instanceof IsUpdateRequest ||
> ADMIN_PATHS.contains(req.request.getPath());
> List skipped = null;
>
> final Integer numServersToTry = req.getNumServersToTry();
> int numServersTried = 0;
>
> boolean timeAllowedExceeded = false;
> long timeAllowedNano = getTimeAllowedInNanos(req.getRequest());
> long timeOutTime = System.nanoTime() + timeAllowedNano;
> for (String serverStr : req.getServers()) {
>   if (timeAllowedExceeded = isTimeExceeded(timeAllowedNano,
> timeOutTime)) {
> break;
>   }
>
>   serverStr = normalize(serverStr);
>   // if the server is currently a zombie, just skip to the next one
>   ServerWrapper wrapper = zombieServers.get(serverStr);
>   if (wrapper != null) {
> // System.out.println("ZOMBIE SERVER QUERIED: " + serverStr);
> final int numDeadServersToTry = req.getNumDeadServersToTry();
> if (numDeadServersToTry > 0) {
>   if (skipped == null) {
> skipped = new ArrayList<>(numDeadServersToTry);
> skipped.add(wrapper);
>   }
>   else if (skipped.size() < numDeadServersToTry) {
> skipped.add(wrapper);
>   }
> }
> continue;
>   }
>   try {
> MDC.put("LBHttpSolrClient.url", serverStr);
>
> if (numServersToTry != null && numServersTried >
> numServersToTry.intValue()) {
>   break;
> }
>
> HttpSolrClient client = makeSolrClient(serverStr);
>
> ++numServersTried;
> ex = doRequest(client, req, rsp, isNonRetryable, false, null);
> if (ex == null) {
>   return rsp; // SUCCESS
> }
>   } finally {
> MDC.remove("LBHttpSolrClient.url");
>   }
> }
>
> // try the servers we previously skipped
> if (skipped != null) {
>   for (ServerWrapper wrapper : skipped) {
> if (timeAllowedExceeded = isTimeExceeded(timeAllowedNano,
> timeOutTime)) {
>   break;
> }
>
> if (numServersToTry != null && numServersTried >
> numServersToTry.intValue()) {
>   break;
> }
>
> try {
>   MDC.put("LBHttpSolrClient.url", wrapper.client.getBaseURL());
>   ++numServersTried;
>   ex = doRequest(wrapper.client, req, rsp, isNonRetryable, true,
> wrapper.getKey());
>   if (ex == null) {
> return rsp; // SUCCESS
>   }
> } finally {
>   MDC.remove("LBHttpSolrClient.url");
> }
>   }
> }
>
>
> final String solrServerExceptionMessage;
> if (timeAllowedExceeded) {
>   solrServerExceptionMessage = "Time allowed to handle this request
> exceeded";
> } else {
>   if (numServersToTry != null && numServersTried >
> numServersToTry.intValue()) {
> solrServerExceptionMessage = "No live SolrServers available to
> handle this request:"
> + " numServersTried="+numServersTried
> + " numServersToTry="+numServersToTry.intValue();
>   } else {
> solrServerExce

Re: [CDCR]Unable to locate core

2019-05-19 Thread Natarajan, Rajeswari
Thanks, Amrith, for creating a patch. But the code in LBHttpSolrClient.java 
needs to be fixed too for the for loop to work as intended.
Regards
Rajeswari

public Rsp request(Req req) throws SolrServerException, IOException {
Rsp rsp = new Rsp();
Exception ex = null;
boolean isNonRetryable = req.request instanceof IsUpdateRequest || 
ADMIN_PATHS.contains(req.request.getPath());
List<ServerWrapper> skipped = null;

final Integer numServersToTry = req.getNumServersToTry();
int numServersTried = 0;

boolean timeAllowedExceeded = false;
long timeAllowedNano = getTimeAllowedInNanos(req.getRequest());
long timeOutTime = System.nanoTime() + timeAllowedNano;
for (String serverStr : req.getServers()) {
  if (timeAllowedExceeded = isTimeExceeded(timeAllowedNano, timeOutTime)) {
break;
  }
  
  serverStr = normalize(serverStr);
  // if the server is currently a zombie, just skip to the next one
  ServerWrapper wrapper = zombieServers.get(serverStr);
  if (wrapper != null) {
// System.out.println("ZOMBIE SERVER QUERIED: " + serverStr);
final int numDeadServersToTry = req.getNumDeadServersToTry();
if (numDeadServersToTry > 0) {
  if (skipped == null) {
skipped = new ArrayList<>(numDeadServersToTry);
skipped.add(wrapper);
  }
  else if (skipped.size() < numDeadServersToTry) {
skipped.add(wrapper);
  }
}
continue;
  }
  try {
MDC.put("LBHttpSolrClient.url", serverStr);

if (numServersToTry != null && numServersTried > 
numServersToTry.intValue()) {
  break;
}

HttpSolrClient client = makeSolrClient(serverStr);

++numServersTried;
ex = doRequest(client, req, rsp, isNonRetryable, false, null);
if (ex == null) {
  return rsp; // SUCCESS
}
  } finally {
MDC.remove("LBHttpSolrClient.url");
  }
}

// try the servers we previously skipped
if (skipped != null) {
  for (ServerWrapper wrapper : skipped) {
if (timeAllowedExceeded = isTimeExceeded(timeAllowedNano, timeOutTime)) 
{
  break;
}

if (numServersToTry != null && numServersTried > 
numServersToTry.intValue()) {
  break;
}

try {
  MDC.put("LBHttpSolrClient.url", wrapper.client.getBaseURL());
  ++numServersTried;
  ex = doRequest(wrapper.client, req, rsp, isNonRetryable, true, 
wrapper.getKey());
  if (ex == null) {
return rsp; // SUCCESS
  }
} finally {
  MDC.remove("LBHttpSolrClient.url");
}
  }
}


final String solrServerExceptionMessage;
if (timeAllowedExceeded) {
  solrServerExceptionMessage = "Time allowed to handle this request 
exceeded";
} else {
  if (numServersToTry != null && numServersTried > 
numServersToTry.intValue()) {
solrServerExceptionMessage = "No live SolrServers available to handle 
this request:"
+ " numServersTried="+numServersTried
+ " numServersToTry="+numServersToTry.intValue();
  } else {
solrServerExceptionMessage = "No live SolrServers available to handle 
this request";
  }
}
if (ex == null) {
  throw new SolrServerException(solrServerExceptionMessage);
} else {
  throw new SolrServerException(solrServerExceptionMessage+":" + 
zombieServers.keySet(), ex);
}

  }

On 5/19/19, 3:12 PM, "Amrit Sarkar"  wrote:

>
> Thanks Natrajan,
>
> Solid analysis and I saw the issue being reported by multiple users in
> past few months and unfortunately I baked an incomplete code.
>
> I think the correct way of solving this issue is to identify the correct
> base-url for the respective core we need to trigger REQUESTRECOVERY to and
> create a local HttpSolrClient instead of using CloudSolrClient from
> CdcrReplicatorState. This will avoid unnecessary retry which will be
> redundant in our case.
>
> I baked a small patch few weeks back and will upload it on the SOLR-11724
> .
>




Re: [CDCR]Unable to locate core

2019-05-19 Thread Natarajan, Rajeswari
Here is my close analysis:

The SolrClient request goes to the "request" method below in the class 
LBHttpSolrClient.java. There is a for loop to try the different live servers, 
but when the doRequest method (in the request method below) throws an exception 
there is no catch, so the next retry is not done. To solve this issue, there 
should be a catch around doRequest so that the next iteration retries the 
request on another server (see the sketch after the pasted method below). 
However, if there are many live servers, the request might also time out. This 
needs to be fixed to make CDCR bootstrap work reliably; otherwise it sometimes 
works and sometimes does not. I can work on this patch if this is agreed.


public Rsp request(Req req) throws SolrServerException, IOException {
Rsp rsp = new Rsp();
Exception ex = null;
boolean isNonRetryable = req.request instanceof IsUpdateRequest || 
ADMIN_PATHS.contains(req.request.getPath());
List<ServerWrapper> skipped = null;

final Integer numServersToTry = req.getNumServersToTry();
int numServersTried = 0;

boolean timeAllowedExceeded = false;
long timeAllowedNano = getTimeAllowedInNanos(req.getRequest());
long timeOutTime = System.nanoTime() + timeAllowedNano;
for (String serverStr : req.getServers()) {
  if (timeAllowedExceeded = isTimeExceeded(timeAllowedNano, timeOutTime)) {
break;
  }
  
  serverStr = normalize(serverStr);
  // if the server is currently a zombie, just skip to the next one
  ServerWrapper wrapper = zombieServers.get(serverStr);
  if (wrapper != null) {
// System.out.println("ZOMBIE SERVER QUERIED: " + serverStr);
final int numDeadServersToTry = req.getNumDeadServersToTry();
if (numDeadServersToTry > 0) {
  if (skipped == null) {
skipped = new ArrayList<>(numDeadServersToTry);
skipped.add(wrapper);
  }
  else if (skipped.size() < numDeadServersToTry) {
skipped.add(wrapper);
  }
}
continue;
  }
  try {
MDC.put("LBHttpSolrClient.url", serverStr);

if (numServersToTry != null && numServersTried > 
numServersToTry.intValue()) {
  break;
} 

HttpSolrClient client = makeSolrClient(serverStr);

++numServersTried;
ex = doRequest(client, req, rsp, isNonRetryable, false, null);
if (ex == null) {
  return rsp; // SUCCESS
}
   //NO CATCH HERE ,  SO IT FAILS
  } finally {
MDC.remove("LBHttpSolrClient.url");
  }
}

// try the servers we previously skipped
if (skipped != null) {
  for (ServerWrapper wrapper : skipped) {
if (timeAllowedExceeded = isTimeExceeded(timeAllowedNano, timeOutTime)) 
{
  break;
}

if (numServersToTry != null && numServersTried > 
numServersToTry.intValue()) {
  break;
}

try {
  MDC.put("LBHttpSolrClient.url", wrapper.client.getBaseURL());
  ++numServersTried;
  ex = doRequest(wrapper.client, req, rsp, isNonRetryable, true, 
wrapper.getKey());
  if (ex == null) {
return rsp; // SUCCESS
  }
} finally {
  MDC.remove("LBHttpSolrClient.url");
}
  }
}


final String solrServerExceptionMessage;
if (timeAllowedExceeded) {
  solrServerExceptionMessage = "Time allowed to handle this request 
exceeded";
} else {
  if (numServersToTry != null && numServersTried > 
numServersToTry.intValue()) {
solrServerExceptionMessage = "No live SolrServers available to handle 
this request:"
+ " numServersTried="+numServersTried
+ " numServersToTry="+numServersToTry.intValue();
  } else {
solrServerExceptionMessage = "No live SolrServers available to handle 
this request";
  }
}
if (ex == null) {
  throw new SolrServerException(solrServerExceptionMessage);
} else {
  throw new SolrServerException(solrServerExceptionMessage+":" + 
zombieServers.keySet(), ex);
}

  }
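
To illustrate the fix proposed above (where the "NO CATCH HERE" comment marks the spot), here is a generic sketch of the try-next-server pattern -- not the actual LBHttpSolrClient internals, nor the eventual SOLR-11724 patch: catch the failure from one live server and fall through to the next one instead of letting the exception escape the loop.

import java.util.List;
import java.util.function.Function;

public class TryServersInOrder {

  // Apply the request to each server in turn and return the first successful response.
  static <R> R requestWithFallback(List<String> servers, Function<String, R> doRequest) {
    RuntimeException last = null;
    for (String server : servers) {
      try {
        return doRequest.apply(server);   // SUCCESS: stop at the first server that answers
      } catch (RuntimeException e) {
        last = e;                         // remember the failure and try the next server
      }
    }
    throw new RuntimeException("No live servers could handle this request", last);
  }
}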


Thanks,
Rajeswari


On 5/19/19, 9:39 AM, "Natarajan, Rajeswari"  
wrote:

Hi

We are using solr 7.6 and trying out bidirectional CDCR and I also hit this 
issue. 

Stacktrace

INFO  (cdcr-bootstrap-status-17-thread-1) [   ] 
o.a.s.h.CdcrReplicatorManager CDCR bootstrap successful in 3 seconds
   
INFO  (cdcr-bootstrap-status-17-thread-1) [   ] 
o.a.s.h.CdcrReplicatorManager Create new update log reader for target abcd_ta 
with checkpoint -1 @ abcd_ta:shard1
ERROR (cdcr-bootstrap-status-17-thread-1) [   ] 
o.a.s.h.CdcrReplicatorManager Unable to bootstrap the target collection abcd_ta 
shard: shard1

Re: [CDCR]Unable to locate core

2019-05-19 Thread Natarajan, Rajeswari
Hi

We are using solr 7.6 and trying out bidirectional CDCR and I also hit this 
issue. 

Stacktrace

INFO  (cdcr-bootstrap-status-17-thread-1) [   ] o.a.s.h.CdcrReplicatorManager 
CDCR bootstrap successful in 3 seconds  
 
INFO  (cdcr-bootstrap-status-17-thread-1) [   ] o.a.s.h.CdcrReplicatorManager 
Create new update log reader for target abcd_ta with checkpoint -1 @ 
abcd_ta:shard1
ERROR (cdcr-bootstrap-status-17-thread-1) [   ] o.a.s.h.CdcrReplicatorManager 
Unable to bootstrap the target collection abcd_ta shard: shard1 

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at 
http://10.169.50.182:8983/solr: Unable to locate core kanna_ta_shard1_replica_n1
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]
  at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]
  at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) 
~[solr-solrj-7.6.0.jar:7.6.0 719cde97f84640faa1e3525690d262946571245f - nknize - 2018-12-07 14:47:53]


I stepped through the code

private NamedList sendRequestRecoveryToFollower(SolrClient client, String 
coreName) throws SolrServerException, IOException {
CoreAdminRequest.RequestRecovery recoverRequestCmd = new 
CoreAdminRequest.RequestRecovery();

recoverRequestCmd.setAction(CoreAdminParams.CoreAdminAction.REQUESTRECOVERY);
recoverRequestCmd.setCoreName(coreName);
return client.request(recoverRequestCmd);
  }

 In the above method, the recovery request command is an admin command and it is 
specific to a core. In the SolrClient.request logic, the code gets the live 
servers and executes the command in a loop, but since this is an admin command 
it is non-retriable. Depending on which live server the code picks and where 
the core lies, the recovery request command might succeed or fail. So I think 
there is a problem with this code trying to send the core command to an 
arbitrary live server; I guess the code should find the correct server on which 
the core lies and send the request there.
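
A hedged sketch of that idea (not the actual SOLR-11724 patch): resolve the base URL of the node that hosts the core from the cluster state and send REQUESTRECOVERY directly to it with a plain HttpSolrClient, instead of letting CloudSolrClient pick an arbitrary live node.

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.ZkStateReader;
import org.apache.solr.common.params.CoreAdminParams;

public class DirectRecoveryRequest {

  // Send REQUESTRECOVERY to the node that actually hosts the given core.
  static void requestRecoveryOnOwningNode(CloudSolrClient cloudClient,
                                          String collection, String coreName)
      throws SolrServerException, IOException {
    for (Replica replica : cloudClient.getZkStateReader().getClusterState()
        .getCollection(collection).getReplicas()) {
      if (coreName.equals(replica.getCoreName())) {
        String baseUrl = replica.getStr(ZkStateReader.BASE_URL_PROP); // e.g. http://host:8983/solr
        try (HttpSolrClient direct = new HttpSolrClient.Builder(baseUrl).build()) {
          CoreAdminRequest.RequestRecovery cmd = new CoreAdminRequest.RequestRecovery();
          cmd.setAction(CoreAdminParams.CoreAdminAction.REQUESTRECOVERY);
          cmd.setCoreName(coreName);
          direct.request(cmd);
        }
        return;
      }
    }
  }
}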

Regards,
Rajeswari

On 5/15/19, 10:59 AM, "Natarajan, Rajeswari"  
wrote:

I am also facing this issue. Any resolution found on this issue, Please 
update. Thanks

On 2/7/19, 10:42 AM, "Tim"  wrote:

So it looks like I'm having an issue with this fix:
https://issues.apache.org/jira/browse/SOLR-11724

So I've messed around with this for a while and every time the leader to
leader replica portion works fine. But the Recovery portion 
(implemented as
part of the fix above) fails. 

I've run a few tests and every time the recovery portion kicks off, it 
sends
the recovery command to the node which has the leader for a given 
replica
instead of the follower. 
I've recreated the collection several times so that replicas are on
different nodes with the same results each time. It seems to be assumed 
that
the follower is on the same solr node as the leader. 
 
For example, if s3r10 (shard 3, replica 10) is the leader and is on 
node1,
while the follower s3r8 is on node2, then the core recovery command 
meant
for s3r8 is being sent to node1 instead of node2.





--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html






Re: [CDCR]Unable to locate core

2019-05-15 Thread Natarajan, Rajeswari
I am also facing this issue. Was any resolution found for it? Please update. 
Thanks.

On 2/7/19, 10:42 AM, "Tim"  wrote:

So it looks like I'm having an issue with this fix:
https://issues.apache.org/jira/browse/SOLR-11724

So I've messed around with this for a while and every time the leader to
leader replica portion works fine. But the Recovery portion (implemented as
part of the fix above) fails. 

I've run a few tests and every time the recovery portion kicks off, it sends
the recovery command to the node which has the leader for a given replica
instead of the follower. 
I've recreated the collection several times so that replicas are on
different nodes with the same results each time. It seems to be assumed that
the follower is on the same solr node as the leader. 
 
For example, if s3r10 (shard 3, replica 10) is the leader and is on node1,
while the follower s3r8 is on node2, then the core recovery command meant
for s3r8 is being sent to node1 instead of node2.





--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Analyzer used if the field type has only index type specified

2018-08-30 Thread Natarajan, Rajeswari
Hi ,

In the case of field types that specify only an ‘index’-time analyzer, what 
analyzer will be used at query time?


The example below specifies only an index-time analyzer, so what will be used 
at query time?


<fieldType name="..." class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            catenateWords="0" catenateNumbers="1" catenateAll="0"
            splitOnCaseChange="1" preserveOriginal="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory"
            words="lang/en/stopwords.txt"
            enablePositionIncrements="true"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>




Regards
Rajeswari


CDCR traffic

2018-06-22 Thread Natarajan, Rajeswari
Hi,

Would like to know if the CDCR traffic is encrypted.

Thanks
Ra


Rule based replica placement solr cloud 6.2.1

2018-05-08 Thread Natarajan, Rajeswari
 Hi,

Would like to have the below rule set up in Solr Cloud 6.2.1. Not sure how to model 
this with the default snitch. Any suggestions?

Don’t assign more than 1 replica of this collection to a host.
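
A hedged guess at how this could be expressed with the rule-based replica placement syntax: the implicit snitch exposes a host tag, so by analogy with the "at most one replica per node" example in the Ref Guide, the rule passed at collection creation would be something like the following (please verify against the 6.2 documentation):

rule=replica:<2,host:*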


Regards,
Rajeswari



solr.DictionaryCompoundWordTokenFilterFactory filter and double quotes

2018-02-20 Thread Natarajan, Rajeswari
 Hi,

We have the below field type defined in our schema.xml to support German 
compound word search. This works fine. But even when double quotes are present 
in the search term, it gets split. Is there a way not to split the term when 
double quotes are present in the query with this field type?


 
  
  
  

  
 
 
  


Thanks in Advance,
Rajeswari  



Re: Index size optimization between 4.5.1 and 4.10.4 Solr

2017-12-07 Thread Natarajan, Rajeswari
Thanks a lot for the response. We did not change schema or config. We simply 
opened 4.5 indexes with 4.10 libraries.
Thank you,
Rajeswari

On 12/7/17, 3:17 PM, "Shawn Heisey" <apa...@elyograg.org> wrote:

On 12/7/2017 1:27 PM, Natarajan, Rajeswari wrote:
> We have upgraded solr from 4.5.1 to 4.10.4 and we see index size 
reduction.  Trying to see if any optimization done to decrease the index sizes 
, couldn’t locate.  If anyone knows why please share.

Here's a history where you can see the a summary of the changes in
Lucene's index format in various versions:


https://lucene.apache.org/core/7_1_0/core/org/apache/lucene/codecs/lucene70/package-summary.html#History

Looking over the history, I would guess that the changes mentioned
between 4.5 and 4.10 would make little difference in most indexes, but
for some configurations, might actually *increase* index size slightly. 
Chances are that the change would only happen after performing some kind
of operation on the whole index, though.

Did you do anything other than simply open the 4.5.1 index in 4.10.4
with the same config/schema?  This would include things like running an
optimize operation on the index, running IndexUpgrader on the index,
completely reindexing from scratch rather than using the old index, or
any number of other possibilities.  Operations like those I mentioned
would have eliminated deleted documents from the index, which can result
in a size reduction.  If you changed your schema at all, that can have
an effect on index size -- in either direction.

Thanks,
Shawn





Index size optimization between 4.5.1 and 4.10.4 Solr

2017-12-07 Thread Natarajan, Rajeswari
Hi,

We have upgraded Solr from 4.5.1 to 4.10.4 and we see an index size reduction. 
We tried to see whether any optimization was done to
decrease the index sizes, but couldn’t locate one. If anyone knows why, please share.


Thank you,
Rajeswari


score field returns NaN with distributed Search after Upgrade from 4.5 to 4.10

2017-07-27 Thread Natarajan, Rajeswari
Hi  All,

We are upgrading from Solr/Lucene 4.5.1 to 4.10.4. When testing we found the 
below issue.

The score field in  the query response for a distributed query results in NaN. 
This happens if the indexes were created in 4.5 and the query is received in 
4.10.1.

Also, the score in the explain statement with debugQuery on is the same with 4.5 
or 4.10.

This issue does not happen for queries which involve a single shard. Has anyone 
faced this issue, or does anyone know the cause/remedy of this behavior?


Thank you,
Rajeswari 


The query is just q=*:*&fl=score, with the correct URL.

Part of the Output


NaN
  


LuceneQParser
  

0.003922721 = (MATCH) product of:
  0.07845442 = (MATCH) sum of:
0.07845442 = (MATCH) 
weight(arches_id:PR^test.abcindexadapter^en_US^4^13^ABC-02-docid01-0-ABC-02-docid01-0-ABC02-docid01-0
 in 0) [ABCSimilarity], result of:
  0.07845442 = score(doc=0,freq=1.0 = termFreq=1.0
), product of:
0.22360682 = queryWeight, product of:
  4.912023 = idf(docFreq=1, maxDocs=100)
  0.045522347 = queryNorm
0.35085878 = fieldWeight in 0, product of:
  0.07142857 = (tf = 1.0, Boost = 0, Boost (Binary) = 0, normValue = 
0.0, bmf = 50)
1.0 = termFreq=1.0
  4.912023 = idf(docFreq=1, maxDocs=100)
  0.05 = coord(1/20)




d: org.apache.http.ParseException: Invalid content type: - solr distributed search 4.10.4

2017-05-13 Thread Natarajan, Rajeswari
Hi,

When doing a distributed query from Solr 4.10.4, we are getting the below exception:

org.apache.solr.common.SolrException: org.apache.http.ParseException: Invalid 
content type:

org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)

org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)

ariba.arches.search.ArchesSearcher.invokeSearch(ArchesSearcher.java:306)

ariba.arches.search.ArchesSearcher.search(ArchesSearcher.java:169)

ariba.arches.search.SearchManagerServlet.handleSelect(SearchManagerServlet.java:651)

ariba.arches.search.SearchManagerServlet.service(SearchManagerServlet.java:146)


javax.servlet.http.HttpServlet.service(HttpServlet.java:848)


The query is below:

http://<host>:20042/search/select?q=(*:*)&wt=xml&rows=5&fl=SupplierID,MarketPrice&shards=<host>:20042/search/select/execute/S2-63,<host>:20022/search/select/execute/S1-69


In the code, the below method in SolrCore is used to execute the query:

execute(SolrRequestHandler handler, SolrQueryRequest req, SolrQueryResponse rsp) {


Saw the same issue in https://lists.gt.net/lucene/java-dev/242650.

If we test the distributed query in standalone Solr as below, it works:

http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:8984/solr&indent=true&q=ipod+solr


Any pointers  to resolve this issue please.


Thank you,
Raji



Solr 4.10 and Distributed pivot faceting in Non-Solr cloud mode

2017-04-13 Thread Natarajan, Rajeswari
Hi ,

Would like to know whether Solr 4.10 supports distributed pivot faceting in 
non-SolrCloud mode.

According to the below JIRA, it looks like it was fixed in 4.10. But we use 
Solr in non-cloud mode.

https://issues.apache.org/jira/browse/SOLR-2894


Thank you,
Raji


Adding solr to existing web application

2016-07-15 Thread Natarajan, Rajeswari
Hi,

We have a Spring Boot application and we would like to run Solr in the same 
process as the Spring Boot application. We tried to add SolrDispatchFilter in 
the Spring Boot app. We could add the filter successfully, but then the Solr 
admin panel is not reachable, and there are no errors. We use Solr 6.1.

Has anyone done this successfully? If yes, can you please share the steps?

Thank you,
Rajeswari