Re: Unable to create collection with custom queryParser Plugin

2019-02-10 Thread Erick Erickson
What Jörn said.

Your jar should be nothing but your custom code. Usually I cheat in
IntelliJ and check the box for artifacts that says something like
"only compiled output"

On Sun, Feb 10, 2019 at 10:37 PM Jörn Franke  wrote:
>
> You can put all solr dependencies as provided. They are already on the class 
> path - no need to put them in the fat jar.
>
> > On 11.02.2019 at 05:59, Aroop Ganguly wrote:
> >
> > Thanks Erick!
> >
> > I see. Yes, it is a fat jar produced by the shadowJar process (in the order of MBs).
> > It contains the solrj and solr-core dependencies plus a few more Scala-related 
> > ones.
> > I guess the solr-core dependencies are unavoidable (right?), let me try to 
> > trim the others.
> >
> > Regards
> > Aroop
> >
> >> On Feb 10, 2019, at 8:44 PM, Erick Erickson  
> >> wrote:
> >>
> >> Aroop:
> >>
> >> How big is your custom jar file? The name "test-plugins-aroop-all.jar"
> >> makes me suspicious. It should be very small and should _not_ contain
> >> any of the Solr distribution jar files, just your compiled custom
> >> code. I'm grasping at straws a bit, but it may be that you have the
> >> same jar files from the Solr distro and also included in your custom
> >> jar and it's confusing the classloader. "Very small" here is on the
> >> order of 10K given it does very little. If it's much bigger than, say,
> >> 15K it's a red flag. If you do a "jar -tvf your_custom_jar" there
> >> should be _very_ few classes in it.
> >>
> >> Best,
> >> Erick
> >>
> >> On Sun, Feb 10, 2019 at 8:33 PM Aroop Ganguly
> >>  wrote:
> >>>
> >>> [resending due to bounce warning from the other email]
> >>>
> >>>
> >>> Hi Team
> >>>
> >>> I thought this was simple, but I am just missing something here. Any 
> >>> guidance would be much appreciated.
> >>>
> >>> What have I done so far:
> >>>   1. I have created a custom queryParser (class SamplePluggin extends 
> >>> QParserPlugin { ), which right now does nothing but log an info message, 
> >>> and return a new LuceneQParser() instance with the same parameters.
> >>>   2. I am on solr 7.5 and I have added the path to the jar and 
> >>> referenced the plugin in the following ways in my solrconfig.xml:
> >>>
> >>>   <lib path="..."/>
> >>>   <queryParser name="..." class="com.aroop.plugins.SamplePluggin"/>
> >>>
> >>> Now when I create a collection with this solrconfig, I keep getting this 
> >>> exception stack:
> >>> I have tried debugging the live solr instance and for the life of me, I 
> >>> cannot understand why I am getting this cast exception:
> >>> 2019-02-11 03:57:10.410 ERROR (qtp1594873248-62) [c:cvp2 s:shard1 
> >>> r:core_node2 x:testCollection_shard1_replica_n1] 
> >>> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error 
> >>> CREATEing SolrCore 'testCollection_shard1_replica_n1': Unable to create 
> >>> core [testCollection_shard1_replica_n1] Caused by: class 
> >>> com.aroop.plugins.SamplePluggin
> >>>   at 
> >>> org.apache.solr.core.CoreContainer.create(CoreContainer.java:1087)
> >>>   at 
> >>> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$247(CoreAdminOperation.java:92)
> >>>   at 
> >>> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
> >>>   at 
> >>> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
> >>>   at 
> >>> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
> >>>   at 
> >>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> >>>   at 
> >>> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
> >>>   at 
> >>> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
> >>>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
> >>>   at 
> >>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> >>>   at 
> >>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> >>>   at 
> >>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> >>>   at 
> >>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> >>>   at 
> >>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> >>>   at 
> >>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> >>>   at 
> >>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> >>>   at 
> >>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> >>>   at 
> >>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
> >>>   at 
> >>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> >>>   at 
> >>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)

Re: COLLECTION CREATE and CLUSTERSTATUS changes in SOLR 7.5.0

2019-02-10 Thread Hendrik Haddorp
Do you have something about legacyCloud in your CLUSTERSTATUS response? 
I have "properties":{"legacyCloud":"false"}
In the legacy cloud mode, also called format 1, the state is stored in a 
central clusterstate.json node in ZK, which does not scale well. In the 
modern mode every collection has its own state.json node in ZK. I guess 
there is something mixed up on your system. I would make sure not to use 
the legacy mode.
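
For reference, the cluster property can be checked and set through the
Collections API (a sketch; host and port are placeholders):

    # cluster properties, including legacyCloud when set, show up here
    curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json'

    # explicitly switch the legacy mode off
    curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=false'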


On 11.02.2019 05:57, ramyogi wrote:

I found the reason:
=true. When I create a collection with this parameter I can
see the replica data in the CLUSTERSTATUS API response. Is there anything
wrong with using this in SOLR 7.5.0 when creating a collection?
Please advise.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Re: Solr Size Limitation upto 32 kb limitation

2019-02-10 Thread Jan Høydahl
Please do not cross-post, this thread is for the users mailing list, not dev.

You have got the answer several times already: clean your input data. You 
obviously parse some PDF that contains bad data resulting in one single token 
(word) being >32 KB. Clean your input data either in your application or with an 
Update Processor or TokenFilter in Solr.
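
For illustration, one way to do that cleanup inside Solr is an update
processor chain that truncates oversized field values before indexing
(a sketch only; the chain name, field name, and the 10000-character cap
are placeholder assumptions):

    <updateRequestProcessorChain name="truncate-long-values">
      <processor class="solr.TruncateFieldUpdateProcessorFactory">
        <str name="fieldName">content</str>
        <int name="maxLength">10000</int>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

Alternatively, a length filter in the field's analyzer can drop oversized
tokens instead (Lucene's hard per-term limit is 32766 bytes):

    <filter class="solr.LengthFilterFactory" min="1" max="32766"/>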

Jan Høydahl

> On 11 Feb 2019, at 06:27, Kranthi Kumar K wrote:
> 
> Hi Team,
>  
> We didn’t get any suggested solutions. Could you help us by providing a better 
> approach or a solution to fix the issue?
> We’ll be awaiting your reply.
>  
> 
> 
> Thanks & Regards,
> Kranthi Kumar.K,
> Software Engineer,
> Ccube Fintech Global Services Pvt Ltd.,
> Email/Skype: kranthikuma...@ccubefintech.com,
> Mobile: +91-8978078449.
>  
>  
> From: Kranthi Kumar K  
> Sent: Friday, February 1, 2019 10:26 AM
> To: d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Ravi Vangala; Suresh Malladi; 
> Vijay Nandula; Michelle Ngo
> Subject: Re: Solr Size Limitation upto 32 kb limitation
>  
> Hi Team,
> 
>  
> 
> Thanks for your suggestions that you've posted, but none of them have fixed 
> our issue. Could you please provide us your valuable suggestions to address 
> this issue.
> 
>  
> 
> We'll be awaiting your reply.
> 
>  
> 
> Thanks,
> 
> Kranthi kumar.K
> 
> From: Michelle Ngo
> Sent: Thursday, January 24, 2019 12:00:06 PM
> To: Kranthi Kumar K; d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Ravi Vangala; Suresh Malladi; 
> Vijay Nandula
> Subject: RE: Solr Size Limitation upto 32 kb limitation
>  
> Thanks @Kranthi Kumar K for following up
>  
> From: Kranthi Kumar K  
> Sent: Thursday, 24 January 2019 4:51 PM
> To: d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala; 
> Suresh Malladi; Vijay Nandula
> Subject: RE: Solr Size Limitation upto 32 kb limitation
>  
> Thank you Bernd Fehling for your suggested solution. I've tried the same by 
> changing the type and adding multiValued="true" in the schema.xml file, i.e.,
> changed from:
>  
> <field ... />
>  
> Changed to:
>  
> <field ... multiValued="true" />
>  
> Even after this change we are still unable to import files larger than 32 KB. 
> Please find the solution suggested by Bernd in the below URL:
>  
> http://lucene.472066.n3.nabble.com/Re-Solr-Size-Limitation-upto-32-kb-limitation-td4421569.html
>  
> Bernd Fehling, could you please suggest another alternative solution to 
> resolve our issue, which would help us a lot?
>  
> Please let me know for any questions.
>  
> 
> 
> Thanks & Regards,
> Kranthi Kumar.K,
> Software Engineer,
> Ccube Fintech Global Services Pvt Ltd.,
> Email/Skype: kranthikuma...@ccubefintech.com,
> Mobile: +91-8978078449.
>  
>  
> From: Kranthi Kumar K 
> Sent: Friday, January 18, 2019 4:22 PM
> To: d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala
> Subject: RE: Solr Size Limitation upto 32 kb limitation
>  
> Hi team,
>  
> Thank you Erick Erickson, Bernd Fehling, and Jan Hoydahl for your suggested 
> solutions. I’ve tried the suggested ones and we are still unable to import 
> files having size > 32 KB; it displays the same error.
>  
> Below link has the suggested solutions. Please have a look once.
>  
> http://lucene.472066.n3.nabble.com/Solr-Size-Limitation-upto-32-KB-files-td4419779.html
>  
> As per Erick Erickson, I’ve changed the string type to a text-based type and 
> the issue still occurs.
> I’ve changed from:
>  
> <field ... type="string" ... />
>  
> Changed to:
>  
> <field ... type="..." ... />
>  
> If we do so, it is showing an error in the log; please find the error in the 
> attachment.
>  
> If I change to:
>  
> <field ... />
>  
> It is not showing any error, but the issue still exists.
>  
> As per Jan Hoydahl, I have gone through the link that you provided and 
> checked the ‘requestParsers’ tag in solrconfig.xml.
>  
> The requestParsers tag in our application is as follows:
>  
> ‘<requestParsers ... multipartUploadLimitInKB="2048000"
> formdataUploadLimitInKB="2048"
> addHttpRequestToContext="false"/>’
> The requestParsers settings we are using and those in the link you provided 
> are similar. And still we are unable to import files larger than 32 KB.
>  
> As per Bernd Fehling: we are using Solr 4.10.2, and you have mentioned,
> ‘If you are trying to add larger content then you have to "chop" that 
> by yourself and add it as multivalued. Can be done within a self written 
> loader.’
>  
> I’m a newbie to Solr and I didn’t get what exactly a ‘self-written loader’ is.
>  
> Could you please provide us sample code that helps us to go further?
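
A minimal sketch of the kind of "self-written loader" Bernd describes,
using SolrJ (a recent SolrJ API is assumed; on Solr 4.10 the client class
was HttpSolrServer instead; the URL, field names, and chunk size are
placeholders):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class ChoppingLoader {
        public static void main(String[] args) throws Exception {
            SolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
            // the oversized text extracted from the PDF (placeholder source)
            String content = new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get(args[0])));
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            // chop the text into pieces well below the 32766-byte term limit
            // and add each piece as one value of a multiValued field
            int chunk = 30000;
            for (int start = 0; start < content.length(); start += chunk) {
                doc.addField("content",
                    content.substring(start, Math.min(content.length(), start + chunk)));
            }
            client.add(doc);
            client.commit();
            client.close();
        }
    }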
>  
>  
> 
> 
> Thanks & Regards,
> Kranthi Kumar.K,
> Software Engineer,
> Ccube Fintech Global Services Pvt Ltd.,
> Email/Skype: kranthikuma...@ccubefintech.com,
> Mobile: 

Re: Solr Size Limitation upto 32 kb limitation

2019-02-10 Thread Walter Underwood
Solr is not designed to store chunks of binary data. This is not a bug. It will 
probably not be “fixed”.

I strongly recommend putting your chunks of data in a database. Then store the 
primary key in a field in Solr. When the Solr results are returned, the client 
code can use the keys to fetch the data blobs from the database.
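
As an illustration of that pattern, a hedged SolrJ + JDBC sketch (the
Solr URL, JDBC URL, table, and field names are placeholder assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrDocument;

    public class BlobLookup {
        public static void main(String[] args) throws Exception {
            SolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
            try (Connection db = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/blobs", "user", "pass")) {
                // 1. search Solr, fetching only the primary keys
                SolrQuery q = new SolrQuery("some query").setFields("id");
                for (SolrDocument d : solr.query(q).getResults()) {
                    // 2. fetch the binary payload from the database by key
                    try (PreparedStatement ps = db.prepareStatement(
                            "SELECT payload FROM blobs WHERE id = ?")) {
                        ps.setString(1, (String) d.getFieldValue("id"));
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) {
                                byte[] payload = rs.getBytes("payload");
                                // hand the blob to the client code here
                            }
                        }
                    }
                }
            }
            solr.close();
        }
    }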

STOP sending this to d...@lucene.apache.org. This is a user question and should 
go to solr-user@lucene.apache.org.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 10, 2019, at 9:27 PM, Kranthi Kumar K 
>  wrote:
> 
> Hi Team,
>  
> We didn’t get any suggested solutions. Could you help us by providing a better 
> approach or a solution to fix the issue?
> We’ll be awaiting your reply.
>  
> 
> 
> Thanks & Regards,
> Kranthi Kumar.K,
> Software Engineer,
> Ccube Fintech Global Services Pvt Ltd.,
> Email/Skype: kranthikuma...@ccubefintech.com,
> Mobile: +91-8978078449.
>  
>  
> From: Kranthi Kumar K 
> Sent: Friday, February 1, 2019 10:26 AM
> To: d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Ravi Vangala; Suresh Malladi; 
> Vijay Nandula; Michelle Ngo
> Subject: Re: Solr Size Limitation upto 32 kb limitation
>  
> Hi Team,
> 
>  
> 
> Thanks for your suggestions that you've posted, but none of them have fixed 
> our issue. Could you please provide us your valuable suggestions to address 
> this issue.
> 
>  
> 
> We'll be awaiting your reply.
> 
>  
> 
> Thanks,
> 
> Kranthi kumar.K
> 
> From: Michelle Ngo
> Sent: Thursday, January 24, 2019 12:00:06 PM
> To: Kranthi Kumar K; d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Ravi Vangala; Suresh Malladi; 
> Vijay Nandula
> Subject: RE: Solr Size Limitation upto 32 kb limitation
>  
> Thanks @Kranthi Kumar K for following up
>  
> From: Kranthi Kumar K 
> Sent: Thursday, 24 January 2019 4:51 PM
> To: d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala; 
> Suresh Malladi; Vijay Nandula
> Subject: RE: Solr Size Limitation upto 32 kb limitation
>  
> Thank you Bernd Fehling for your suggested solution. I've tried the same by 
> changing the type and adding multiValued="true" in the schema.xml file, i.e.,
> changed from:
>  
> <field ... />
>  
> Changed to:
>  
> <field ... multiValued="true" />
>  
> Even after this change we are still unable to import files larger than 32 KB. 
> Please find the solution suggested by Bernd in the below URL:
>  
> http://lucene.472066.n3.nabble.com/Re-Solr-Size-Limitation-upto-32-kb-limitation-td4421569.html
>  
> 
>  
> Bernd Fehling, could you please suggest another alternative solution to 
> resolve our issue, which would help us a lot?
>  
> Please let me know for any questions.
>  
> 
> 
> Thanks & Regards,
> Kranthi Kumar.K,
> Software Engineer,
> Ccube Fintech Global Services Pvt Ltd.,
> Email/Skype: kranthikuma...@ccubefintech.com,
> Mobile: +91-8978078449.
>  
>  
> From: Kranthi Kumar K 
> Sent: Friday, January 18, 2019 4:22 PM
> To: d...@lucene.apache.org; solr-user@lucene.apache.org
> Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala
> Subject: RE: Solr Size Limitation upto 32 kb limitation
>  
> Hi team,
>  
> Thank you Erick Erickson, Bernd Fehling, and Jan Hoydahl for your suggested 
> solutions. I’ve tried the suggested ones and we are still unable to import 
> files having size > 32 KB; it displays the same error.
>  
> Below link has the suggested solutions. Please have a look once.
>  
> http://lucene.472066.n3.nabble.com/Solr-Size-Limitation-upto-32-KB-files-td4419779.html
>  
> 

Re: Unable to create collection with custom queryParser Plugin

2019-02-10 Thread Jörn Franke
You can put all solr dependencies as provided. They are already on the class 
path - no need to put them in the fat jar.
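
In Maven terms that is, for example (a sketch; the version is a placeholder):

    <dependency>
      <groupId>org.apache.solr</groupId>
      <artifactId>solr-core</artifactId>
      <version>7.5.0</version>
      <scope>provided</scope>
    </dependency>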

> On 11.02.2019 at 05:59, Aroop Ganguly wrote:
> 
> Thanks Erick!
> 
> I see. Yes, it is a fat jar produced by the shadowJar process (in the order of MBs).
> It contains the solrj and solr-core dependencies plus a few more Scala-related 
> ones.
> I guess the solr-core dependencies are unavoidable (right?), let me try to 
> trim the others.
> 
> Regards
> Aroop
> 
>> On Feb 10, 2019, at 8:44 PM, Erick Erickson  wrote:
>> 
>> Aroop:
>> 
>> How big is your custom jar file? The name "test-plugins-aroop-all.jar"
>> makes me suspicious. It should be very small and should _not_ contain
>> any of the Solr distribution jar files, just your compiled custom
>> code. I'm grasping at straws a bit, but it may be that you have the
>> same jar files from the Solr distro and also included in your custom
>> jar and it's confusing the classloader. "Very small" here is on the
>> order of 10K given it does very little. If it's much bigger than, say,
>> 15K it's a red flag. If you do a "jar -tvf your_custom_jar" there
>> should be _very_ few classes in it.
>> 
>> Best,
>> Erick
>> 
>> On Sun, Feb 10, 2019 at 8:33 PM Aroop Ganguly
>>  wrote:
>>> 
>>> [resending due to bounce warning from the other email]
>>> 
>>> 
>>> Hi Team
>>> 
>>> I thought this was simple, but I am just missing something here. Any 
>>> guidance would be much appreciated.
>>> 
>>> What have I done so far:
>>>   1. I have created a custom queryParser (class SamplePluggin extends 
>>> QParserPlugin { ), which right now does nothing but log an info message, 
>>> and return a new LuceneQParser() instance with the same parameters.
>>>   2. I am on solr 7.5 and I have added the path to the jar and 
>>> referenced the plugin in the following ways in my solrconfig.xml:
>>> 
>>>   <lib path="..."/>
>>>   <queryParser name="..." class="com.aroop.plugins.SamplePluggin"/>
>>> 
>>> Now when I create a collection with this solrconfig, I keep getting this 
>>> exception stack:
>>> I have tried debugging the live solr instance and for the life of me, I 
>>> cannot understand why I am getting this cast exception:
>>> 2019-02-11 03:57:10.410 ERROR (qtp1594873248-62) [c:cvp2 s:shard1 
>>> r:core_node2 x:testCollection_shard1_replica_n1] o.a.s.h.RequestHandlerBase 
>>> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
>>> 'testCollection_shard1_replica_n1': Unable to create core 
>>> [testCollection_shard1_replica_n1] Caused by: class 
>>> com.aroop.plugins.SamplePluggin
>>>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1087)
>>>   at 
>>> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$247(CoreAdminOperation.java:92)
>>>   at 
>>> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
>>>   at 
>>> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
>>>   at 
>>> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
>>>   at 
>>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>>>   at 
>>> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
>>>   at 
>>> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
>>>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
>>>   at 
>>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
>>>   at 
>>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
>>>   at 
>>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>>>   at 
>>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>>>   at 
>>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>>>   at 
>>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>>>   at 
>>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>>   at 
>>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>>>   at 
>>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>>>   at 
>>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>>>   at 
>>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
>>>   at 
>>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>>>   at 
>>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>>>   at 
>>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>>>   at 
>>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)

Re: Unable to create collection with custom queryParser Plugin

2019-02-10 Thread Aroop Ganguly
Thanks Erick!

I see. Yes, it is a fat jar produced by the shadowJar process (in the order of MBs).
It contains the solrj and solr-core dependencies plus a few more Scala-related ones.
I guess the solr-core dependencies are unavoidable (right?), let me try to 
trim the others.

Regards
Aroop

> On Feb 10, 2019, at 8:44 PM, Erick Erickson  wrote:
> 
> Aroop:
> 
> How big is your custom jar file? The name "test-plugins-aroop-all.jar"
> makes me suspicious. It should be very small and should _not_ contain
> any of the Solr distribution jar files, just your compiled custom
> code. I'm grasping at straws a bit, but it may be that you have the
> same jar files from the Solr distro and also included in your custom
> jar and it's confusing the classloader. "Very small" here is on the
> order of 10K given it does very little. If it's much bigger than, say,
> 15K it's a red flag. If you do a "jar -tvf your_custom_jar" there
> should be _very_ few classes in it.
> 
> Best,
> Erick
> 
> On Sun, Feb 10, 2019 at 8:33 PM Aroop Ganguly
>  wrote:
>> 
>> [resending due to bounce warning from the other email]
>> 
>> 
>> Hi Team
>> 
>> I thought this was simple, but I am just missing something here. Any 
>> guidance would be much appreciated.
>> 
>> What have I done so far:
>>    1. I have created a custom queryParser (class SamplePluggin extends 
>> QParserPlugin { ), which right now does nothing but log an info message, 
>> and return a new LuceneQParser() instance with the same parameters.
>>2. I am on solr 7.5 and I have added the path to the jar and 
>> referenced the plugin in the following ways in my solrconfig.xml:
>> 
>>    <lib path="..."/>
>>    <queryParser name="..." class="com.aroop.plugins.SamplePluggin"/>
>> 
>> Now when I create a collection with this solrconfig, I keep getting this 
>> exception stack:
>> I have tried debugging the live solr instance and for the life of me, I 
>> cannot understand why I am getting this cast exception:
>> 2019-02-11 03:57:10.410 ERROR (qtp1594873248-62) [c:cvp2 s:shard1 
>> r:core_node2 x:testCollection_shard1_replica_n1] o.a.s.h.RequestHandlerBase 
>> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
>> 'testCollection_shard1_replica_n1': Unable to create core 
>> [testCollection_shard1_replica_n1] Caused by: class 
>> com.aroop.plugins.SamplePluggin
>>at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1087)
>>at 
>> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$247(CoreAdminOperation.java:92)
>>at 
>> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
>>at 
>> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
>>at 
>> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
>>at 
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>>at 
>> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
>>at 
>> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
>>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
>>at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
>>at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
>>at 
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>>at 
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>>at 
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>>at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>>at 
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>>at 
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>>at 
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>>at 
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>>at 
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>>at 
>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)

Re: COLLECTION CREATE and CLUSTERSTATUS changes in SOLR 7.5.0

2019-02-10 Thread ramyogi
I found the reason:
=true. When I create a collection with this parameter I can
see the replica data in the CLUSTERSTATUS API response. Is there anything
wrong with using this in SOLR 7.5.0 when creating a collection?
Please advise.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Unable to create collection with custom queryParser Plugin

2019-02-10 Thread Erick Erickson
Aroop:

How big is your custom jar file? The name "test-plugins-aroop-all.jar"
makes me suspicious. It should be very small and should _not_ contain
any of the Solr distribution jar files, just your compiled custom
code. I'm grasping at straws a bit, but it may be that you have the
same jar files from the Solr distro and also included in your custom
jar and it's confusing the classloader. "Very small" here is on the
order of 10K given it does very little. If it's much bigger than, say,
15K it's a red flag. If you do a "jar -tvf your_custom_jar" there
should be _very_ few classes in it.

Best,
Erick

On Sun, Feb 10, 2019 at 8:33 PM Aroop Ganguly
 wrote:
>
> [resending due to bounce warning from the other email]
>
>
> Hi Team
>
> I thought this was simple, but I am just missing something here. Any guidance 
> would be much appreciated.
>
> What have I done so far:
> 1. I have created a custom queryParser (class SamplePluggin extends 
> QParserPlugin { ), which right now does nothing but log an info message, and 
> return a new LuceneQParser() instance with the same parameters.
> 2. I am on solr 7.5 and I have added the path to the jar and 
> referenced the plugin in the following ways in my solrconfig.xml:
>
> <lib path="..."/>
> <queryParser name="..." class="com.aroop.plugins.SamplePluggin"/>
>
> Now when I create a collection with this solrconfig, I keep getting this 
> exception stack:
> I have tried debugging the live solr instance and for the life of me, I 
> cannot understand why I am getting this cast exception:
> 2019-02-11 03:57:10.410 ERROR (qtp1594873248-62) [c:cvp2 s:shard1 
> r:core_node2 x:testCollection_shard1_replica_n1] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'testCollection_shard1_replica_n1': Unable to create core 
> [testCollection_shard1_replica_n1] Caused by: class 
> com.aroop.plugins.SamplePluggin
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1087)
> at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$247(CoreAdminOperation.java:92)
> at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at org.eclipse.jetty.server.Server.handle(Server.java:531)

Re: Get recent documents from solr

2019-02-10 Thread shruti suri
Hi,
 
Yes, we are running full indexing every 4 hours. We are also using more
than 4 views to get data, and each view has its own update date.



-
Regards
Shruti
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


COLLECTION CREATE and CLUSTERSTATUS changes in SOLR 7.5.0

2019-02-10 Thread ramyogi


SOLR 7.5.0
Created collection as below:
/admin/collections?action=CREATE&name=test_collection&numShards=1&replicationFactor=1&rule=replica:<2,node:*&collection.configName=test_collection_config

Created Successfully.

After that, when I look at CLUSTERSTATUS, it gives an empty "replicas": {}.
In SOLR 5.3.1 the replicas were included in the response, but in SOLR 7.5.0
they are not.
Please help out with what is missing and how to fix it.

/solr/admin/collections?action=CLUSTERSTATUS&wt=json

{
  "responseHeader": {
"status": 0,
"QTime": 1
  },
  "cluster": {
"collections": {
  "test_collection": {
"pullReplicas": "0",
"replicationFactor": "1",
"shards": {
  "shard1": {
"range": "8000-7fff",
"state": "active",
"replicas": {}
  }
},
"router": {
  "name": "compositeId"
},
"maxShardsPerNode": "1",
"autoAddReplicas": "false",
"nrtReplicas": "1",
"tlogReplicas": "0",
"rule": [
  {
"replica": "<2",
"node": "*"
  }
],
"znodeVersion": 6,
"configName": "test_collection_config"
  }



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Unable to create collection with custom queryParser Plugin

2019-02-10 Thread Aroop Ganguly
[resending due to bounce warning from the other email]


Hi Team

I thought this was simple, but I am just missing something here. Any guidance 
would be much appreciated.

What have I done so far:
1. I have created a custom queryParser (class SamplePluggin extends 
QParserPlugin { ), which right now does nothing but log an info message, and 
return a new LuceneQParser() instance with the same parameters.
2. I am on solr 7.5 and I have added the path to the jar and referenced 
the plugin in the following ways in my solrconfig.xml:

<lib path="..."/>
<queryParser name="..." class="com.aroop.plugins.SamplePluggin"/>

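For reference, a minimal sketch of what such a plugin class might look
like (the Solr 7.5 QParserPlugin API is assumed; the SLF4J logging is an
assumption, not taken from the original code):

    package com.aroop.plugins;

    import org.apache.solr.common.params.SolrParams;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.search.LuceneQParser;
    import org.apache.solr.search.QParser;
    import org.apache.solr.search.QParserPlugin;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class SamplePluggin extends QParserPlugin {
        private static final Logger log = LoggerFactory.getLogger(SamplePluggin.class);

        @Override
        public QParser createParser(String qstr, SolrParams localParams,
                                    SolrParams params, SolrQueryRequest req) {
            // log and delegate to the stock Lucene parser with the same arguments
            log.info("SamplePluggin createParser called for: {}", qstr);
            return new LuceneQParser(qstr, localParams, params, req);
        }
    }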
Now when I create a collection with this solrconfig, I keep getting this 
exception stack:
I have tried debugging the live solr instance and for the life of me, I cannot 
understand why I am getting this cast exception:
2019-02-11 03:57:10.410 ERROR (qtp1594873248-62) [c:cvp2 s:shard1 r:core_node2 
x:testCollection_shard1_replica_n1] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Error CREATEing SolrCore 
'testCollection_shard1_replica_n1': Unable to create core 
[testCollection_shard1_replica_n1] Caused by: class 
com.aroop.plugins.SamplePluggin
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1087)
at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$247(CoreAdminOperation.java:92)
at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)

Unable to create collection with custom queryParser Plugin

2019-02-10 Thread Aroop Ganguly
Hi Team

I thought this was simple, but I am just missing something here. Any guidance 
would be much appreciated.

What have I done so far:
1. I have created a custom queryParser (class SamplePluggin extends 
QParserPlugin { ), which right now does nothing but log an info message, and 
return a new LuceneQParser() instance with the same parameters.
2. I am on solr 7.5 and I have added the path to the jar and referenced 
the plugin in the following ways in my solrconfig.xml:

<lib path="..."/>
<queryParser name="..." class="com.aroop.plugins.SamplePluggin"/>

Now when I create a collection with this solrconfig, I keep getting this 
exception stack:
I have tried debugging the live solr instance and for the life of me, I cannot 
understand why I am getting this cast exception:
2019-02-11 03:57:10.410 ERROR (qtp1594873248-62) [c:cvp2 s:shard1 r:core_node2 
x:testCollection_shard1_replica_n1] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Error CREATEing SolrCore 
'testCollection_shard1_replica_n1': Unable to create core 
[testCollection_shard1_replica_n1] Caused by: class 
com.aroop.plugins.SamplePluggin
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1087)
at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$247(CoreAdminOperation.java:92)
at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)

Re: CloudSolrClient getDocCollection

2019-02-10 Thread Erick Erickson
bq. But I would assume it  should still be ok. The number of watchers
should still not be gigantic.

This assumption would need to be rigorously tested before I'd be
comfortable. I've spent quite a
bit of time with unhappy clients chasing down issues in the field where
1> it takes hours to cold-start the cluster
2> Solr just locks up
3> etc.

That said, there have been a series of other improvements that may
have invalidated these concerns.
Specifically:

1> Overseer operations were drastically sped up (up to 400x). It's
possible that some of the caching
was compensating for this.

2> The number of znode changes written  was reduced. This would reduce
the number of
watches triggered and the load on ZK.

3> The number of state changes for nodes coming up was reduced, again
reducing the number of
watches triggered.

4> etc.

But that's hand-waving, I'm mostly saying that the caching code was
put in place to solve some
existing problem, and I'd hate to have those problems re-introduced.
Whether the caching code
was the correct solution or whether there are better solutions given
additional changes is certainly
open for debate.

Best,
Erick

On Sun, Feb 10, 2019 at 3:32 AM Hendrik Haddorp  wrote:
>
> I opened now https://issues.apache.org/jira/browse/SOLR-13239 for the
> problem I observed.
>
> Well, who can really be sure about those things. But I would assume it
> should still be ok. The number of watchers should still not be gigantic.
> I have setups with about 2000 collections each but far less JVMs. ZK
> distributes the watches over the all nodes, which should also include
> observer nodes.
>
> That said, an alternative could be to refresh the cache asynchronously to
> the call detecting it to be outdated. Wouldn't the worst case be that a
> request gets sent to a Solr node that has to forward the request to the
> correct node? The chance of the cache entry being wrong after just one
> minute is however quite low. So in most cases the request would still be
> sent to the correct node without having to wait for the cache update and
> without potentially blocking other requests. In a performance test we
> saw quite a few threads being blocked at this point.
>
> regards,
> Hendrik
>
> On 09.02.2019 20:40, Erick Erickson wrote:
> > Jason's comments are exactly why there _is_ a state.json per
> > collection rather than the single clusterstate.json in the original
> > implementation.
> >
> > Hendrik:
> > yes, please do open a JIRA for the condition you observed,
> > especially if you can point to the suspect code. There have
> > been intermittent issues with collection creation in the test
> > shells.
> >
> > About the watchers.
> >
> > bq. Yes, you would need one watch per state.json and
> > thus one watch per collection. That should however not really be a
> > problem with ZK.
> >
> > Consider an installation I have witnessed with 450K replicas scattered
> > over 100s of collections and 100s of JVMs. Each JVM may have one
> > or more CloudSolrClients. Are you _sure_ ZK can handle that kind
> > of watch load? The current architecture allows there to be many fewer
> > watches set, partially to deal with this scale. And even at this scale,
> > an incoming request to a node that does _not_ host _any_ replica of
> > the target collection needs to be able to forward the request, but doesn't
> > need to know much else about the target collections.
> >
> > Best,
> > Erick
> >
> >
> > On Fri, Feb 8, 2019 at 5:23 PM Hendrik Haddorp  
> > wrote:
> >> Hi Jason,
> >>
> >> thanks for your answer. Yes, you would need one watch per state.json and
> >> thus one watch per collection. That should however not really be a
> >> problem with ZK. I would assume that the Solr server instances need to
> >> monitor those nodes to be up to date on the cluster state. Using
> >> org.apache.solr.common.cloud.ZkStateReader.registerCollectionStateWatcher
> >> you can even add a watch for that using the SolrJ API. At least for the
> >> currently watched collections the client should thus actually already
> >> have the correct information available. The access to that would likely
> >> be a bit ugly though.
> >>
> >> The CloudSolrClient also allows to set a watch on /collections using
> >> org.apache.solr.common.cloud.ZkStateReader.registerCloudCollectionsListener.
> >> This is actually another thing I just ran into. As the code has a watch
> >> on /collections the listener gets informed about new collections as soon
> >> as the "directory" for the collection is being created. If the listener
> >> does then straight away try to access the collection info via
> >> zkStateReader.getClusterState(), the DocCollection can be returned as
> >> null, as the DocCollection is built on the information stored in the
> >> state.json file, which might not exist yet. I'm trying to monitor the
> >> Solr cluster state and thus ran into this. Not sure if I should open a
> >> Jira for that.
> >>
> >> regards,
> >> Hendrik
> >>
> >> On 08.02.2019 23:20, Jason 

Failed to create collection

2019-02-10 Thread Issei Nishigata

Hello, all.


I have 1 collection running, and when I tried to create a new collection with 
the following command,
-
$ solr-6.2.0/bin/solr create -c collection2 -d data_driven_schema_configs
-

I got the following error.
-
Connecting to ZooKeeper at sample1:2181,sample2:2182,sample3:2183 ...
Uploading /tmp/solr-6.2.0/server/solr/configsets/data_driven_schema_configs/conf for config collection2 to ZooKeeper at 
sample1:2181,sample2:2182,sample3:2183


Creating new collection 'collection2' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=collection2&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=collection2


ERROR: Failed to create collection 'collection2' due to: Could not fully create 
collection: portal2
-

I can see collection2 in the collections list of the Solr Admin UI, but I cannot see collection2 in the graph view of the Solr Admin UI or in the 
collection selector.


Does anyone know the cause of this error?
Could you please help me with how to resolve it?


Regards,
Issei


Re: Solr moved all replicas from node

2019-02-10 Thread Hendrik Haddorp

I opened https://issues.apache.org/jira/browse/SOLR-13240 for the exception.

On 10.02.2019 01:35, Hendrik Haddorp wrote:

Hi,

I have two Solr clouds using Version 7.6.0 with 4 nodes each and about 
500 collections with one shard and a replication factor of 2 per Solr 
cloud. The data is stored in the HDFS. I restarted the nodes one by 
one and always waited for the replicas to fully recover before I 
restarted the next. Once the last node was restarted I noticed that 
Solr was starting to move replicas to other nodes. Actually it started 
to move all replicas from one node, which is now left empty. Is there 
any way to figure out why Solr decided to move all replicas to other 
nodes?
The only problem that I see is that during the recovery the Solr 
instance logged a problem with HDFS, claiming that the filesystem 
is closed. The recovery seems to have continued just fine after that, 
and the logs are clean for the time afterwards.
I restarted the node now and invoked the UTILIZENODE action that moved 
a few replicas back to the node but then failed with this exception:


{
  "responseHeader":{
    "status":500,
    "QTime":40220},
  "Operation utilizenode caused 
exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException: 
Comparison method violates its general contract!",

  "exception":{
    "msg":"Comparison method violates its general contract!",
    "rspCode":-1},
  "error":{
    "metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"Comparison method violates its general contract!",
    "trace":"org.apache.solr.common.SolrException: Comparison method 
violates its general contract!\n\tat 
org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat 
org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat 

Re: CloudSolrClient getDocCollection

2019-02-10 Thread Hendrik Haddorp
I opened now https://issues.apache.org/jira/browse/SOLR-13239 for the 
problem I observed.


Well, who can really be sure about those things. But I would assume it 
should still be ok. The number of watchers should still not be gigantic. 
I have setups with about 2000 collections each but far less JVMs. ZK 
distributes the watches over the all nodes, which should also include 
observer nodes.


That said, an alternative could be to refresh the cache asynchronously to 
the call detecting it to be outdated. Wouldn't the worst case be that a 
request gets sent to a Solr node that has to forward the request to the 
correct node? The chance of the cache entry being wrong after just one 
minute is however quite low. So in most cases the request would still be 
sent to the correct node without having to wait for the cache update and 
without potentially blocking other requests. In a performance test we 
saw quite a few threads being blocked at this point.


regards,
Hendrik

On 09.02.2019 20:40, Erick Erickson wrote:

Jason's comments are exactly why there _is_ a state.json per
collection rather than the single clusterstate.json in the original
implementation.

Hendrik:
yes, please do open a JIRA for the condition you observed,
especially if you can point to the suspect code. There have
been intermittent issues with collection creation in the test
shells.

About the watchers.

bq. Yes, you would need one watch per state.json and
thus one watch per collection. That should however not really be a
problem with ZK.

Consider an installation I have witnessed with 450K replicas scattered
over 100s of collections and 100s of JVMs. Each JVM may have one
or more CloudSolrClients. Are you _sure_ ZK can handle that kind
of watch load? The current architecture allows there to be many fewer
watches set, partially to deal with this scale. And even at this scale,
an incoming request to a node that does _not_ host _any_ replica of
the target collection needs to be able to forward the request, but doesn't
need to know much else about the target collections.

Best,
Erick


On Fri, Feb 8, 2019 at 5:23 PM Hendrik Haddorp  wrote:

Hi Jason,

thanks for your answer. Yes, you would need one watch per state.json and
thus one watch per collection. That should however not really be a
problem with ZK. I would assume that the Solr server instances need to
monitor those nodes to be up to date on the cluster state. Using
org.apache.solr.common.cloud.ZkStateReader.registerCollectionStateWatcher
you can even add a watch for that using the SolrJ API. At least for the
currently watched collections the client should thus actually already
have the correct information available. The access to that would likely
be a bit ugly though.

The CloudSolrClient also allows to set a watch on /collections using
org.apache.solr.common.cloud.ZkStateReader.registerCloudCollectionsListener.
This is actually another thing I just ran into. As the code has a watch
on /collections the listener gets informed about new collections as soon
as the "directory" for the collection is being created. If the listener
does then straight away try to access the collection info via
zkStateReader.getClusterState(), the DocCollection can be returned as
null, as the DocCollection is built on the information stored in the
state.json file, which might not exist yet. I'm trying to monitor the
Solr cluster state and thus ran into this. Not sure if I should open a
Jira for that.
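
To make the race concrete, a hedged SolrJ sketch of the listener pattern
described above (the printed output is illustrative; the null check
reflects the state.json race just mentioned):

    import org.apache.solr.common.cloud.DocCollection;
    import org.apache.solr.common.cloud.ZkStateReader;

    public class StateMonitor {
        public static void monitor(ZkStateReader reader) {
            // fires when the children of /collections change
            reader.registerCloudCollectionsListener((oldCollections, newCollections) -> {
                for (String coll : newCollections) {
                    // the collection znode can exist before its state.json does,
                    // so the DocCollection may still be null at this point
                    DocCollection state = reader.getClusterState().getCollectionOrNull(coll);
                    if (state == null) {
                        continue; // state.json not written yet
                    }
                    System.out.println(coll + " has " + state.getSlices().size() + " shard(s)");
                }
            });
        }
    }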

regards,
Hendrik

On 08.02.2019 23:20, Jason Gerlowski wrote:

Hi Henrik,

I'll try to answer, and let others correct me if I stray.  I wasn't
around when CloudSolrClient was written, so take this with a grain of
salt:

"Why does the client need that timeout?Wouldn't it make sense to
use a watch?"

You could probably write a CloudSolrClient that uses watch(es) to keep
track of changing collection state.  But I suspect you'd need a
watch-per-collection, instead of just a single watch.

Modern versions of Solr store the state for each collection in
individual "state.json" ZK nodes
("/solr/collections//state.json").  To catch changes
to all of these collections, you'd need to watch each of those nodes.
Which wouldn't scale well for users who want lots of collections.  I
suspect this was one of the concerns that nudged the author(s) to use
a cache-based approach.

(Even when all collection state was stored in a single ZK node, a
watch-based CloudSolrClient would likely have scaling issues for the
many-collection use case.  The client would need to recalculate its
state information for _all_ collections any time that _any_ of the
collections changed, since it has no way to tell which collection was
changed.)

Best,

Jason

On Thu, Feb 7, 2019 at 11:44 AM Hendrik Haddorp  wrote:

Hi,

when I perform a query using the CloudSolrClient the code first
retrieves the DocCollection to determine to which instance the query
should be sent [1]. getDocCollection [2] does a lookup in a cache, which
has a 60s expiration time [3]. 
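
For illustration, a hedged sketch of the call path being described (the
ZK host, chroot, and collection name are placeholders):

    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;

    public class CloudQuery {
        public static void main(String[] args) throws Exception {
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("zkhost:2181"), Optional.empty()).build()) {
                client.setDefaultCollection("collection1");
                // the query below first resolves the collection's DocCollection
                // from the client-side cache (60s TTL) to route the request
                SolrQuery q = new SolrQuery("*:*");
                System.out.println(client.query(q).getResults().getNumFound());
            }
        }
    }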

Re: Solr moved all replicas from node

2019-02-10 Thread Hendrik Haddorp

Solr version is 7.6.0
autoAddReplicas is set to true
/api/cluster/autoscaling returns this:

{
  "responseHeader":{
"status":0,
"QTime":1},
  "cluster-preferences":[{
  "minimize":"cores",
  "precision":1}],
  "cluster-policy":[{
  "replica":"<2",
  "shard":"#EACH",
  "node":"#ANY"}],
  "triggers":{
".auto_add_replicas":{
  "name":".auto_add_replicas",
  "event":"nodeLost",
  "waitFor":1800,
  "enabled":true,
  "actions":[{
  "name":"auto_add_replicas_plan",
  "class":"solr.AutoAddReplicasPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]},
".scheduled_maintenance":{
  "name":".scheduled_maintenance",
  "event":"scheduled",
  "startTime":"NOW",
  "every":"+1DAY",
  "enabled":true,
  "actions":[{
  "name":"inactive_shard_plan",
  "class":"solr.InactiveShardPlanAction"},
{
  "name":"execute_plan",
  "class":"solr.ExecutePlanAction"}]}},
  "listeners":{
".auto_add_replicas.system":{
  "beforeAction":[],
  "afterAction":[],
  "stage":["STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "trigger":".auto_add_replicas",
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener"},
".scheduled_maintenance.system":{
  "beforeAction":[],
  "afterAction":[],
  "stage":["STARTED",
"ABORTED",
"SUCCEEDED",
"FAILED",
"BEFORE_ACTION",
"AFTER_ACTION",
"IGNORED"],
  "trigger":".scheduled_maintenance",
  "class":"org.apache.solr.cloud.autoscaling.SystemLogListener"}},
  "properties":{},
  "WARNING":"This response format is experimental.  It is likely to change in the 
future."}

I have two solr clouds that are set up in the same way. When restarting 
the nodes only one of them showed this behavior.
Ideally I want replicas to be moved when a node is down for a longer 
time but not when I just restart it. I would also like all nodes to end 
up with the same number of cores.
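
For reference, one hedged way to handle planned restarts is to suspend the
nodeLost trigger for the duration of the maintenance and resume it
afterwards (standard autoscaling API commands; host and port are
placeholders):

    # before the planned restarts: keep the trigger from acting
    curl -X POST 'http://localhost:8983/solr/admin/autoscaling' \
      -H 'Content-type: application/json' \
      -d '{"suspend-trigger": {"name": ".auto_add_replicas"}}'

    # once all nodes are back and fully recovered
    curl -X POST 'http://localhost:8983/solr/admin/autoscaling' \
      -H 'Content-type: application/json' \
      -d '{"resume-trigger": {"name": ".auto_add_replicas"}}'

Per the configuration above, waitFor is 1800 seconds, so a node would have
to stay lost for half an hour before replicas are moved.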


On 10.02.2019 05:30, Erick Erickson wrote:

What version of Solr? Do you have any of the autoscaling stuff turned
on? What about autoAddReplicas (which does not need Solr 7x)?

On Sat, Feb 9, 2019 at 4:35 PM Hendrik Haddorp  wrote:

Hi,

I have two Solr clouds using Version 7.6.0 with 4 nodes each and about
500 collections with one shard and a replication factor of 2 per Solr
cloud. The data is stored in the HDFS. I restarted the nodes one by one
and always waited for the replicas to fully recover before I restarted
the next. Once the last node was restarted I noticed that Solr was
starting to move replicas to other nodes. Actually it started to move
all replicas from one node, which is now left empty. Is there any way to
figure out why Solr decided to move all replicas to other nodes?
The only problem that I see is that during the recovery the Solr
instance logged a problem with HDFS, claiming that the filesystem is
closed. The recovery seems to have continued just fine after that,
and the logs are clean for the time afterwards.
I restarted the node now and invoked the UTILIZENODE action that moved a
few replicas back to the node but then failed with this exception:

{
"responseHeader":{
  "status":500,
  "QTime":40220},
"Operation utilizenode caused
exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
Comparison method violates its general contract!",
"exception":{
  "msg":"Comparison method violates its general contract!",
  "rspCode":-1},
"error":{
  "metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
  "msg":"Comparison method violates its general contract!",
  "trace":"org.apache.solr.common.SolrException: Comparison method
violates its general contract!\n\tat
org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat