RE: [solr-solrcloud] How does DIH work when there are multiple nodes?

2019-01-03 Thread Doss
Hi,

The data import process will not happen automatically; we have to trigger it
manually through the admin interface or by calling the URL

https://lucene.apache.org/solr/guide/7_5/uploading-structured-data-store-data-with-the-data-import-handler.html

Full Import:

http://node1ip:8983/solr/yourindexname/dataimport?command=full-import&commit=true

Delta Import:

http://node1ip:8983/solr/yourindexname/dataimport?command=delta-import&commit=true


If you want to run the delta import automatically, you can set up a cron job
(Linux) that calls the URL periodically.
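
For example, a crontab entry along these lines (a sketch only; the node, core
name, and the 15-minute interval are placeholders) would trigger the delta
import periodically:

*/15 * * * * curl -s "http://node1ip:8983/solr/yourindexname/dataimport?command=delta-import&commit=true" > /dev/null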

Best,
Doss.




--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


RE: [solr-solrcloud] How does DIH work when there are multiple nodes?

2019-01-03 Thread 유정인
Hi

Are you saying I should call one node directly?

Or are you saying that one of the three nodes runs it automatically?

I would like to know how one of the three nodes is chosen to run it
automatically.

-Original Message-
From: Doss  
Sent: Friday, January 04, 2019 3:38 PM
To: solr-user@lucene.apache.org
Subject: Re: [solr-solrcloud] How does DIH work when there are multiple
nodes?

Hi,

I am assuming you have the same index replicated across all 3 nodes. In that
case, running a full import / delta import using DIH on one node will
replicate the data to the other nodes, so there is no need to run it on all 3
nodes. Hope this helps!

Best,
Doss.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Re: [solr-solrcloud] How does DIH work when there are multiple nodes?

2019-01-03 Thread Doss
Hi,

I am assuming you have the same index replicated across all 3 nodes. In that
case, running a full import / delta import using DIH on one node will
replicate the data to the other nodes, so there is no need to run it on all 3
nodes. Hope this helps!

Best,
Doss.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Regarding Shards - Composite / Implicit , Replica Type - NRT / TLOG

2019-01-03 Thread Doss
Hi,

We are planning to set up a Solr Cloud cluster with 6 nodes for 3 million
records (expected to grow to 5 million in a year), with 150 fields; the
overall index would come to around 120GB.

We plan to use NRT with 5 sec soft commit and 1 min hard commit.

Expected query volume would be 5000 select hits per second and 7000 inserts
/ updates per second.

Our records can be classified under 15 categories, but they will not have an
even number of records; a few categories will have many more records than the
others.

Queries will also follow the same pattern; that is, categories with a high
number of records will get a high volume of selects / updates.

Given this situation, we are unsure which type of sharding would give us
better performance for both selects and updates:

Composite vs. implicit - compositeId with 15 shards, or implicit routing based
on the 15 categories.

Our select queries will have a minimum of 15 filters in fq, with extensive
function queries used in sort.

Updates will touch 6 integer fields, 5 string fields, and 4 multi-valued
string/integer fields.

If we choose implicit routing to boost select performance, our updates will be
heavy on a few shards (the major category shards); will this be a problem?

For our kind of situation, which replica type should we choose? All NRT, or
NRT with TLOG?
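
For context, here is how we understand the two options would be created (a
sketch only; collection, shard, and field names are placeholders):

Composite: /admin/collections?action=CREATE&name=mycoll&numShards=15&router.name=compositeId
Implicit:  /admin/collections?action=CREATE&name=mycoll&router.name=implicit&shards=cat1,cat2,...,cat15&router.field=category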

Thanks in advance!

Best,
Doss.


Solr with Tableau

2019-01-03 Thread Saurabh Chandra
Hi,

We want to connect to Solr from Tableau. I don't see a
default Solr connector available for Tableau. We thought we could use the Solr
JDBC driver; I think it will work for a single collection, but it will not
support joins (which we get from Solr streaming expressions). Please let me
know if there are any other alternatives?
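
For context, here is the single-collection JDBC route we were considering (a
sketch only; the ZooKeeper host and collection/field names are placeholders,
and it assumes solr-solrj and its dependencies are on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcSketch {
  public static void main(String[] args) throws Exception {
    // ZooKeeper connection string plus target collection; the Solr JDBC
    // driver shipped in solr-solrj registers itself with DriverManager.
    String url = "jdbc:solr://zkhost:2181?collection=mycollection&aggregationMode=facet";
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT id, name_s FROM mycollection WHERE price_i > 10 LIMIT 10")) {
      while (rs.next()) {
        System.out.println(rs.getString("id") + " " + rs.getString("name_s"));
      }
    }
  }
}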

Thanks,
Saurabh




[solr-solrcloud] How does DIH work when there are multiple nodes?

2019-01-03 Thread 유정인
Hi

SolrCloud is configured on 3 nodes.

DIH is used for collecting / indexing, and each node has the same DIH
configuration. The DIH is executed at a fixed interval.

So here are my questions.

Does the import run on all 3 nodes simultaneously?

Or does it run only on the leader?

And how do you know which node is the leader?

I am wondering how DIH works in a SolrCloud configuration.



Re: Identifying product name and other details from search string

2019-01-03 Thread Jan Høydahl
Check out http://solr.cool  for some candidate query parsers

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 30 Dec 2018, at 17:33, UsesRN wrote:
> 
> Is there any way to identify product name and other details from search
> string in Solr or Java?
> 
> For example: 
> 1. Input String: "
> 
> wound type cartridge filter size 20 * 4 Inch for RO plant" 
> 
> Output:
> 
> Product: cartridge filter for RO plant
> 
> Size: 20 * 4 inch
> 
> 
> 
> 2. Input String: "
> 
> WD 40 rust removing spray Container of 100 ml"
> 
> Product: Rust removing spray
> 
> Size: 100ml
> 
> Model: WD 40
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Re: Solr Size Limitation upto 32 KB files

2019-01-03 Thread Jan Høydahl
You are not saying exactly how you index those documents. But check out the 
requestParsers tag in solrconfig.xml, see 
https://lucene.apache.org/solr/guide/6_6/requestdispatcher-in-solrconfig.html#RequestDispatcherinSolrConfig-requestParsersElement
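
For reference, a minimal sketch of that element in solrconfig.xml (the limit
values here are illustrative, not recommendations):

<requestDispatcher>
  <requestParsers enableRemoteStreaming="false"
                  multipartUploadLimitInKB="2048000"
                  formdataUploadLimitInKB="2048"
                  addHttpRequestToContext="false"/>
</requestDispatcher>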
 

 

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 2 Jan 2019, at 10:23, Kranthi Kumar K wrote:
> 
> Hi,
>  
> We are currently using Solr 4.2.1 version in our project and everything is 
> going well. But recently, we are facing an issue with Solr Data Import. It is 
> not importing files larger than 32766 bytes (i.e., 32 KB) and is 
> showing 2 exceptions:
>  
> java.lang.IllegalArgumentException
> org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException
>  
> Please find the attached screenshot for reference.
>  
> We have searched for solutions in many forums and didn't find an exact 
> solution for this issue. Interestingly, we found an article suggesting that 
> changing the type of the field from 'string' to 'text_general' might solve 
> the issue. Please have a look at the forum thread below:
>  
> https://stackoverflow.com/questions/29445323/adding-a-document-to-the-index-in-solr-document-contains-at-least-one-immense-t
>  
> 
>   
>  
> Schema.xml:
> Changed from:
> ‘<field name="..." type="string" ... multiValued="true" />’
>  
> Changed to:
> ‘<field name="..." type="text_general" ... multiValued="true" />’
>  
> We have tried this, but it is still not importing files larger than 32 KB 
> (32766 bytes).
>  
> Could you please let us know the solution to fix this issue? We’ll be 
> awaiting your reply.
>  
>  
> 
> 
> Thanks & Regards,
> Kranthi Kumar.K,
> Software Engineer,
> Ccube Fintech Global Services Pvt Ltd.,
> Email/Skype: kranthikuma...@ccubefintech.com 
> ,
> Mobile: +91-8978078449.



Re: Solr 7.2.1 Stream API throws null pointer execption when used with collapse filter query

2019-01-03 Thread David Smiley
File a JIRA issue please

On Thu, Jan 3, 2019 at 5:20 PM gopikannan  wrote:

> Hi,
> I am getting a null pointer exception when a streaming search is done with a
> collapse filter query. When debugging, the last element in the FixedBitSet
> array is null. Please let me know if I can raise an issue.
>
>
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/export/ExportWriter.java#L232
>
>
> http://localhost:8983/stream/?expr=search(coll_a ,sort="field_a
>
> asc",fl="field_a,field_b,field_c,field_d",qt="/export",q="*:*",fq="(filed_b:x)",fq="{!collapse
> field=field_c sort='field_d desc'}")
>
> org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
> at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
> at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
> at
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
> at
>
> org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
> at
>
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
> at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
> at
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
> at
> org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
> at
>
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
> at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
> at
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
> at
> org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
> at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
> at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
> at
>
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>
-- 
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: So Many Zookeeper Warnings--There Must Be a Problem

2019-01-03 Thread Scott Stults
Good! Hopefully that's your smoking gun.

The port settings are fine, but since you're deploying to separate servers
you don't need different ports in the "server.x=" section. This section of
the docs explains it better:

http://zookeeper.apache.org/doc/r3.4.7/zookeeperAdmin.html#sc_zkMulitServerSetup
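
For example, with each ZooKeeper instance on its own server, the conventional
layout reuses the same (default) ports everywhere; hostnames are placeholders:

server.1=host1:2888:3888
server.2=host2:2888:3888
server.3=host3:2888:3888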


On Thu, Jan 3, 2019 at 3:49 PM Joe Lerner  wrote:

> Hi Scott,
>
> First, we are definitely mis-configured for the myid thing. Basically two of
> them were identifying as ID #2, and they are the two ZK's claiming to be
> the
> leader. Definitely something to straighten out!
>
> Our 3 lines in zoo.cfg look correct. Except they look like this:
>
> clientPort:2181
>
> server.1=host1:2190:2195
> server.2=host2:2191:2196
> server.3=host3:2192:2197
>
> Notice the port range, and overlap...
>
> Is that.../copacetic/?
>
> Thanks!
>
> Joe
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


-- 
Scott Stults | Founder & Solutions Architect | OpenSource Connections, LLC
| 434.409.2780
http://www.opensourceconnections.com


Solr 7.2.1 Stream API throws null pointer execption when used with collapse filter query

2019-01-03 Thread gopikannan
Hi,
   I am getting a null pointer exception when a streaming search is done with
a collapse filter query. When debugging, the last element in the FixedBitSet
array is null. Please let me know if I can raise an issue.

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/export/ExportWriter.java#L232


http://localhost:8983/stream/?expr=search(coll_a ,sort="field_a
asc",fl="field_a,field_b,field_c,field_d",qt="/export",q="*:*",fq="(filed_b:x)",fq="{!collapse
field=field_c sort='field_d desc'}")

org.apache.solr.servlet.HttpSolrCall null:java.lang.NullPointerException
at org.apache.lucene.util.BitSetIterator.<init>(BitSetIterator.java:61)
at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:243)
at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:222)
at
org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
at
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:180)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:222)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:198)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at
org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:220)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:218)
at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2627)
at
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)


Re: HttpParser URI is too large

2019-01-03 Thread Jan Høydahl
Upgrade to v7.6
https://issues.apache.org/jira/browse/SOLR-12814 


--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 21 Dec 2018, at 21:00, Tannen, Lev (USAEO) [Contractor] wrote:
> 
> Hello Solr community,
> 
> My solrcloud system consists of 3 machines, each running a zookeeper and a 
> solr server. It manages about 200 collections with 1 shard each. It is up and 
> running, but about every minute it produces messages in the message log on 
> each computer (messages are at the end). These messages do not relate to any 
> requests. It looks like they are produced by some internal mechanism. Using a 
> suggestion found on the Internet I have already increased the 
> requestHeaderSize and the responseHeaderSize to 81920, but this did not help.
> Does anyone know what these messages mean and how to get rid of them?
> Thank you.
> Lev Tannen
> 
> Message on Computer1:
> 2018-12-21 19:44:46.552 WARN  (qtp817348612-21) [   ] o.e.j.h.HttpParser URI 
> is too large >81920
> 2018-12-21 19:44:46.560 INFO  (qtp817348612-16) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=1
> 
> Message on computer 2
> 2018-12-21 19:44:46.530 WARN  (qtp817348612-14) [   ] o.e.j.h.HttpParser URI 
> is too large >81920
> 2018-12-21 19:44:46.537 INFO  (qtp817348612-14) [   ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=2
> 
> Message on computer3 (a leader)
> 2018-12-21 19:44:46.513 WARN  (qtp817348612-14) [   ] o.e.j.h.HttpParser URI 
> is too large >81920
> 2018-12-21 19:44:46.514 WARN  (MetricsHistoryHandler-12-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
> usahubslvcvw121.usa.doj.gov:8983_solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://usahubslvcvw121.usa.doj.gov:8983/solr: Expected mime 
> type application/octet-stream but got text/html. Bad Message 414 
> reason: URI Too Long
>at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
> ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getReplicaInfo(SolrClientNodeStateProvider.java:131)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:478)
>  ~[solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
>  ~[solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
>  ~[solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_181]
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [?:1.8.0_181]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_181]
>at 
> 

Re: AutoScaling Solr on AWS

2019-01-03 Thread Aaron Cline
I thought I'd try to add some more information here.

1.  I have set up TLS for Solr and it seems to be working fine
2.  I have set up Basic Auth for Solr, which also seems to be working fine
3.  I have set up ACLs for the Solr configs in ZooKeeper, which also seem
to be working as expected.

We have 10 or so collections that each have 5 shards and a
replicationFactor of 2.  When a new node comes up, I would just like Solr
to balance all of the shards, and I would expect some number of shards to be
migrated to the new node.  I started with 2 nodes and built my 10
collections.  We then added data to the collections.  Success!

It's when the 3rd node spins up that I experience the unexpected results.
As you can see from this diagnostic, it is not taking any shards:

/api/cluster/autoscaling/diagnostics

{
  "responseHeader": {
"status": 0,
"QTime": 64
  },
  "diagnostics": {
"sortedNodes": [
  {
"node": "ip-10-228-2-33.local:8983_solr",
"cores": 50,
"freedisk": 14.302078247070312,
"sysLoadAvg": 56.99
  },
  {
"node": "ip-10-228-12-123.local:8983_solr",
"cores": 50,
"freedisk": 14.298782348632812,
"sysLoadAvg": 2
  },
  {
"node": "ip-10-228-7-27.local:8983_solr",
"cores": 0,
"freedisk": 14.729938507080078,
"sysLoadAvg": 0
  }
],
"violations": []
  },
  "WARNING": "This response format is experimental.  It is likely to change
in the future."
}

It looks like other people on this mailing list have had similar issues, but
no one seems to get the Solr error that I do, which I posted in the first
email.
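
For reference, the nodeAdded trigger was set along these lines (a sketch from
memory; the trigger name and waitFor value are approximate):

curl -X POST http://localhost:8983/solr/admin/autoscaling \
  -H 'Content-type:application/json' -d '{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "enabled": true,
    "actions": [
      {"name": "compute_plan", "class": "solr.ComputePlanAction"},
      {"name": "execute_plan", "class": "solr.ExecutePlanAction"}
    ]
  }
}'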

Thanks.

Aaron


On Thu, Jan 3, 2019 at 11:46 AM Aaron Cline  wrote:

> Solr Version 7.3.1
> Java Version 1.8.0_151
>
> I'm trying to get solrcloud to autoscale when a new node is added to the
> cluster and balance the existing replicas across the new node accordingly.
> I'm running into some kind of odd error during the compute_plan action.
> I'm hoping someone here will point me in the right direction.  Please let
> me know if I need to provide more information.
>
> Here is the log of the error from the solr leader:
>
> 2019-01-03 17:23:10.268 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node19
> 2019-01-03 17:23:10.276 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node9
> 2019-01-03 17:23:10.283 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node13
> 2019-01-03 17:23:10.292 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node3
> 2019-01-03 17:23:10.301 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node7
> 2019-01-03 17:23:10.309 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node17
> 2019-01-03 17:23:10.318 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node9
> 2019-01-03 17:23:10.329 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node11
> 2019-01-03 17:23:10.337 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node3
> 2019-01-03 17:23:10.345 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node13
> 2019-01-03 17:23:10.353 INFO
> (AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
> [   ] o.a.s.c.a.ComputePlanAction Computed Plan:
> 

Re: So Many Zookeeper Warnings--There Must Be a Problem

2019-01-03 Thread Joe Lerner
Hi Scott,

First, we are definitely mis-configured for the myid thing. Basically two of
them were identifying as ID #2, and they are the two ZK's claiming to be the
leader. Definitely something to straighten out!

Our 3 lines in zoo.cfg look correct. Except they look like this:

clientPort:2181

server.1=host1:2190:2195 
server.2=host2:2191:2196 
server.3=host3:2192:2197

Notice the port range, and overlap...

Is that.../copacetic/?

Thanks!

Joe 



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: So Many Zookeeper Warnings--There Must Be a Problem

2019-01-03 Thread Scott Stults
Hi Joe,

Yeah, two leaders is definitely a problem. I'd fix that before wading
through the error logs.

Check out zoo.cfg on each server. You should have three lines at the end
similar to this:

server.1=host1:2181:2281
server.2=host2:2182:2282
server.3=host3:2183:2283

(substitute "host*" with the right IP or address of your servers)

Also on each server, check the file "myid". It should have a single number
that maps to the list above. For example, on host1 your myid file should
contain a single value of "1" in it. On host2 the file should contain "2".
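
For example, assuming dataDir=/var/lib/zookeeper in zoo.cfg (adjust to your
setup), on host1 you would create it with:

echo 1 > /var/lib/zookeeper/myid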

You'll probably have to delete the contents of the zk data directory and
rebuild your collections.



On Thu, Jan 3, 2019 at 2:47 PM Joe Lerner  wrote:

> Hi,
>
> We have a simple architecture: 2 SOLR Cloud servers (on servers #1 and #2),
> and 3 zookeeper instances (on servers #1, #2, and #3). Things work fine
> (although we had a couple of brief unexplained outages), but:
>
> One worrisome thing is that when I status zookeeper on #1 and #2, I get
> Mode=Leader on both--#3 shows follower. This seems to be a pretty permanent
> condition, at least right now as I look at it. And there isn't any big
> maintenance or anything going on.
>
> Also, we are getting *TONS* of continuous log warnings from our client
> applications. From one server it shows this:
>
>
>
> And from another server we get this:
>
>
> These are making our logs impossible to read, but worse, I assume they
> indicate that something is wrong.
>
> Thanks for any help!
>
> Joe Lerner
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


-- 
Scott Stults | Founder & Solutions Architect | OpenSource Connections, LLC
| 434.409.2780
http://www.opensourceconnections.com


So Many Zookeeper Warnings--There Must Be a Problem

2019-01-03 Thread Joe Lerner
Hi,

We have a simple architecture: 2 SOLR Cloud servers (on servers #1 and #2),
and 3 zookeeper instances (on servers #1, #2, and #3). Things work fine
(although we had a couple of brief unexplained outages), but:

One worrisome thing is that when I status zookeeper on #1 and #2, I get
Mode=Leader on both--#3 shows follower. This seems to be a pretty permanent
condition, at least right now as I look at it. And there isn't any big
maintenance or anything going on.

Also, we are getting *TONS* of continuous log warnings from our client
applications. From one server it shows this:



And from another server we get this:


These are making our logs impossible to read, but worse, I assume they
indicate that something is wrong.

Thanks for any help!

Joe Lerner



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: SOLR v7 Security Issues Caused Denial of Use - Sonatype Application Composition Report

2019-01-03 Thread Bob Hathaway
Critical and severe security vulnerabilities against Solr v7.1. Many of
these appear to come from old open-source framework versions.

*9* CVE-2017-7525 com.fasterxml.jackson.core : jackson-databind : 2.5.4
Open

   CVE-2016-131 commons-fileupload : commons-fileupload : 1.3.2 Open

   CVE-2015-1832 org.apache.derby : derby : 10.9.1.0 Open

   CVE-2017-7525 org.codehaus.jackson : jackson-mapper-asl : 1.9.13 Open

   CVE-2017-7657 org.eclipse.jetty : jetty-http : 9.3.20.v20170531 Open

   CVE-2017-7658 org.eclipse.jetty : jetty-http : 9.3.20.v20170531 Open

   CVE-2017-1000190 org.simpleframework : simple-xml : 2.7.1 Open

*7* sonatype-2016-0397 com.fasterxml.jackson.core : jackson-core : 2.5.4
Open

   sonatype-2017-0355 com.fasterxml.jackson.core : jackson-core : 2.5.4
Open

   CVE-2014-0114 commons-beanutils : commons-beanutils : 1.8.3 Open

   CVE-2018-1000632 dom4j : dom4j : 1.6.1 Open

   CVE-2018-8009 org.apache.hadoop : hadoop-common : 2.7.4 Open

   CVE-2017-12626 org.apache.poi : poi : 3.17-beta1 Open

   CVE-2017-12626 org.apache.poi : poi-scratchpad : 3.17-beta1 Open

   CVE-2018-1308 org.apache.solr : solr-dataimporthandler : 7.1.0 Open

   CVE-2016-4434 org.apache.tika : tika-core : 1.16 Open

   CVE-2018-11761 org.apache.tika : tika-core : 1.16 Open

   CVE-2016-1000338 org.bouncycastle : bcprov-jdk15 : 1.45 Open

   CVE-2016-1000343 org.bouncycastle : bcprov-jdk15 : 1.45 Open

   CVE-2018-1000180 org.bouncycastle : bcprov-jdk15 : 1.45 Open

   CVE-2017-7656 org.eclipse.jetty : jetty-http : 9.3.20.v20170531 Open

   CVE-2012-0881 xerces : xercesImpl : 2.9.1 Open

   CVE-2013-4002 xerces : xercesImpl : 2.9.1 Open

On Thu, Jan 3, 2019 at 12:15 PM Bob Hathaway  wrote:

> We want to use SOLR v7 but Sonatype scans past v6.5 show dozens of
> critical and severe security issues and dozens of licensing issues. The
> critical security violations using Sonatype are inline and are indexed with
> codes from the National Vulnerability Database,
>
> Are there recommended steps for running Solr 7 in secure enterprises
> specifically infosec remediation over Sonatype Application Composition
> Reports?
>
> Are there plans to make Solr more secure in v7 or v8?
>
> I'm new to the Solr User forum and suggestions are welcome.
>
>
> Sonatype Application Composition Reports
> Of Solr - 7.6.0, Build Scanned On Thu Jan 03 2019 at 14:49:49
> Using Scanner 1.56.0-01
>
> [image: image.png]
>
> [image: image.png]
>
> [image: image.png]
>
> Security Issues
> Threat Level | Problem Code | Component | Status
> 9 | CVE-2015-1832 | org.apache.derby : derby : 10.9.1.0 | Open
> 9 | CVE-2017-7525 | org.codehaus.jackson : jackson-mapper-asl : 1.9.13 | Open
> 9 | CVE-2017-1000190 | org.simpleframework : simple-xml : 2.7.1 | Open
> 8 | CVE-2018-14718 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
> 8 | CVE-2018-14719 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
> 8 | sonatype-2017-0312 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
> 7 | CVE-2018-14720 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
> 7 | CVE-2018-14721 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
> 7 | CVE-2018-1000632 | dom4j : dom4j : 1.6.1 | Open
> 7 | CVE-2018-8009 | org.apache.hadoop : hadoop-common : 2.7.4 | Open
> 7 | CVE-2012-0881 | xerces : xercesImpl : 2.9.1 | Open
> 7 | CVE-2013-4002 | xerces : xercesImpl : 2.9.1 | Open
>
>
> License Analysis
> License Threat | Component | Status
> MPL-1.1, GPL-2.0+ or LGPL-2.1+ or MPL-1.1 | com.googlecode.juniversalchardet : juniversalchardet : 1.0.3 | Open
> Apache-2.0, AFL-2.1 or GPL-2.0+ | org.ccil.cowan.tagsoup : tagsoup : 1.2.1 | Open
> Not Declared, Not Supported | d3 2.9.6 | Open
> BSD-3-Clause, Adobe | com.adobe.xmp : xmpcore : 5.1.3 | Open
> Apache-2.0, No Source License | com.cybozu.labs : langdetect : 1.1-20120112 | Open
> Apache-2.0, No Source License | com.fasterxml.jackson.core : jackson-annotations : 2.9.6 | Open
> Apache-2.0, No Source License | com.fasterxml.jackson.core : jackson-core : 2.9.6 | Open
> Apache-2.0, No Source License | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
> Apache-2.0, No Source License | com.fasterxml.jackson.dataformat : jackson-dataformat-smile : 2.9.6 | Open
> Apache-2.0, EPL-1.0, MIT | com.googlecode.mp4parser : isoparser : 1.1.22 | Open
> Not Provided, No Source License | com.ibm.icu : icu4j : 62.1 | Open
> Apache-2.0, LGPL-3.0+ | com.pff : java-libpst : 0.8.1 | Open
> Apache-2.0, No Source License | com.rometools : rome-utils : 1.5.1 | Open
> CDDL-1.1 or GPL-2.0-CPE | com.sun.mail : gimap : 1.5.1 | Open
> CDDL-1.1 or GPL-2.0-CPE | com.sun.mail : javax.mail : 1.5.1 | Open
> Not Declared, Apache-1.1, Sun-IP | dom4j : dom4j : 1.6.1 | Open
> MIT, No Source License | info.ganglia.gmetric4j : gmetric4j : 1.0.7 | Open
> Apache-2.0, No Source License | io.dropwizard.metrics : metrics-ganglia : 3.2.6 | Open
> Apache-2.0, No Source License | io.dropwizard.metrics : metrics-graphite : 3.2.6 | Open
> Apache-2.0, No Source License | io.dropwizard.metrics : 

SOLR v7 Security Issues Caused Denial of Use - Sonatype Application Composition Report

2019-01-03 Thread Bob Hathaway
We want to use SOLR v7 but Sonatype scans past v6.5 show dozens of critical
and severe security issues and dozens of licensing issues. The critical
security violations using Sonatype are inline and are indexed with codes
from the National Vulnerability Database,

Are there recommended steps for running Solr 7 in secure enterprises
specifically infosec remediation over Sonatype Application Composition
Reports?

Are there plans to make Solr more secure in v7 or v8?

I'm new to the Solr User forum and suggestions are welcome.


Sonatype Application Composition Reports
Of Solr - 7.6.0, Build Scanned On Thu Jan 03 2019 at 14:49:49
Using Scanner 1.56.0-01

[image: image.png]

[image: image.png]

[image: image.png]

Security Issues
Threat Level | Problem Code | Component | Status
9 | CVE-2015-1832 | org.apache.derby : derby : 10.9.1.0 | Open
9 | CVE-2017-7525 | org.codehaus.jackson : jackson-mapper-asl : 1.9.13 | Open
9 | CVE-2017-1000190 | org.simpleframework : simple-xml : 2.7.1 | Open
8 | CVE-2018-14718 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
8 | CVE-2018-14719 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
8 | sonatype-2017-0312 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
7 | CVE-2018-14720 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
7 | CVE-2018-14721 | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
7 | CVE-2018-1000632 | dom4j : dom4j : 1.6.1 | Open
7 | CVE-2018-8009 | org.apache.hadoop : hadoop-common : 2.7.4 | Open
7 | CVE-2012-0881 | xerces : xercesImpl : 2.9.1 | Open
7 | CVE-2013-4002 | xerces : xercesImpl : 2.9.1 | Open


License Analysis
License Threat | Component | Status
MPL-1.1, GPL-2.0+ or LGPL-2.1+ or MPL-1.1 | com.googlecode.juniversalchardet : juniversalchardet : 1.0.3 | Open
Apache-2.0, AFL-2.1 or GPL-2.0+ | org.ccil.cowan.tagsoup : tagsoup : 1.2.1 | Open
Not Declared, Not Supported | d3 2.9.6 | Open
BSD-3-Clause, Adobe | com.adobe.xmp : xmpcore : 5.1.3 | Open
Apache-2.0, No Source License | com.cybozu.labs : langdetect : 1.1-20120112 | Open
Apache-2.0, No Source License | com.fasterxml.jackson.core : jackson-annotations : 2.9.6 | Open
Apache-2.0, No Source License | com.fasterxml.jackson.core : jackson-core : 2.9.6 | Open
Apache-2.0, No Source License | com.fasterxml.jackson.core : jackson-databind : 2.9.6 | Open
Apache-2.0, No Source License | com.fasterxml.jackson.dataformat : jackson-dataformat-smile : 2.9.6 | Open
Apache-2.0, EPL-1.0, MIT | com.googlecode.mp4parser : isoparser : 1.1.22 | Open
Not Provided, No Source License | com.ibm.icu : icu4j : 62.1 | Open
Apache-2.0, LGPL-3.0+ | com.pff : java-libpst : 0.8.1 | Open
Apache-2.0, No Source License | com.rometools : rome-utils : 1.5.1 | Open
CDDL-1.1 or GPL-2.0-CPE | com.sun.mail : gimap : 1.5.1 | Open
CDDL-1.1 or GPL-2.0-CPE | com.sun.mail : javax.mail : 1.5.1 | Open
Not Declared, Apache-1.1, Sun-IP | dom4j : dom4j : 1.6.1 | Open
MIT, No Source License | info.ganglia.gmetric4j : gmetric4j : 1.0.7 | Open
Apache-2.0, No Source License | io.dropwizard.metrics : metrics-ganglia : 3.2.6 | Open
Apache-2.0, No Source License | io.dropwizard.metrics : metrics-graphite : 3.2.6 | Open
Apache-2.0, No Source License | io.dropwizard.metrics : metrics-jetty9 : 3.2.6 | Open
Apache-2.0, No Source License | io.dropwizard.metrics : metrics-jvm : 3.2.6 | Open
Apache-2.0, No Source License | io.prometheus : simpleclient_common : 0.2.0 | Open
Apache-2.0, No Source License | io.prometheus : simpleclient_httpserver : 0.2.0 | Open
CDDL-1.0, CDDL-1.1 or GPL-2.0-CPE | javax.activation : activation : 1.1.1 | Open
CDDL-1.0 or GPL-2.0-CPE, Apache-2.0, CDDL-1.1 or GPL-2.0-CPE | javax.servlet


AutoScaling Solr on AWS

2019-01-03 Thread Aaron Cline
Solr Version 7.3.1
Java Version 1.8.0_151

I'm trying to get solrcloud to autoscale when a new node is added to the
cluster and balance the existing replicas across the new node accordingly.
I'm running into some kind of odd error during the compute_plan action.
I'm hoping someone here will point me in the right direction.  Please let
me know if I need to provide more information.

Here is the log of the error from the solr leader:

2019-01-03 17:23:10.268 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node19
2019-01-03 17:23:10.276 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node9
2019-01-03 17:23:10.283 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node13
2019-01-03 17:23:10.292 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node3
2019-01-03 17:23:10.301 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node7
2019-01-03 17:23:10.309 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-customers1=ip-10-228-7-27.local:8983_solr=true=core_node17
2019-01-03 17:23:10.318 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node9
2019-01-03 17:23:10.329 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node11
2019-01-03 17:23:10.337 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node3
2019-01-03 17:23:10.345 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node13
2019-01-03 17:23:10.353 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node17
2019-01-03 17:23:10.360 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-orders0=ip-10-228-7-27.local:8983_solr=true=core_node13
2019-01-03 17:23:10.367 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-orders0=ip-10-228-7-27.local:8983_solr=true=core_node20
2019-01-03 17:23:10.375 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node20
2019-01-03 17:23:10.382 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-fulfillment-orders1=ip-10-228-7-27.local:8983_solr=true=core_node5
2019-01-03 17:23:10.389 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-orders0=ip-10-228-7-27.local:8983_solr=true=core_node5
2019-01-03 17:23:10.396 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-customers0=ip-10-228-7-27.local:8983_solr=true=core_node9
2019-01-03 17:23:10.403 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[   ] o.a.s.c.a.ComputePlanAction Computed Plan:
action=MOVEREPLICA=blc-gsr-content0=ip-10-228-7-27.local:8983_solr=true=core_node17
2019-01-03 17:23:10.411 INFO
(AutoscalingActionExecutor-7-thread-1-processing-n:ip-10-228-12-123.local:8983_solr)
[ 

Accessing multiValued field from within custom function

2019-01-03 Thread Dariusz Wojtas
Hi,

I am using SOLR 7.5 in the cloud mode.
I want to create a custom function similar to 'strdist' that works on
multivalued fields (multiValued=true) and finds the highest matching score.
Yes, I know about the potential performance issues, but in my use case this
would bring a huge benefit.

There is not much information on how to work with multiValued fields, but I
have found a piece of code that might be useful. It's where Solr's standard
functions are registered:
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/ValueSourceParser.java

The interesting part for me starts at line 424, where the 'field' function
is registered.
It optionally accepts a multiValued field for min/max calculation.
If the 2nd argument is 'min' or 'max', it tries to resolve the field as a
SchemaField:
  SchemaField f = fp.getReq().getSchema().getField(fieldName);

Now the questions are:
1. Is this the path I should follow? If not - are there any other ways?
2. How to retrieve all the actual *String* or *Text* values from a
multiValued field, not just a single value? Some kind of array or set of
values. How?
3. Does cloud mode change anything here? In my case the whole index is on a
single machine, but there are several replicas.
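
On question 2, here is the direction I am considering: a sketch only, under
assumptions (the field must have docValues="true" in the schema, Lucene's
LevenshteinDistance stands in for whatever scoring is needed, and the class
name is made up). It reads every value of the multiValued field per document
through SortedSetDocValues:

import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.docvalues.DoubleDocValues;
import org.apache.lucene.search.spell.LevenshteinDistance;

public class MaxStrDistValueSource extends ValueSource {
  private final String fieldName;
  private final String target;

  public MaxStrDistValueSource(String fieldName, String target) {
    this.fieldName = fieldName;
    this.target = target;
  }

  @Override
  public FunctionValues getValues(Map context, LeafReaderContext readerContext)
      throws IOException {
    // One SortedSetDocValues iterator per segment; documents must be asked
    // for in increasing order, which is how function queries visit them.
    final SortedSetDocValues dv =
        DocValues.getSortedSet(readerContext.reader(), fieldName);
    final LevenshteinDistance dist = new LevenshteinDistance();
    return new DoubleDocValues(this) {
      @Override
      public double doubleVal(int doc) throws IOException {
        if (!dv.advanceExact(doc)) {
          return 0d; // this document has no values for the field
        }
        double best = 0d;
        // Iterate over ALL values of the multiValued field for this doc.
        for (long ord = dv.nextOrd();
             ord != SortedSetDocValues.NO_MORE_ORDS;
             ord = dv.nextOrd()) {
          String value = dv.lookupOrd(ord).utf8ToString();
          best = Math.max(best, dist.getDistance(value, target));
        }
        return best;
      }
    };
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof MaxStrDistValueSource
        && fieldName.equals(((MaxStrDistValueSource) o).fieldName)
        && target.equals(((MaxStrDistValueSource) o).target);
  }

  @Override
  public int hashCode() {
    return 31 * fieldName.hashCode() + target.hashCode();
  }

  @Override
  public String description() {
    return "maxstrdist(" + fieldName + ")";
  }
}

A ValueSourceParser subclass whose parse() makes two parseArg() calls and
returns this ValueSource, registered in solrconfig.xml with
<valueSourceParser name="maxstrdist" class="..."/>, would expose it as
maxstrdist(field,'target'). On question 3, my understanding is that function
queries execute locally on each replica, so with the whole index on one shard
the cloud setup should not change the behavior.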

Best regards,
Dariusz Wojtas


Re: Question about Solr concept

2019-01-03 Thread Alexandre Rafalovitch
I believe the answer is yes, but the specifics depend on whether you mean
online or offline index creation (as in, when does the content appear)
and also why you want to do so.

Couple of ideas:
1) If you just want to make sure all updates are visible at once, you
can control that with commit strategies even in the same collection:
https://lucene.apache.org/solr/guide/7_6/updatehandlers-in-solrconfig.html#commits
2) If you are doing full re-indexing, you can do that on a separate
(identical) instance and bring it over to the active instance via a core
swap and/or aliases (see the sketch after this list):
https://lucene.apache.org/solr/guide/7_6/coreadmin-api.html#coreadmin-api
(for non SolrCloud),
https://lucene.apache.org/solr/guide/7_6/collections-api.html#createalias
(for Cloud)
3) If you are looking at primary/read-only secondary options, latest
Solr has new replication strategies in SolrCloud mode:
https://lucene.apache.org/solr/guide/7_6/shards-and-indexing-data-in-solrcloud.html
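
Regarding option 2, a minimal sketch of the alias flip (collection and alias
names are placeholders): index into a fresh collection, then atomically
repoint the alias so queries switch over in one step:

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=products&collections=products_v2

Once the alias points at products_v2, the old collection can be dropped.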

Regards,
   Alex.

On Thu, 3 Jan 2019 at 09:35, KrishnaKumar Satyamurthy
 wrote:
>
> Hi Solr Community Help,
>
> We are new to Solr and have a basic question about how Solr functions.
> Is it possible to configure Solr to perform searching only, not indexing,
> by reading the indexes created by a second Solr instance?
>
> We really appreciate your kind response in this matter
>
> Thanks,
> Krishna


Question about Solr concept

2019-01-03 Thread KrishnaKumar Satyamurthy
Hi Solr Community Help,

We are new to Solr and have a basic question about how Solr functions.
Is it possible to configure Solr to perform searching only, not indexing,
by reading the indexes created by a second Solr instance?

We really appreciate your kind response in this matter

Thanks,
Krishna


Solr 7.6.0 and Java 11: ClassCastException: class java.lang.Integer cannot be cast to class java.lang.String (java.lang.Integer and java.lang.String are in module java.base of loader 'bootstrap')

2019-01-03 Thread Paul Smith Parker
Hello,

I am going nuts with an issue I noticed since upgrading to Java 11.

What I am using:
Java 11
Spring Boot 2.1.1 with spring-boot-starter-data-solr (amongst spring-data-jpa 
etc)
solr-solrj 7.6.0

What I am doing:

SolrCrudRepository.saveAll(documents)

What I am getting:

2019-01-03 13:47:54.830 ERROR (qtp735937428-22) [   x:documents] 
o.a.s.h.RequestHandlerBase java.lang.ClassCastException: class 
java.lang.Integer cannot be cast to class java.lang.String (java.lang.Integer 
and java.lang.String are in module java.base of loader 'bootstrap')
at 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:601)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:315)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:781)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:319)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:182)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)

What I noticed:
The error seems to be in JavaBinCodec.readSolrInputDocument(), at fieldName = 
(String) obj;
Initially I thought it was a problem with one of my documents, so I iterated 
over the collection, saving each document one by one with 
SolrCrudRepository.save(document): in this case I don't get the 
ClassCastException.
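
For now I am working around it by saving one document at a time (a sketch;
the repository and document types are from my project, so treat the names as
placeholders):

// Saving per document instead of batching avoids the failing code path.
for (MyDocument doc : documents) {
    repository.save(doc);
}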

Am I doing anything wrong? Or is it perhaps a bug?

Any help is very much appreciated!

Kind regards,
Paul