Re: Is there a way to retrieve a term's position/offset in Solr

2017-04-07 Thread forest_soup
Thanks, Rick. Unfortunately we don't have that converter, so we have to count
characters in the rich text.





Re: Is there a way to retrieve a term's position/offset in Solr

2017-03-30 Thread forest_soup
Unfortunately the rich text is not HTML/XML/DOC/PDF or any other popular
rich text format, and we would like to show the highlighted text in the
document's own viewer. That's why I'm so eager to get the offsets.

The /tvrh handler (term vector component) with tv.offsets/tv.positions can
give us that info, but it returns data for all terms instead of only the
searched ones, so we are still looking for a way to filter the results.
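
For illustration, here is a minimal SolrJ sketch of that /tvrh request (a
sketch only, assuming Solr/SolrJ 5.x; the ZooKeeper address, collection name,
and the term-vector-enabled "body" field are placeholders). Solr still returns
every term of the field, so the filtering down to the searched terms has to
happen on the client side:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class TermVectorOffsets {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181/solr")) {
      client.setDefaultCollection("collection1");
      SolrQuery q = new SolrQuery("body:keyword");
      q.setRequestHandler("/tvrh");       // term vector request handler
      q.set("tv", true);
      q.set("tv.fl", "body");             // only return vectors for the body field
      q.set("tv.positions", true);
      q.set("tv.offsets", true);
      QueryResponse rsp = client.query(q);
      // The term vectors come back as a NamedList keyed by each doc's uniqueKey;
      // it contains ALL terms of the field, so keep only the searched terms here.
      NamedList<?> termVectors = (NamedList<?>) rsp.getResponse().get("termVectors");
      System.out.println(termVectors);
    }
  }
}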

Any ideas? 

Thanks!





Re: Is there a way to retrieve a term's position/offset in Solr

2017-03-28 Thread forest_soup
Thanks All!

Actually we are going to show the highlighted words in a rich text format
rather than in the plain text that was indexed, so hl.fragsize=0 does not seem
to work for me.

As for the patch (SOLR-4722), I haven't tried it yet. I hope it can return the
position/offset info.

Thanks!





Re: Is there a way to retrieve a term's position/offset in Solr

2017-03-28 Thread forest_soup
Thanks, Eric.

Actually Solr's highlighting function does not meet my requirement. My
requirement is not to show the highlighted words in snippets, but to show them
in the whole opened document. So I would like to get the term's
position/offset info from Solr. I went through the highlighting feature, but
found that this exact info (position/offset) is not returned.
If you know how to get that info from the highlighting feature, could you
please point it out to me?

The most promising way seems to be /tvrh with the tv.offsets/tv.positions
parameters, but I haven't tried it yet. Any comments on that one?

Thanks!





Is there a way to retrieve a term's position/offset in Solr

2017-03-27 Thread forest_soup
We are going to implement a feature:
When opening a document whose body field is already indexed in Solr, if we
issued a keyword search before opening the doc, highlight the keyword in the
opened document.

That requires the position/offset info of the keyword in the doc's index,
which I think can be indexed or stored in Solr in some way. We are looking for
ways to retrieve it through any Solr API.

Thanks!





Re: regex-urlfilter help

2016-12-18 Thread forest_soup
Yeah, I'm also curious why this thread is being used to discuss that topic.
I'll start a new thread for my questions.





Re: Very long young generation stop the world GC pause

2016-12-18 Thread forest_soup
Sorry, I misremembered. The swap is 16GB.





Re: Very long young generation stop the world GC pause

2016-12-18 Thread forest_soup
Thanks a lot, PushKar! And sorry for the late response.
Our OS RAM is 128GB, and we have 2 Solr nodes on one machine. Each Solr node
has a max heap size of 32GB.
And we do not have swap.






Re: Solr has a CPU% spike when indexing a batch of data

2016-12-15 Thread forest_soup
Thanks a lot, Shawn.

We'll consider your suggestion for tuning our Solr servers and will let you
know the result.

Thanks!





Re: Solr has a CPU% spike when indexing a batch of data

2016-12-14 Thread forest_soup
Thanks, Shawn!

We are doing the indexing against the same HTTP endpoint. But as we have
numShards=1 and replicationFactor=1, each collection has only one core, so
there should be no distributed update/query; we are using SolrJ's
CloudSolrClient, which resolves the target URL of the Solr node when sending
requests to each collection.
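
For illustration, a minimal SolrJ sketch of the indexing path described above
(a sketch only, assuming SolrJ 5.x; the ZooKeeper address, collection name,
and field names are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexOneDoc {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181/solr")) {
      // One shard, replicationFactor=1: the client routes the update straight
      // to the single core hosting this collection, so no distributed fan-out.
      client.setDefaultCollection("collection1");
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      doc.addField("body", "some text");
      client.add(doc);
      client.commit();
    }
  }
}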

For the questions:
* What is the total physical memory in the machine? 
128GB

* What is the max heap on each of the two Solr processes? 
32GB for each 

* What is the total index size in each Solr process?
Each Solr node (process) hosts 16 cores, with about 130GB per core, so more
than 2000GB in total per Solr node.
* What is the total tlog size in each Solr process? 
About 25MB for each core, so roughly 400MB in total per Solr node.


<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
  <int name="numRecordsToKeep">1</int>
  <int name="maxNumLogsToKeep">100</int>
</updateLog>


* What are your commit characteristics like -- both manual and automatic. 


<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>${solr.autoCommit.maxTime:59000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxDocs>5000</maxDocs>
  <maxTime>${solr.autoSoftCommit.maxTime:31000}</maxTime>
</autoSoftCommit>



* Do you have WARN or ERROR messages in your logfile? 
No.

* How many collections are in each cloud? 
80 collections with only one shard each, and replicationFactor=1.

* How many servers are in each cloud? 
5 Solr nodes, so each Solr node hosts 16 cores.






Solr has a CPU% spike when indexing a batch of data

2016-12-13 Thread forest_soup
Hi, 

I posted this issue to a JIRA. Could anyone help comment? Thanks!

https://issues.apache.org/jira/browse/SOLR-9741

The details:

When we do a batch of index and search operations against SolrCloud v5.3.2, we
usually see a CPU% spike lasting about 10 minutes.
We have 5 physical servers, with 2 Solr instances running on each server on
different ports (8983 and 8984); all the 8983 instances are in one SolrCloud
and all the 8984 instances are in another SolrCloud.

You can see the chart in the attached file screenshot-1.png.

The thread dumps are in the attached file threads.zip.

During the spike, the thread dump shows that most of the threads have the
call stacks below:
"qtp634210724-4759" #4759 prio=5 os_prio=0 tid=0x7fb32803e000 nid=0x64e7
runnable [0x7fb3ef1ef000]
java.lang.Thread.State: RUNNABLE
at
java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:444)
at java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
at java.lang.ThreadLocal.get(ThreadLocal.java:163)
at
org.apache.solr.search.SolrQueryTimeoutImpl.get(SolrQueryTimeoutImpl.java:49)
at
org.apache.solr.search.SolrQueryTimeoutImpl.shouldExit(SolrQueryTimeoutImpl.java:57)
at
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.checkAndThrow(ExitableDirectoryReader.java:165)
at
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.<init>(ExitableDirectoryReader.java:157)
at
org.apache.lucene.index.ExitableDirectoryReader$ExitableTerms.iterator(ExitableDirectoryReader.java:141)
at org.apache.lucene.index.TermContext.build(TermContext.java:93)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:855)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:56)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:203)
at
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:855)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:56)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:203)
at
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:855)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:56)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:203)
at
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:855)
at
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:838)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:486)
at org.apache.solr.search.Grouping.searchWithTimeLimiter(Grouping.java:456)
at org.apache.solr.search.Grouping.execute(Grouping.java:370)
at
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:496)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:277)





Solr cannot provide index service after a large GC pause but core state in ZK is still active

2016-12-08 Thread forest_soup
Hi Erick, Mark and Varun,

I'll use this mail thread to track the issue in
https://issues.apache.org/jira/browse/SOLR-9829 .

@Erick, for your question:
I'm sure the Solr node is still in the live_nodes list.
The logs are from the Solr log, and the root cause I can see here is that the
IndexWriter is closed.

@Mark and Varun, are you sure this issue is a duplicate of
https://issues.apache.org/jira/browse/SOLR-7956 ?
If so, I'll try to backport the fix to 5.3.2.
I also see that Daisy created a similar JIRA:
https://issues.apache.org/jira/browse/SOLR-9830 . Although her root cause is
too many open files, could you confirm whether it is also a duplicate of
SOLR-7956?

Thanks!





Re: Very long young generation stop the world GC pause

2016-12-08 Thread forest_soup
Besides, will those JVM options make it better? 
-XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=10 





Re: Very long young generation stop the world GC pause

2016-12-08 Thread forest_soup
As you can see in the GC log, the long GC pause is not a full GC; it's a
young-generation GC instead.
In our case, full GCs are fast while young GCs get some long stop-the-world
pauses. Do you have any comments on that? We usually assume a full GC may
cause a longer pause, while young-generation GCs should be fine.

2016-11-22T20:43:16.463+: 2942054.509: Total time for which application
threads were stopped: 0.0029195 seconds, Stopping threads took: 0.804
seconds
{Heap before GC invocations=2246 (full 0):
 garbage-first heap   total 26673152K, used 4683965K [0x7f0c1000,
0x7f0c108065c0, 0x7f141000)
  region size 8192K, 162 young (1327104K), 17 survivors (139264K)
 Metaspace       used 56487K, capacity 57092K, committed 58368K, reserved
59392K
2016-11-22T20:43:16.555+: 2942054.602: [GC pause (G1 Evacuation Pause)
(young)
Desired survivor size 88080384 bytes, new threshold 15 (max 15)
- age   1:   28176280 bytes,   28176280 total
- age   2:5632480 bytes,   33808760 total
- age   3:9719072 bytes,   43527832 total
- age   4:6219408 bytes,   49747240 total
- age   5:4465544 bytes,   54212784 total
- age   6:3417168 bytes,   57629952 total
- age   7:5343072 bytes,   62973024 total
- age   8:2784808 bytes,   65757832 total
- age   9:6538056 bytes,   72295888 total
- age  10:6368016 bytes,   78663904 total
- age  11: 695216 bytes,   79359120 total
, 97.2044320 secs]
   [Parallel Time: 19.8 ms, GC Workers: 18]
  [GC Worker Start (ms): Min: 2942054602.1, Avg: 2942054604.6, Max:
2942054612.7, Diff: 10.6]
  [Ext Root Scanning (ms): Min: 0.0, Avg: 2.4, Max: 6.7, Diff: 6.7, Sum:
43.5]
  [Update RS (ms): Min: 0.0, Avg: 3.0, Max: 15.9, Diff: 15.9, Sum: 54.0]
 [Processed Buffers: Min: 0, Avg: 10.7, Max: 39, Diff: 39, Sum: 192]
  [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.6]
  [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0,
Sum: 0.0]
  [Object Copy (ms): Min: 0.1, Avg: 9.2, Max: 13.4, Diff: 13.3, Sum:
165.9]
  [Termination (ms): Min: 0.0, Avg: 2.5, Max: 2.7, Diff: 2.7, Sum: 44.1]
 [Termination Attempts: Min: 1, Avg: 1.5, Max: 3, Diff: 2, Sum: 27]
  [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum:
0.6]
  [GC Worker Total (ms): Min: 9.0, Avg: 17.1, Max: 19.7, Diff: 10.6,
Sum: 308.7]
  [GC Worker End (ms): Min: 2942054621.8, Avg: 2942054621.8, Max:
2942054621.8, Diff: 0.0]
   [Code Root Fixup: 0.1 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.2 ms]
   [Other: 97184.3 ms]
  [Choose CSet: 0.0 ms]
  [Ref Proc: 8.5 ms]
  [Ref Enq: 0.2 ms]
  [Redirty Cards: 0.2 ms]
  [Humongous Register: 0.1 ms]
  [Humongous Reclaim: 0.1 ms]
  [Free CSet: 0.4 ms]
   [Eden: 1160.0M(1160.0M)->0.0B(1200.0M) Survivors: 136.0M->168.0M Heap:
4574.2M(25.4G)->3450.8M(26.8G)]
Heap after GC invocations=2247 (full 0):
 garbage-first heap   total 28049408K, used 3533601K [0x7f0c1000,
0x7f0c10806b00, 0x7f141000)
  region size 8192K, 21 young (172032K), 21 survivors (172032K)
 Metaspace       used 56487K, capacity 57092K, committed 58368K, reserved
59392K
}
 [Times: user=0.00 sys=94.28, real=97.19 secs]
2016-11-22T20:44:53.760+: 2942151.806: Total time for which application
threads were stopped: 97.2053747 seconds, Stopping threads took: 0.0001373
seconds





Very long young generation stop the world GC pause

2016-12-07 Thread forest_soup
Hi Shawn, 

Thanks a lot for your response! 

I'll use this mail thread to track the issue in JIRA:
https://issues.apache.org/jira/browse/SOLR-9828 .





All solr cores in a solr server are down. Cannot find anything from log.

2016-09-12 Thread forest_soup
We have a 3-node SolrCloud. Each Solr collection has only 1 shard and 1
replica.
When we restarted the 3 Solr nodes, we found all the cores on one Solr node
were in the down state and never changed to any other state. That Solr node is
still shown under /live_nodes in ZooKeeper.

After restarting all the ZooKeeper servers and Solr nodes, the issue was
resolved.

We did not see any clue in solr.log:
2016-09-06 19:23:16.474 WARN  (main) [   ] o.e.j.s.h.RequestLogHandler
!RequestLog
2016-09-06 19:23:17.418 WARN  (main) [   ] o.e.j.s.SecurityHandler
ServletContext@o.e.j.w.WebAppContext@26837057{/solr,file:/opt/ibm/solrsearch/SolrG2Cld101/solr/server/solr-webapp/webapp/,STARTING}{/opt/ibm/solrsearch/SolrG2Cld101/solr/server/solr-webapp/webapp}
has uncovered http methods for path: /
2016-09-06 19:23:18.567 WARN  (main) [   ] o.a.s.c.SolrResourceLoader Can't
find (or read) directory to add to classloader: lib (resolved as:
/mnt/solrdata1/solr/home/lib).






Downgraded Raid5 cause endless recovery and hang.

2016-07-24 Thread forest_soup
We have a 5-node SolrCloud. When a Solr node's disk had an issue and the
RAID5 array became degraded, a recovery on that node was triggered. But then
a hang happened, and the node disappeared from the live_nodes list.

Could anyone comment on why this happens? Thanks!

The only meaningful call stacks are:
"zkCallback-4-thread-50-processing-n:sgdsolar17.swg.usma.ibm.com:8983_solr-EventThread"
#7791 daemon prio=5 os_prio=0 tid=0x7f7e26467800 nid=0x4df7 waiting on
condition [0x7f7e01adf000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x7f8315800070> (a
java.util.concurrent.FutureTask)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at
org.apache.solr.update.DefaultSolrCoreState.cancelRecovery(DefaultSolrCoreState.java:349)
- locked <0x7f7fd0cefd28> (a java.lang.Object)
at
org.apache.solr.core.CoreContainer.cancelCoreRecoveries(CoreContainer.java:617)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:295)
at
org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:158)
at
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:56)
at
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:132)
at
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

"updateExecutor-2-thread-620-processing-n:sgdsolar17.swg.usma.ibm.com:8983_solr
x:collection36_shard1_replica2 s:shard1 c:collection36 r:core_node1" #7779
prio=5 os_prio=0 tid=0x7f7e8827e000 nid=0x4dea waiting on condition
[0x7f7ed0f9f000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x7f7fd562e860> (a
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
at org.apache.solr.update.VersionInfo.blockUpdates(VersionInfo.java:118)
at
org.apache.solr.update.UpdateLog.dropBufferedUpdates(UpdateLog.java:1140)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:467)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)






Re: Is there any JIRA changed the stored order of multivalued field?

2016-04-19 Thread forest_soup
Thanks! That's very helpful!





Is there any detailed condition on which the snapshot pull recovery will occur?

2016-04-19 Thread forest_soup
We have a SolrCloud with Solr v5.3.2.
collection1 contains 1 shard with 2 replicas, on Solr nodes solr1 and solr2
respectively.
In solrconfig.xml there is the following updateLog config, which has been
uploaded to ZK and is effective:

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
  <int name="numRecordsToKeep">1000</int>
  <int name="maxNumLogsToKeep">100</int>
</updateLog>


We know that with these settings, if solr1 is down while solr2 is active and
solr2 receives more than 1000 updates, then after solr1 is restarted the
recovery of the replica on solr1 will be a snapshot pull.

But we noticed a case with the steps below:
1, At first, solr1 and solr2 are active and both replicas have lots of data;
2, solr2 is shut down;
3, solr1 receives fewer than 1000 updates;
4, solr1 is shut down;
5, the replica's data dir on solr2 goes missing due to a bad device or
mis-deletion;
6, solr2 is started up;
7, solr2 receives about 2 or 3 updates;
8, solr1 is started up;
9, we noticed that both replicas, on solr1 and solr2, have only the data from
the 2 or 3 updates in step #7.
Lots of data was lost!

It seems the recovery on solr1 was a snapshot pull from solr2.
Our questions:
1, Is there any explanation for this case?
2, Are there any detailed conditions under which snapshot-pull recovery will
occur?

Thanks!





Is there any JIRA changed the stored order of multivalued field?

2016-03-19 Thread forest_soup
We have a field named "attachmentnames".

We POST data to Solr v4.7 and Solr v5.3.2 respectively. The attachmentnames
values are in the sequence 789, 456, 123:

{
"add": {
"overwrite": true,
"doc": {
"id":"1",
"subject":"111",
  "owner":"1",
  "sequence":"1",
  "unid":"1",
  "customerid":"1",
  "servername":"1",
  "noteid":"1",
/*  "attachmentnames":"789",
  "attachmentnames":"456",
  "attachmentnames":"123"*/
}
}
}

And we GET the data from Solr v4.7 and Solr v5.3.2 respectively:
http://host:port/solr/collection1/select?q=id:1&wt=json&indent=true

solr v4.7 response:
{
  "responseHeader":{
"status":0,
"QTime":160,
"params":{
  "indent":"true",
  "q":"id:1",
  "wt":"json"}},
  "response":{"numFound":1,"start":0,"maxScore":0.30685282,"docs":[
  {
"id":"1",
"subject":"111",
"owner":"1",
"sequence":1,
"unid":"1",
"customerid":"1",
"servername":"1",
"noteid":"1",
/*"attachmentnames":["123",
  "456",
  "789"],*/
"_version_":1529020749012008960}]
  }}

solr v5.3.2 response:
{
  "responseHeader":{
"status":0,
"QTime":37,
"params":{
  "q":"id:1",
  "indent":"true",
  "wt":"json"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"1",
"subject_store":"111",
"owner":"1",
"sequence":1,
"unid":"1",
"customerid":"1",
"servername":"1",
"noteid":"1",
/*"attachmentnames":["789",
  "456",
  "123"]*/,
"_version_":1529046229145616384}]
  }}

Is there any JIRA fix that changed this order? Thanks!





Re: There is no jetty thread pool stats in solr JMX

2016-03-14 Thread forest_soup
I have read the articles below, but I cannot find jetty.home/start.ini in the
solr/server folder, and there is no etc/jetty-jmx.xml config file.

http://www.eclipse.org/jetty/documentation/current/jmx-chapter.html
http://wiki.apache.org/solr/SolrJmx





There is no jetty thread pool stats in solr JMX

2016-03-14 Thread forest_soup
I'm using Solr v8.5.1 in SolrCloud mode, enabled <jmx/> in solrconfig.xml,
and added the variables below to solr.in.sh to enable JMX.

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=18983
-Dcom.sun.management.jmxremote.rmi.port=18983 

Now I can access the MBeans via JMX on port 18983, but I do not find any
Jetty thread pool MBeans matching:
org.eclipse.jetty.util.thread:type=queuedthreadpool,id=*

Is this disabled by Solr? What configuration do I need so that the Jetty
thread pool MBeans show up in JMX alongside the Solr MBeans?
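
For illustration, a minimal Java sketch (a sketch only; the host name is a
placeholder) that connects to the JMX port configured above and lists whatever
Jetty thread pool MBeans are registered; an empty result would confirm that
none are exposed:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListJettyThreadPoolMBeans {
  public static void main(String[] args) throws Exception {
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://solrhost:18983/jmxrmi");
    JMXConnector connector = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection conn = connector.getMBeanServerConnection();
      // Query every MBean in the Jetty thread pool domain.
      Set<ObjectName> names =
          conn.queryNames(new ObjectName("org.eclipse.jetty.util.thread:*"), null);
      System.out.println("Matching MBeans: " + names);
    } finally {
      connector.close();
    }
  }
}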

Thanks!





SolrCloud: collection creation: There are duplicate coreNodeName in core.properties in a same collection.

2015-08-27 Thread forest_soup
https://issues.apache.org/jira/browse/SOLR-7982

We have a SolrCloud with 3 ZooKeeper servers and 5 Solr servers.
We created collection1 and collection2, each with 80 shards, in the cloud;
the replicationFactor is 2.
But after creation, we found that within the same collection, the coreNodeName
is duplicated across core.properties files in the core folders. For example:
[tanglin@solr64 home]$ ll collection1_shard13_replica2/core.properties
-rw-r--r-- 1 solr solr 173 Jul 29 11:52
collection1_shard13_replica2/core.properties
[tanglin@solr64 home]$ ll collection1_shard66_replica1/core.properties
-rw-r--r-- 1 solr solr 173 Jul 29 11:52
collection1_shard66_replica1/core.properties
[tanglin@solr64 home]$ cat collection1_shard66_replica1/core.properties
#Written by CorePropertiesLocator
#Wed Jul 29 11:52:54 UTC 2015
numShards=80
name=collection1_shard66_replica1
shard=shard66
collection=collection1
coreNodeName=core_node19
[tanglin@solr64 home]$ cat collection1_shard13_replica2/core.properties
#Written by CorePropertiesLocator
#Wed Jul 29 11:52:53 UTC 2015
numShards=80
name=collection1_shard13_replica2
shard=shard13
collection=collection1
coreNodeName=core_node19
[tanglin@solr64 home]$
The consequence of the issue is that clusterstate.json in ZooKeeper also has
the wrong core_node numbers, and updating the state of one core sometimes
changes the state of another core in another shard.
Snippet from clusterstate:
"shard13":{
  "range":"a666-a998",
  "state":"active",
  "replicas":{
    "core_node33":{
      "state":"active",
      "base_url":"https://solr65.somesite.com:8443/solr",
      "core":"collection1_shard13_replica1",
      "node_name":"solr65.somesite.com:8443_solr"},
    "core_node19":{
      "state":"active",
      "base_url":"https://solr64.somesite.com:8443/solr",
      "core":"collection1_shard13_replica2",
      "node_name":"solr64.somesite.com:8443_solr",
      "leader":"true"}}},
...
"shard66":{
  "range":"5000-5332",
  "state":"active",
  "replicas":{
    "core_node105":{
      "state":"active",
      "base_url":"https://solr63.somesite.com:8443/solr",
      "core":"collection1_shard66_replica2",
      "node_name":"solr63.somesite.com:8443_solr",
      "leader":"true"},
    "core_node19":{
      "state":"active",
      "base_url":"https://solr64.somesite.com:8443/solr",
      "core":"collection1_shard66_replica1",
      "node_name":"solr64.somesite.com:8443_solr"}}},





SolrCloud: /live_nodes in ZK shows the server is there, but all cores are down in /clusterstate.json.

2015-08-19 Thread forest_soup
Opened a JIRA - https://issues.apache.org/jira/browse/SOLR-7947

A SolrCloud with 2 Solr nodes running in Tomcat on 2 VM servers. After
restarting one Solr node, the cores on it turn to the down state, and the logs
show the errors below.

Logs are in the attachment solr.zip:
http://lucene.472066.n3.nabble.com/file/n4224104/solr.zip

ERROR - 2015-07-24 09:40:34.887; org.apache.solr.common.SolrException;
null:org.apache.solr.common.SolrException: Unable to create core:
collection1_shard1_replica1
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:989)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:606)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:258)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:250)
at java.util.concurrent.FutureTask.run(FutureTask.java:273)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
at java.util.concurrent.FutureTask.run(FutureTask.java:273)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.solr.common.SolrException
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:844)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:630)
at org.apache.solr.core.ZkContainer.createFromZk(ZkContainer.java:244)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:595)
... 8 more
Caused by: java.nio.channels.OverlappingFileLockException
at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:267)
at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:164)
at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1078)
at java.nio.channels.FileChannel.tryLock(FileChannel.java:1165)
at org.apache.lucene.store.NativeFSLock.obtain(NativeFSLockFactory.java:217)
at
org.apache.lucene.store.NativeFSLock.isLocked(NativeFSLockFactory.java:319)
at org.apache.lucene.index.IndexWriter.isLocked(IndexWriter.java:4510)
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:485)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:761)
... 11 more





Do we need to add docValues=true to _version_ field in schema.xml?

2015-06-16 Thread forest_soup
For the _version_ field in schema.xml, do we need to set docValues=true on
it?
   <field name="_version_" type="long" indexed="true" stored="true"/>

We ask because we noticed FieldCache entries for _version_ in the Solr stats:
http://lucene.472066.n3.nabble.com/file/n4212123/IMAGE%245A8381797719FDA9.jpg 





What contribute to a Solr core's FieldCache entry_count?

2015-06-16 Thread forest_soup
For the FieldCache, what determines the entries_count?

Does each search request containing a sort on a non-docValues field
contribute one entry to the entries_count?

For example, will search A ( q=owner:1&sort=maildate asc ) and search B (
q=owner:2&sort=maildate asc ) contribute 2 FieldCache entries?

I have a collection containing only one core with only one doc in it; why are
there so many Lucene FieldCache entries?

http://lucene.472066.n3.nabble.com/file/n4212148/%244FA9F550C60D3BA2.jpg 
http://lucene.472066.n3.nabble.com/file/n4212148/Untitled.png 





Exception while using group with timeAllowed on SolrCloud

2015-04-22 Thread forest_soup
We have the same issue as this JIRA. 
https://issues.apache.org/jira/browse/SOLR-6156

I have posted my query, response and Solr logs to the JIRA.

Could anyone please take a look? Thanks!





Can we have [core name] in each log entry?

2015-04-21 Thread forest_soup
Can we have [core name] in each log entry?
It's hard for us to know exactly which core had an issue, and in what
sequence, when there are many cores on a Solr node in a SolrCloud environment.

I posted the request in this JIRA ticket:
https://issues.apache.org/jira/browse/SOLR-7434





Re: Restart solr failed after applied the patch in https://issues.apache.org/jira/browse/SOLR-6359

2015-04-02 Thread forest_soup
Thanks Ramkumar!

Understood. We will try 100, 10. 

But given the original steps with which we hit the exception, can we say that
the patch has an issue?
1, Put the patch onto all 5 running Solr servers (Tomcat) by replacing
tomcat/webapps/solr/WEB-INF/lib/solr-core-4.7.0.jar with the patched
solr-core-4.7-SNAPSHOT.jar I built, and kept them all running.
2, Uploaded the solrconfig.xml to ZooKeeper with the changes below:
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numRecordsToKeep">1</int>
  <int name="maxNumLogsToKeep">100</int>
</updateLog>
3, Restarted Solr server 1 (Tomcat); after it restarted, it hit the exception
in my first post.
4, Restarted Solr server 1 again; it still had the same issue.
5, Reverted the patch by replacing
tomcat/webapps/solr/WEB-INF/lib/solr-core-4.7-SNAPSHOT.jar with the original
4.7.0 jar.
6, Restarted Solr server 1 again; the issue was gone.

So we are wondering: if this is still present in version 5.1, then after we
upgrade Solr and do a rolling restart, will the issue appear and force us to
do a full restart, causing a service outage?

Thanks! 





Restart solr failed after applied the patch in https://issues.apache.org/jira/browse/SOLR-6359

2015-03-30 Thread forest_soup
https://issues.apache.org/jira/browse/SOLR-6359

I also posted the questions to the JIRA ticket.

We have a SolrCloud with 5 Solr servers running Solr 4.7.0. There is one
collection with 80 shards (2 replicas per shard) on those 5 servers. We made
a build by merging the patch
(https://issues.apache.org/jira/secure/attachment/12702473/SOLR-6359.patch)
into the 4.7.0 stream. After applying the patch to our servers, with the
config change uploaded to ZooKeeper, we restarted one of the 5 Solr servers
and met some issues on that server. Below are the details.
The solrconfig.xml we changed:
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numRecordsToKeep">1</int>
  <int name="maxNumLogsToKeep">100</int>
</updateLog>

After we restarted one Solr server (without restarting the other 4 servers),
we met the exceptions below on the restarted one:
ERROR - 2015-03-16 20:48:48.214; org.apache.solr.common.SolrException;
org.apache.solr.common.SolrException: Exception writing document id
Q049bGx0bWFpbDIxL089bGxwX3VzMQ==41703656!B68BF5EC5A4A650D85257E0A00724A3B to
the index; possible analysis error.
at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)
at
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:703)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:857)
at
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:556)
at
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:96)
at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
at
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
at
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
at
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
at
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:804)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter
is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:645)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:659)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1525)
at
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:236)
at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:160)
... 37 more

Re: Restart solr failed after applied the patch in https://issues.apache.org/jira/browse/SOLR-6359

2015-03-30 Thread forest_soup
Yes, I also suspect the patch. After I reverted the patch by restoring the
original .jar file, the issue was gone.





Re: Restart solr failed after applied the patch in https://issues.apache.org/jira/browse/SOLR-6359

2015-03-30 Thread forest_soup
But if the values can only be 100, 10, is there any difference compared with
not having the patch? Can we enlarge those 2 values? Thanks!





Will commit/softcommit invalid filtercache?

2014-09-20 Thread forest_soup
Hi, all.

We have some questions about commit/soft commit and the caches.
We understand that a soft commit will create a new searcher. Will the
filterCache be invalidated after a soft commit is done?

And also for a hard commit: if we commit with openSearcher, will the
filterCache be invalidated?
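
For illustration, a minimal SolrJ sketch of the two commit variants being
asked about (a sketch only, assuming SolrJ 5.x; the ZooKeeper address and
collection name are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;

public class CommitVariants {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181/solr")) {
      client.setDefaultCollection("collection1");

      // Soft commit: commit(waitFlush, waitSearcher, softCommit=true) opens a new searcher.
      client.commit(true, true, true);

      // Hard commit that explicitly does NOT open a new searcher.
      UpdateRequest req = new UpdateRequest();
      req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
      req.setParam("openSearcher", "false");
      req.process(client);
    }
  }
}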

Thanks!





Cannot finish recovery due to always met ReplicationHandler SnapPull failed: Unable to download xxx.fdt completely

2014-08-07 Thread forest_soup
I have 2 Solr nodes (solr1 and solr2) in a SolrCloud.
After some issue happened, solr2 went into the recovering state. The peersync
could not finish within about 15 minutes, so it fell back to snap pull.
But while doing the snap pull, it always hits the issue below. Meanwhile,
update requests are still being sent to the recovering node (solr2) and the
good node (solr1), and the index on the recovering node is deleted and
rebuilt again and again, so it takes a lot of time to finish.

Is this a bug, or is it by Solr's design?
And could anyone help me speed up the recovery?

Thanks! 

Jul 17, 2014 5:12:50 PM  ERROR  ReplicationHandler  SnapPull failed
:org.apache.solr.common.SolrException: Unable to download _vdq.fdt
completely. Downloaded 0!=182945 
SnapPull failed :org.apache.solr.common.SolrException: Unable to download
_vdq.fdt completely. Downloaded 0!=182945
   at org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.cleanup(SnapPuller.java:1305)
   at org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1185)
   at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:771)
   at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:421)
   at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:322)
   at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:155)
   at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:437)
   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:247)


We have the settings below in solrconfig.xml:
 <autoCommit>
   <maxDocs>1000</maxDocs>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>true</openSearcher>
 </autoCommit>

 <autoSoftCommit>
   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
 </autoSoftCommit>

and <maxIndexingThreads>8</maxIndexingThreads> is left at its default.

My solrconfig.xml is attached:
http://lucene.472066.n3.nabble.com/file/n4151611/solrconfig.xml





Re: Cannot finish recovery due to always met ReplicationHandler SnapPull failed: Unable to download xxx.fdt completely

2014-08-07 Thread forest_soup
Thanks.
My environment is 2 VMs with a good network connection, so I'm not sure why
it happened. We are trying to reproduce it. The peersync failure log is:
Jul 25, 2014 6:30:48 AM
WARN
SnapPuller
Error in fetching packets
java.io.EOFException
at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:154)
at
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:146)
at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchPackets(SnapPuller.java:1211)
at
org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1174)
at
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:771)
at
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:421)
at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:322)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:155)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:437)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:247)






Re: Cannot finish recovery due to always met ReplicationHandler SnapPull failed: Unable to download xxx.fdt completely

2014-08-07 Thread forest_soup
I have opened one JIRA for it:
https://issues.apache.org/jira/browse/SOLR-6333





org.apache.solr.common.SolrException: no servers hosting shard

2014-08-07 Thread forest_soup
I have 2 Solr nodes (solr1 and solr2) in a SolrCloud.
After this issue happened, solr2 went into the recovering state. After it
takes a long time to finish recovery, the issue occurs again and it goes back
into recovery. This happens again and again.

ERROR - 2014-08-04 21:12:27.917; org.apache.solr.common.SolrException;
org.apache.solr.common.SolrException: no servers hosting shard: 
at
org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:148)
at
org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:118)
at java.util.concurrent.FutureTask.run(FutureTask.java:273)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
at java.util.concurrent.FutureTask.run(FutureTask.java:273)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
at java.lang.Thread.run(Thread.java:804)

We have these settings in solrconfig.xml that differ from the defaults:

<maxIndexingThreads>24</maxIndexingThreads>
<ramBufferSizeMB>200</ramBufferSizeMB>
<maxBufferedDocs>1</maxBufferedDocs>

<autoCommit>
  <maxDocs>1000</maxDocs>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>true</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

<filterCache class="solr.FastLRUCache"
             size="16384"
             initialSize="16384"
             autowarmCount="4096"/>
<queryResultCache class="solr.LRUCache"
             size="16384"
             initialSize="16384"
             autowarmCount="4096"/>
<documentCache class="solr.LRUCache"
             size="16384"
             initialSize="16384"
             autowarmCount="4096"/>
<fieldValueCache class="solr.FastLRUCache"
             size="16384"
             autowarmCount="1024"
             showItems="32"/>
<queryResultWindowSize>50</queryResultWindowSize>

The full solrconfig.xml is attached as solrconfig_perf0804.xml:
http://lucene.472066.n3.nabble.com/file/n4151637/solrconfig_perf0804.xml


