[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 414 - Still Failing!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/414/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, SolrCore]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, SolrCore]
at __randomizedtesting.SeedInfo.seed([9F2E10F307F84B33]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=15811, name=searcherExecutor-6343-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=15811, name=searcherExecutor-6343-thread-1, state=WAITING, group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 3043 - Failure!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/3043/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.core.TestShardHandlerFactory.testXML

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2B3BC06C8E2CDF6D]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestShardHandlerFactory

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2B3BC06C8E2CDF6D]:0)


FAILED:  org.apache.solr.search.TestIndexSearcher.testReopen

Error Message:
expected:<_1b(5.5.0):c2> but was:<_1d(5.5.0):C4>

Stack Trace:
java.lang.AssertionError: expected:<_1b(5.5.0):c2> but was:<_1d(5.5.0):C4>
at __randomizedtesting.SeedInfo.seed([2B3BC06C8E2CDF6D:773117AFD10504E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at org.apache.solr.search.TestIndexSearcher.testReopen(TestIndexSearcher.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.8.0_72) - Build # 51 - Still Failing!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/51/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([E54CADD1210983B]:0)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([E54CADD1210983B]:0)




Build Log:
[...truncated 12516 lines...]
   [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest
   [junit4]   2> 1012419 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1012419 INFO  (Thread-2649) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1012419 INFO  (Thread-2649) [] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1012519 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.ZkTestServer start zk server on port:56470
   [junit4]   2> 1012519 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1012519 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1012635 INFO  (zkCallback-987-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@fedd3c name:ZooKeeperConnection Watcher:127.0.0.1:56470 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1012635 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1012635 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1012635 INFO  (TEST-BasicAuthIntegrationTest.testBasics-seed#[E54CADD1210983B]) [] o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 1013351 INFO  (jetty-launcher-986-thread-1) [] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 1013351 INFO  (jetty-launcher-986-thread-3) [] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 1013351 INFO  (jetty-launcher-986-thread-4) [] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 1013351 INFO  (jetty-launcher-986-thread-2) [] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 1013351 INFO  (jetty-launcher-986-thread-5) [] o.e.j.s.Server jetty-9.2.13.v20150730
   [junit4]   2> 1013355 INFO  (jetty-launcher-986-thread-3) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1a51ae8{/solr,null,AVAILABLE}
   [junit4]   2> 1013355 INFO  (jetty-launcher-986-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@8b64c7{/solr,null,AVAILABLE}
   [junit4]   2> 1013355 INFO  (jetty-launcher-986-thread-4) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@168a80b{/solr,null,AVAILABLE}
   [junit4]   2> 1013355 INFO  (jetty-launcher-986-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@10e48aa{/solr,null,AVAILABLE}
   [junit4]   2> 1013356 INFO  (jetty-launcher-986-thread-5) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1142c66{/solr,null,AVAILABLE}
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-2) [] o.e.j.s.ServerConnector Started ServerConnector@176898f{HTTP/1.1}{127.0.0.1:33883}
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-5) [] o.e.j.s.ServerConnector Started ServerConnector@11b86c1{HTTP/1.1}{127.0.0.1:35590}
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-2) [] o.e.j.s.Server Started @1015088ms
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-3) [] o.e.j.s.ServerConnector Started ServerConnector@10680f1{HTTP/1.1}{127.0.0.1:38458}
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-2) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=33883}
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-3) [] o.e.j.s.Server Started @1015088ms
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-4) [] o.e.j.s.ServerConnector Started ServerConnector@1debd3e{HTTP/1.1}{127.0.0.1:54406}
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-5) [] o.e.j.s.Server Started @1015088ms
   [junit4]   2> 1013357 INFO  (jetty-launcher-986-thread-4) [] o.e.j.s.Server Started @1015088ms
   [junit4]   2> 1013357 INFO  

[jira] [Comment Edited] (SOLR-8591) Add BatchStream to the Streaming API

2016-02-19 Thread Joel Bernstein (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155395#comment-15155395 ]

Joel Bernstein edited comment on SOLR-8591 at 2/20/16 5:31 AM:
---

This is pretty low hanging fruit. May be able to squeeze this into 6.0.


was (Author: joel.bernstein):
This is pretty low hanging fruit. Maybe be able to squeeze this into 6.0.

> Add BatchStream to the Streaming API
> 
>
> Key: SOLR-8591
> URL: https://issues.apache.org/jira/browse/SOLR-8591
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
>
> Now that we have synchronous streaming and continuous streaming 
> (DaemonStream), it makes sense to add *batch streaming*.
> Code will be added to the /stream handler so that when it sees the 
> BatchStream it will send the stream to an executor to be run. 
> Sample syntax:
> {code}
> batch(parallel(update(rollup(search()
> {code}
> The pseudo code above runs a parallel rollup in batch mode and sends the 
> output to a SolrCloud collection.
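The dispatch behaviour described above (a batch-wrapped stream is handed to an executor while a plain stream is drained synchronously) can be sketched roughly as follows. `BatchDispatchSketch`, `Stream`, and `dispatch` are hypothetical stand-ins for illustration, not the actual Solr /stream handler API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class BatchDispatchSketch {
    // Minimal stand-in for a tuple stream: draining it does all the work.
    interface Stream { void drainFully(); }

    // Hypothetical handler logic: when the top-level stream is a batch()
    // wrapper, submit it to an executor and return immediately; otherwise
    // drain it synchronously on the request thread.
    public static Future<?> dispatch(Stream s, boolean isBatch, ExecutorService executor) {
        if (isBatch) {
            return executor.submit(s::drainFully);  // runs in the background
        }
        s.drainFully();                             // synchronous streaming
        return CompletableFuture.completedFuture(null);
    }
}
```

The `batch(parallel(update(rollup(search()))))` expression above would take the `isBatch` branch: the request returns while the parallel rollup keeps writing to the destination collection.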



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8591) Add BatchStream to the Streaming API

2016-02-19 Thread Joel Bernstein (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein updated SOLR-8591:
-
Fix Version/s: 6.0

> Add BatchStream to the Streaming API
> 
>
> Key: SOLR-8591
> URL: https://issues.apache.org/jira/browse/SOLR-8591
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
>
> Now that we have synchronous streaming and continuous streaming 
> (DaemonStream), it makes sense to add *batch streaming*.
> Code will be added to the /stream handler so that when it sees the 
> BatchStream it will send the stream to an executor to be run. 
> Sample syntax:
> {code}
> batch(parallel(update(rollup(search()
> {code}
> The pseudo code above runs a parallel rollup in batch mode and sends the 
> output to a SolrCloud collection.






[jira] [Commented] (SOLR-8591) Add BatchStream to the Streaming API

2016-02-19 Thread Joel Bernstein (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155395#comment-15155395 ]

Joel Bernstein commented on SOLR-8591:
--

This is pretty low hanging fruit. Maybe be able to squeeze this into 6.0.

> Add BatchStream to the Streaming API
> 
>
> Key: SOLR-8591
> URL: https://issues.apache.org/jira/browse/SOLR-8591
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
>
> Now that we have synchronous streaming and continuous streaming 
> (DaemonStream), it makes sense to add *batch streaming*.
> Code will be added to the /stream handler so that when it sees the 
> BatchStream it will send the stream to an executor to be run. 
> Sample syntax:
> {code}
> batch(parallel(update(rollup(search()
> {code}
> The pseudo code above runs a parallel rollup in batch mode and sends the 
> output to a SolrCloud collection.






[jira] [Assigned] (SOLR-8591) Add BatchStream to the Streaming API

2016-02-19 Thread Joel Bernstein (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein reassigned SOLR-8591:


Assignee: Joel Bernstein

> Add BatchStream to the Streaming API
> 
>
> Key: SOLR-8591
> URL: https://issues.apache.org/jira/browse/SOLR-8591
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> Now that we have synchronous streaming and continuous streaming 
> (DaemonStream), it makes sense to add *batch streaming*.
> Code will be added to the /stream handler so that when it sees the 
> BatchStream it will send the stream to an executor to be run. 
> Sample syntax:
> {code}
> batch(parallel(update(rollup(search()
> {code}
> The pseudo code above runs a parallel rollup in batch mode and sends the 
> output to a SolrCloud collection.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 3097 - Failure!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/3097/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

62 tests failed.
FAILED:  org.apache.solr.TestDistributedMissingSort.test

Error Message:
IOException occured when talking to server at: http://127.0.0.1:61116/h/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:61116/h/collection1
at __randomizedtesting.SeedInfo.seed([7931998D0FE0AC10:F165A657A11CC1E8]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:591)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895)
at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858)
at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873)
at org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:544)
at org.apache.solr.TestDistributedMissingSort.index(TestDistributedMissingSort.java:48)
at org.apache.solr.TestDistributedMissingSort.test(TestDistributedMissingSort.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:990)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: git email format customizability: add branch to subject?

2016-02-19 Thread Shawn Heisey
On 2/19/2016 6:04 PM, Ryan Ernst wrote:
>
> This sounds good, but isn't the repo name redundant given it is
> implied by the email going to commits@l.a.o?
>

Right now, I think the only git repository we've got is lucene-solr, but
we also receive commit emails from the subversion repository that holds
the Lucene and Solr websites.  That repository will probably also be
converted to git eventually.

What would be really nice is to have the repo name removed if it's the
main code repository (lucene-solr in our case) and kept if it's
something else, but as Hoss mentioned, Infra probably will not be
willing to handle a lot of customization to the script that sends these
messages.  There is probably more configurability than they *want* to
support already.

Thanks,
Shawn





[jira] [Commented] (SOLR-8707) Distribute (auto)commit requests evenly over time in multi shard/replica collections

2016-02-19 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155341#comment-15155341 ]

Hoss Man commented on SOLR-8707:


bq. A long delay outside of configuration is a little worrying

sure ... with this type of approach, you'd want the "first trigger" to happen 
at the initialDelay, and then repeat every autoCommitTime (as opposed to the 
current logic which uses initialDelay == autoCommitTime)

it's also something that could easily be added as a config option.
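The scheduling Hoss describes (first trigger at the initial delay, then repeat every autoCommit interval) maps directly onto `ScheduledExecutorService.scheduleAtFixedRate`. A minimal sketch with hypothetical names — `StaggeredCommitScheduler` is not Solr's actual CommitTracker API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class StaggeredCommitScheduler {
    // Pick a per-core initial delay somewhere inside one autoCommit window,
    // so cores created at the same instant do not all commit together.
    public static long pickInitialDelay(long autoCommitTimeMs) {
        return ThreadLocalRandom.current().nextLong(autoCommitTimeMs);
    }

    // First trigger fires at the initial delay; every later trigger repeats
    // at autoCommitTimeMs (unlike the current logic, where the two are equal).
    public static ScheduledExecutorService schedule(Runnable commit, long autoCommitTimeMs) {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(commit, pickInitialDelay(autoCommitTimeMs),
                                 autoCommitTimeMs, TimeUnit.MILLISECONDS);
        return pool;
    }
}
```

With the config option mentioned above, `pickInitialDelay` would simply return the configured value instead of a random one.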

> Distribute (auto)commit requests evenly over time in multi shard/replica 
> collections
> 
>
> Key: SOLR-8707
> URL: https://issues.apache.org/jira/browse/SOLR-8707
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Michael Sun
>
> In the current implementation, all Solr nodes start a commit for all cores in a 
> collection almost at the same time. As a result, this creates a load spike in the 
> cluster at regular intervals, particularly when the collection is on HDFS. The main 
> reason is that all cores are created almost at the same time for a collection 
> and then commit at a fixed interval afterwards.
> It's good to distribute the commit load evenly to avoid load spikes. It 
> helps to improve performance and reliability in general.






[jira] [Updated] (SOLR-8708) DaemonStream should catch InterruptedException when reading underlying stream.

2016-02-19 Thread Joel Bernstein (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein updated SOLR-8708:
-
Attachment: SOLR-8708.patch

Patch with improved error handling that catches the InterruptedException causing this issue.

> DaemonStream should catch InterruptedException when reading underlying stream.
> --
>
> Key: SOLR-8708
> URL: https://issues.apache.org/jira/browse/SOLR-8708
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Critical
> Fix For: 6.0
>
> Attachments: SOLR-8708.patch
>
>
> Currently the DaemonStream only catches IOException when reading from the 
> underlying stream. This causes the DaemonStream to not shut down properly. 
> Jenkins failures look like this:
> {code}
>   [junit4]>   at __randomizedtesting.SeedInfo.seed([A9AE0C8FDE484A6D]:0)Throwable #2: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=2859, name=Thread-971, state=RUNNABLE, group=TGRP-StreamExpressionTest]
>    [junit4]> Caused by: org.apache.solr.common.SolrException: Could not load collection from ZK: parallelDestinationCollection1
>    [junit4]>  at __randomizedtesting.SeedInfo.seed([A9AE0C8FDE484A6D]:0)
>    [junit4]>  at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:959)
>    [junit4]>  at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:517)
>    [junit4]>  at org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:189)
>    [junit4]>  at org.apache.solr.common.cloud.ClusterState.hasCollection(ClusterState.java:119)
>    [junit4]>  at org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:)
>    [junit4]>  at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:833)
>    [junit4]>  at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
>    [junit4]>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>    [junit4]>  at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
>    [junit4]>  at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
>    [junit4]>  at org.apache.solr.client.solrj.io.stream.UpdateStream.uploadBatchToCollection(UpdateStream.java:256)
>    [junit4]>  at org.apache.solr.client.solrj.io.stream.UpdateStream.read(UpdateStream.java:118)
>    [junit4]>  at org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.run(DaemonStream.java:245)
>    [junit4]> Caused by: java.lang.InterruptedException
>    [junit4]>  at java.lang.Object.wait(Native Method)
>    [junit4]>  at java.lang.Object.wait(Object.java:502)
>    [junit4]>  at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>    [junit4]>  at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1153)
>    [junit4]>  at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
>    [junit4]>  at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:350)
>    [junit4]>  at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
>    [junit4]>  at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350)
>    [junit4]>  at org.apache.solr.common.cloud.ZkStateReader.fetchCollectionState(ZkStateReader.java:967)
>    [junit4]>  at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:954)
>    [junit4]>  ... 12 more
> {code}
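The fix described in the issue amounts to a read loop that treats an interrupt as a shutdown signal. A sketch under assumptions: `DaemonRunnerSketch` and the `BlockingQueue` standing in for the underlying TupleStream are illustrative, not Solr's actual DaemonStream internals:

```java
import java.util.concurrent.BlockingQueue;

public class DaemonRunnerSketch implements Runnable {
    private final BlockingQueue<String> source;  // stand-in for the underlying stream
    private volatile boolean shutdown;

    public DaemonRunnerSketch(BlockingQueue<String> source) {
        this.source = source;
    }

    @Override
    public void run() {
        while (!shutdown) {
            try {
                String tuple = source.take();  // stand-in for stream.read(); may block
                // ... process the tuple ...
            } catch (InterruptedException e) {
                // Treat the interrupt as a shutdown signal instead of letting it
                // escape as the uncaught exception seen in the Jenkins log above.
                Thread.currentThread().interrupt();  // restore the flag for callers
                shutdown = true;
            }
        }
    }
}
```

Catching the exception inside the loop lets the runner exit cleanly, so the daemon thread no longer shows up as leaked or half-dead.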






[jira] [Resolved] (SOLR-8705) ERROR while indexing/updating record

2016-02-19 Thread Anshum Gupta (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anshum Gupta resolved SOLR-8705.

Resolution: Incomplete

> ERROR  while indexing/updating record
> -
>
> Key: SOLR-8705
> URL: https://issues.apache.org/jira/browse/SOLR-8705
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: mugeesh
> Fix For: 5.3
>
>







[jira] [Resolved] (SOLR-8704) ERROR while indexing/updating record

2016-02-19 Thread Anshum Gupta (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anshum Gupta resolved SOLR-8704.

Resolution: Invalid

The right place to ask about such errors is not JIRA but the Solr user mailing 
list or IRC. Kindly ask there with more information, such as:
* the Solr version
* your setup
* what you were trying to do
* anything else that seems relevant


> ERROR  while indexing/updating record
> -
>
> Key: SOLR-8704
> URL: https://issues.apache.org/jira/browse/SOLR-8704
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: mugeesh
> Fix For: 5.3
>
>
> null:org.apache.solr.common.SolrException: Error trying to proxy request for 
> url: http://45.33.57.46:8984/solr/Restaurant_Restaurant_2_replica1/update
>   at 
> org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:598)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:446)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
>   at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.http.NoHttpResponseException: 45.33.57.46:8984 failed 
> to respond
>   at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
>   at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>   at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
>   at 
> org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
>   at 
> org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
>   at 
> org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
>   at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
>   at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
>   at 
> org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
>   at 
> 

[jira] [Resolved] (SOLR-8703) ERROR

2016-02-19 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8703.

Resolution: Invalid

> ERROR 
> --
>
> Key: SOLR-8703
> URL: https://issues.apache.org/jira/browse/SOLR-8703
> Project: Solr
>  Issue Type: Bug
>Reporter: mugeesh
>







[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 7 - Still Failing

2016-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/7/

2 tests failed.
FAILED:  org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([DE25A916905B004D:EF9E17233564109D]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:754)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:721)
at 
org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts(SpellCheckCollatorTest.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//lst[@name='spellcheck']/lst[@name='collations']/lst[@name='collation']/int[@name='hits'
 and 6 <= . and . <= 10]
xml response was: 

031918everyotherteststop:everyother14everyother


request 

[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 50 - Failure!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/50/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
6 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=7292, 
name=zkCallback-1003-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=7126, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[140C4DFE77815F2E]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
3) Thread[id=7304, name=zkCallback-1003-thread-5, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=7265, 
name=zkCallback-1003-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=7125, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[140C4DFE77815F2E]-SendThread(127.0.0.1:55041),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
6) Thread[id=7303, name=zkCallback-1003-thread-4, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 6 threads leaked from 

[jira] [Commented] (SOLR-8707) Distribute (auto)commit requests evenly over time in multi shard/replica collections

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155264#comment-15155264
 ] 

Mark Miller commented on SOLR-8707:
---

A long delay outside of configuration is a little worrying to me, because some 
people will restart a cluster while doing full indexing (from a message service 
or something similar). If you essentially ignore the first autocommit, I think 
you can drastically raise the tlog RAM requirements. Just something to consider.

> Distribute (auto)commit requests evenly over time in multi shard/replica 
> collections
> 
>
> Key: SOLR-8707
> URL: https://issues.apache.org/jira/browse/SOLR-8707
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Michael Sun
>
> In the current implementation, all Solr nodes start commits for all cores in a 
> collection at almost the same time. As a result, this creates a load spike in 
> the cluster at regular intervals, particularly when the collection is on HDFS. 
> The main reason is that all cores in a collection are created at almost the 
> same time and then commit at a fixed interval afterwards.
> It would be good to distribute the commit load evenly to avoid load spikes. 
> This helps to improve performance and reliability in general.






[jira] [Resolved] (SOLR-8588) Add TopicStream to the streaming API to support publish/subscribe messaging

2016-02-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8588.
--
Resolution: Implemented

> Add TopicStream to the streaming API to support publish/subscribe messaging
> ---
>
> Key: SOLR-8588
> URL: https://issues.apache.org/jira/browse/SOLR-8588
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-8588.patch, SOLR-8588.patch, SOLR-8588.patch, 
> SOLR-8588.patch, SOLR-8588.patch
>
>
> The TopicStream is a *publish/subscribe messaging service* built on top of 
> SolrCloud.  The TopicStream returns all *new* documents for a specific query. 
> Version numbers will be used as checkpoints for Topics to ensure single 
> delivery of each document. When combined with the DaemonStream (SOLR-8550), 
> Topics can provide continuous streaming. Sample syntax:
> {code}
> topic(checkpointCollection, dataCollection, id="topicA", q="awesome stuff", 
> checkpointEvery="1000")
> {code}
> The checkpoint collection will be used to persist the topic checkpoints.
> Example combined with the DaemonStream:
> {code}
> daemon(topic(...)...)
> {code}
> When combined with SOLR-7739 this allows for messaging based on *machine 
> learned* classifications.
> The TopicStream supports 3 models of publish/subscribe messaging:
> 1) *Request & response*: In this model a topic(...) expression can be saved 
> and executed at any time. In this scenario the TopicStream will always 
> retrieve its checkpoints and start from where it left off.
> 2) *Continuous pull streaming*: In this model you would wrap the TopicStream 
> in a DaemonStream and call read() in a loop inside a Java program.  This 
> would provide a continuous stream of new content as it arrives in the index.
> 3) *Continuous push streaming*: In this model you would send an expression 
> like this to the /stream handler: *daemon(update(topic(...)...)...)*. This 
> daemon process would run inside Solr and continuously stream new documents 
> from the topic and push them to another SolrCloud collection. Other pushing 
> expressions can be created to push documents in different ways or take other 
> types of actions.






Re: git email format customizability: add branch to subject?

2016-02-19 Thread Ryan Ernst
This sounds good, but isn't the repo name redundant given it is implied by
the email going to commits@l.a.o?
On Feb 19, 2016 4:38 PM, "Chris Hostetter"  wrote:

>
> : https://git-wip-us.apache.org/docs/switching-to-git.html seems to
> : suggest there is per project flexibility. Branch not one of the
> : (currently) available variables though, no?
> :
> : +1 for "the branch be included in the subject"
>
> Thanks for finding that link Christine,
>
> I pinged #infra on HipChat to try and find the actual code in question to
> see how hard it would be to add "branch" based variables so I could
> propose a patch to infra rather than just a general "can we do this?" type
> request, but apparently that code is ASF-specific and lives in a private
> infra repo, so only infra members can read/write.  Gavin said new subject
> variables are usually not a big deal though.
>
> That said, before I request any changes, I want to make sure I'm
> not wasting the time of any infra volunteers -- so I'd like to make sure
> we have some consensus on what we'd ideally like...
>
>
> : Perhaps the script could only include the last N elements of the name,
> : so we get lucene-5438-nrt-replication or
> : jira/lucene-5438-nrt-replication instead of the full branch name.  Or
> : maybe a regex could be used to target refs\/.*?\/ (or something more
> : complex) for removal -- for some of the existing branch names, having
> : the last three path elements would be good, but for others, one or two
> : would be better.
>
> Good point ... given that this is a general infra tool for all projects,
> and currently the only per-project configuration is (apparently) what the
> subject should be comprised of, I'm hesitant to request a lot of
> custom regex rules, and/or to make any general assumptions about only using
> the last "N" elements of the name.
>
> (A common workflow I've seen is things
> like refs/heads/jira/solr-xyz for a shared collaboration on that feature,
> while refs/heads/hossman/jira/solr-xyz might be my proposed new direction
> for the code to take -- we wouldn't want those to get confused.)
>
> That said, I think it would totally make sense to request that
> "%(branch)s" should refer to the full branch path, and
> "%(shortbranch)s" should be the result of regex-stripping
> "^refs\/(heads\/)?" from the full branch path.
>
> So "refs/heads/branch_7_5" => "branch_7_5"
>
> But "refs/tags/releases/lucene-solr/7.5.0"
>  => "tags/releases/lucene-solr/7.5.0"
>
> : There is normally a fairly limited amount of space for the subject in
> : the list view of an email client, so it seems like a good idea to keep
> : it short but relevant.
>
> Agreed -- so perhaps we should also request reducing some other
> redundancies? (ie: "git commit")
>
>
> what do folks think about requesting as our pattern...
>
>   "git: %(repo_name)s:%(shortbranch)s: %(subject)s"
>
>
> With some examples of what that would look like for a handful of commits
> from the past month...
>
>
> git: lucene-solr:branch_5x: fix test bug, using different randomness when
> creating the two IWCs
>
> http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3Ca9231a5cb3444a9ba70f1b67658d2844%40git.apache.org%3E
>
> [1/3] git: lucene-solr:lucene-6835: cut back to
> Directory.deleteFile(String); disable 'could not removed segments_N so I
> don't remove any other files it may reference' heroics
>
> http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3C68acb868408348da8941e473725abda0%40git.apache.org%3E
>
> [1/2] git: lucene-solr:master: LUCENE-7002: Fixed MultiCollector to not
> throw a NPE if setScorer is called after one of the sub collectors is done
> collecting.
>
> http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3ca53313b79d1b4286a655b03d2e2b2...@git.apache.org%3E
>
> [2/2] git: lucene-solr:branch_5x: LUCENE-7002: Fixed MultiCollector to not
> throw a NPE if setScorer is called after one of the sub collectors is done
> collecting.
>
> http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3c40d0b62a4e2245ff85211c4fe4401...@git.apache.org%3E
>
>
> ...note in particular those last two emails.  As I understand it they
> were two commits from the same "push", on diff branches (the master change
> and the 5x backport) ... which is now more clear with the branch name in
> the subject.
>
> Are folks in favor of requesting this from infra?
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
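
The "%(shortbranch)s" stripping rule Hoss proposes above can be sketched as a 
one-line regex replacement. This is only an illustration of the proposed 
behavior; the class and method names are hypothetical, and the expected results 
are taken from the examples in the quoted mail:

```java
public class ShortBranch {
    // Strip a leading "refs/" or "refs/heads/" from a full ref path,
    // per the proposed "%(shortbranch)s" subject variable.
    static String shortBranch(String fullBranch) {
        return fullBranch.replaceFirst("^refs/(heads/)?", "");
    }

    public static void main(String[] args) {
        // "refs/heads/branch_7_5" loses the whole "refs/heads/" prefix...
        System.out.println(shortBranch("refs/heads/branch_7_5"));
        // ...while a tag ref only loses the leading "refs/".
        System.out.println(shortBranch("refs/tags/releases/lucene-solr/7.5.0"));
    }
}
```

Note the regex is anchored at the start, so a branch name that merely contains 
"refs/" somewhere in the middle is left untouched.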


[jira] [Commented] (SOLR-8588) Add TopicStream to the streaming API to support publish/subscribe messaging

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155248#comment-15155248
 ] 

ASF subversion and git services commented on SOLR-8588:
---

Commit b2475bf9fdc59c02454f730a6cc4916cff03f862 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b2475bf ]

SOLR-8588: Add TopicStream to the streaming API to support publish/subscribe 
messaging


> Add TopicStream to the streaming API to support publish/subscribe messaging
> ---
>
> Key: SOLR-8588
> URL: https://issues.apache.org/jira/browse/SOLR-8588
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-8588.patch, SOLR-8588.patch, SOLR-8588.patch, 
> SOLR-8588.patch, SOLR-8588.patch
>
>
> The TopicStream is a *publish/subscribe messaging service* built on top of 
> SolrCloud.  The TopicStream returns all *new* documents for a specific query. 
> Version numbers will be used as checkpoints for Topics to ensure single 
> delivery of each document. When combined with the DaemonStream (SOLR-8550), 
> Topics can provide continuous streaming. Sample syntax:
> {code}
> topic(checkpointCollection, dataCollection, id="topicA", q="awesome stuff", 
> checkpointEvery="1000")
> {code}
> The checkpoint collection will be used to persist the topic checkpoints.
> Example combined with the DaemonStream:
> {code}
> daemon(topic(...)...)
> {code}
> When combined with SOLR-7739 this allows for messaging based on *machine 
> learned* classifications.
> The TopicStream supports 3 models of publish/subscribe messaging:
> 1) *Request & response*: In this model a topic(...) expression can be saved 
> and executed at any time. In this scenario the TopicStream will always 
> retrieve its checkpoints and start from where it left off.
> 2) *Continuous pull streaming*: In this model you would wrap the TopicStream 
> in a DaemonStream and call read() in a loop inside a Java program.  This 
> would provide a continuous stream of new content as it arrives in the index.
> 3) *Continuous push streaming*: In this model you would send an expression 
> like this to the /stream handler: *daemon(update(topic(...)...)...)*. This 
> daemon process would run inside Solr and continuously stream new documents 
> from the topic and push them to another SolrCloud collection. Other pushing 
> expressions can be created to push documents in different ways or take other 
> types of actions.






[jira] [Commented] (SOLR-8588) Add TopicStream to the streaming API to support publish/subscribe messaging

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155249#comment-15155249
 ] 

ASF subversion and git services commented on SOLR-8588:
---

Commit f9127a919ac212c4a5c36e66fb0d0c15a7867c0e in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9127a9 ]

SOLR-8588: Update CHANGES.txt


> Add TopicStream to the streaming API to support publish/subscribe messaging
> ---
>
> Key: SOLR-8588
> URL: https://issues.apache.org/jira/browse/SOLR-8588
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-8588.patch, SOLR-8588.patch, SOLR-8588.patch, 
> SOLR-8588.patch, SOLR-8588.patch
>
>
> The TopicStream is a *publish/subscribe messaging service* built on top of 
> SolrCloud.  The TopicStream returns all *new* documents for a specific query. 
> Version numbers will be used as checkpoints for Topics to ensure single 
> delivery of each document. When combined with the DaemonStream (SOLR-8550), 
> Topics can provide continuous streaming. Sample syntax:
> {code}
> topic(checkpointCollection, dataCollection, id="topicA", q="awesome stuff", 
> checkpointEvery="1000")
> {code}
> The checkpoint collection will be used to persist the topic checkpoints.
> Example combined with the DaemonStream:
> {code}
> daemon(topic(...)...)
> {code}
> When combined with SOLR-7739 this allows for messaging based on *machine 
> learned* classifications.
> The TopicStream supports 3 models of publish/subscribe messaging:
> 1) *Request & response*: In this model a topic(...) expression can be saved 
> and executed at any time. In this scenario the TopicStream will always 
> retrieve its checkpoints and start from where it left off.
> 2) *Continuous pull streaming*: In this model you would wrap the TopicStream 
> in a DaemonStream and call read() in a loop inside a Java program.  This 
> would provide a continuous stream of new content as it arrives in the index.
> 3) *Continuous push streaming*: In this model you would send an expression 
> like this to the /stream handler: *daemon(update(topic(...)...)...)*. This 
> daemon process would run inside Solr and continuously stream new documents 
> from the topic and push them to another SolrCloud collection. Other pushing 
> expressions can be created to push documents in different ways or take other 
> types of actions.






[jira] [Commented] (SOLR-8707) Distribute (auto)commit requests evenly over time in multi shard/replica collections

2016-02-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155241#comment-15155241
 ] 

Hoss Man commented on SOLR-8707:


bq. For example, in case there are 6 cores and auto commit time is 60 second, 
the first core commit without delay, the second core do first commit after 10 
seconds and commit in 60 seconds interval afterwards, and so on.

Interesting ... a naive effort for individual cores to "space themselves out" 
in time could probably be done fairly trivially when initializing the auto 
commit timers on core load, without a lot of continual coordination, even if 
replicas are added/removed over time:

if ZK mode:
* determine what shard we are
* request a list of all (known) replicas for our shard (even if they aren't 
currently active)
* sort list of replicas by name, and locate our position N in the list and the 
list size S
* assign "delayUnit = autoCommitTime / S"
* set an initial delay on the auto commit timer thread to "(delayUnit * N) + 
rand(0, delayUnit)"

(The small amount of randomness seems like a good idea to me in case some 
replica is replaced by a new replica with a different name, causing a different 
existing replica (that doesn't yet know about the change to the list of all 
replicas) to shift up/down one position in the list and think it has the same N 
as the new replica.)
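
As a rough sketch of the steps above, under stated assumptions (the method name 
and the shape of the replica list are illustrative only, not actual Solr APIs):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of the proposed initial-delay calculation for
// spacing out autocommit timers across a shard's replicas.
public class CommitSpacing {
    static long computeInitialDelayMs(List<String> replicaNames,
                                      String myName,
                                      long autoCommitTimeMs,
                                      Random rand) {
        // sort the list of all (known) replicas by name
        List<String> sorted = new ArrayList<>(replicaNames);
        Collections.sort(sorted);
        int n = sorted.indexOf(myName);        // our position N in the list
        int s = sorted.size();                 // list size S
        long delayUnit = autoCommitTimeMs / s; // delayUnit = autoCommitTime / S
        // initial delay = (delayUnit * N) + rand(0, delayUnit)
        return delayUnit * n + (long) (rand.nextDouble() * delayUnit);
    }

    public static void main(String[] args) {
        // With a 60s autocommit and 3 replicas, delayUnit is 20s, so
        // "replica2" (N=1) starts its timer somewhere in [20s, 40s).
        long d = computeInitialDelayMs(
                List.of("replica1", "replica2", "replica3"),
                "replica2", 60_000, new Random());
        System.out.println(d >= 20_000 && d < 40_000);
    }
}
```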



> Distribute (auto)commit requests evenly over time in multi shard/replica 
> collections
> 
>
> Key: SOLR-8707
> URL: https://issues.apache.org/jira/browse/SOLR-8707
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Michael Sun
>
> In the current implementation, all Solr nodes start commits for all cores in a 
> collection at almost the same time. As a result, this creates a load spike in 
> the cluster at regular intervals, particularly when the collection is on HDFS. 
> The main reason is that all cores in a collection are created at almost the 
> same time and then commit at a fixed interval afterwards.
> It would be good to distribute the commit load evenly to avoid load spikes. 
> This helps to improve performance and reliability in general.






[jira] [Updated] (SOLR-8707) Distribute (auto)commit requests evenly over time in multi shard/replica collections

2016-02-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8707:
---
Summary: Distribute (auto)commit requests evenly over time in multi 
shard/replica collections  (was: Distribute commit requests evenly)

tweaked subject to clarify: 1) "evenly" refers to time; 2) this is specific to 
autocommit

> Distribute (auto)commit requests evenly over time in multi shard/replica 
> collections
> 
>
> Key: SOLR-8707
> URL: https://issues.apache.org/jira/browse/SOLR-8707
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Michael Sun
>
> In the current implementation, all Solr nodes start commits for all cores in a 
> collection at almost the same time. As a result, this creates a load spike in 
> the cluster at regular intervals, particularly when the collection is on HDFS. 
> The main reason is that all cores in a collection are created at almost the 
> same time and then commit at a fixed interval afterwards.
> It would be good to distribute the commit load evenly to avoid load spikes. 
> This helps to improve performance and reliability in general.






Re: git email format customizability: add branch to subject?

2016-02-19 Thread Chris Hostetter

: https://git-wip-us.apache.org/docs/switching-to-git.html seems to 
: suggest there is per project flexibility. Branch not one of the
: (currently) available variables though, no?   
:
: +1 for "the branch be included in the subject"

Thanks for finding that link Christine,

I pinged #infra on HipChat to try and find the actual code in question to 
see how hard it would be to add "branch" based variables, so I could 
propose a patch to infra rather than just a general "can we do this?" type 
request, but apparently that code is ASF specific and lives in a private 
infra repo, so only infra members can read/write.  Gavin said new subject 
variables are usually not a big deal though.

That said, before I request any changes, I want to make sure I'm 
not wasting the time of any infra volunteers -- so I'd like to make sure 
we have some consensus on what we'd ideally like...


: Perhaps the script could only include the last N elements of the name,
: so we get lucene-5438-nrt-replication or
: jira/lucene-5438-nrt-replication instead of the full branch name.  Or
: maybe a regex could be used to target refs\/.*?\/ (or something more
: complex) for removal -- for some of the existing branch names, having
: the last three path elements would be good, but for others, one or two
: would be better.

good point ... given that this is a general infra tool for all projects, 
and currently the only per-project configuration is (apparently) what the 
subject should be comprised of, i'm hesitant to try and request a lot of 
custom regex rules, and/or making any general assumptions about only using 
the last "N" elements of the name.

(a common workflow i've seen is things 
like refs/heads/jira/solr-xyz for a shared collaboration on that feature, 
while refs/heads/hossman/jira/solr-xyz might be my proposed new direction 
for the code to take -- we wouldn't want those to get confused.)

That said, i think it would totally make sense to request that 
"%(branch)s" should refer to the full branch path, and 
"%(shortbranch)s" should be the result of regex-stripping 
"^refs\/(heads\/)?" from the full branch path.  

So "refs/heads/branch_7_5" => "branch_7_5" 

But "refs/tags/releases/lucene-solr/7.5.0"
 => "tags/releases/lucene-solr/7.5.0" 
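A quick sketch of the proposed "%(shortbranch)s" behavior. The class and
helper names are hypothetical; only the regex "^refs\/(heads\/)?" and the two
example refs come from the proposal above.

```java
import java.util.regex.Pattern;

public class ShortBranch {
    // Strip a leading "refs/" plus an optional "heads/" from the full ref
    // path, as proposed for the %(shortbranch)s subject variable.
    private static final Pattern PREFIX = Pattern.compile("^refs/(heads/)?");

    static String shortBranch(String fullRef) {
        return PREFIX.matcher(fullRef).replaceFirst("");
    }

    public static void main(String[] args) {
        System.out.println(shortBranch("refs/heads/branch_7_5"));
        // -> branch_7_5
        System.out.println(shortBranch("refs/tags/releases/lucene-solr/7.5.0"));
        // -> tags/releases/lucene-solr/7.5.0
    }
}
```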

: There is normally a fairly limited amount of space for the subject in
: the list view of an email client, so it seems like a good idea to keep
: it short but relevant.

Agreed -- so perhaps we should also request reducing some other 
redundancies? (ie: "git commit")


what do folks think about requesting as our pattern...

  "git: %(repo_name)s:%(shortbranch)s: %(subject)s"


With some examples of what that would look like for a handful of commits 
from the past month...


git: lucene-solr:branch_5x: fix test bug, using different randomness when 
creating the two IWCs
http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3Ca9231a5cb3444a9ba70f1b67658d2844%40git.apache.org%3E

[1/3] git: lucene-solr:lucene-6835: cut back to Directory.deleteFile(String); 
disable 'could not removed segments_N so I don't remove any other files it may 
reference' heroics
http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3C68acb868408348da8941e473725abda0%40git.apache.org%3E

[1/2] git: lucene-solr:master: LUCENE-7002: Fixed MultiCollector to not throw a 
NPE if setScorer is called after one of the sub collectors is done collecting.
http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3ca53313b79d1b4286a655b03d2e2b2...@git.apache.org%3E

[2/2] git: lucene-solr:branch_5x: LUCENE-7002: Fixed MultiCollector to not 
throw a NPE if setScorer is called after one of the sub collectors is done 
collecting.
http://mail-archives.apache.org/mod_mbox/lucene-commits/201602.mbox/%3c40d0b62a4e2245ff85211c4fe4401...@git.apache.org%3E


...note in particular those last two emails.  As I understand it, they 
were two commits from the same "push", on different branches (the master change 
and the 5x backport) ... which is now clearer with the branch name in 
the subject.

Are folks in favor of requesting this from infra?



-Hoss
http://www.lucidworks.com/




[jira] [Commented] (SOLR-8696) Optimize overseer + startup

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155214#comment-15155214
 ] 

Mark Miller commented on SOLR-8696:
---

bq. But I'm not getting a hit. What gives?

I'm confused. Does that test even try to run in legacy mode? Can you elaborate 
a bit? Not sure I fully understand. If I set a break point at 
SliceMutator.addReplica and run 
CollectionsAPISolrJTests.testAddAndDeleteReplica, I hit the break point. What 
other change are you making?

> Optimize overseer + startup
> ---
>
> Key: SOLR-8696
> URL: https://issues.apache.org/jira/browse/SOLR-8696
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, performance, solrcloud, startup
> Attachments: SOLR-8696.patch
>
>
> ZkController.publishAndWaitForDownStates() occurs before overseer election.  
> That means if there is currently no overseer, there is ironically no one to 
> actually service the down state changes it's waiting on.  This particularly 
> affects a single-node cluster such as you might run locally for development.
> Additionally, we're doing an unnecessary ZkStateReader forced refresh on all 
> Overseer operations.  This isn't necessary because ZkStateReader keeps itself 
> up to date.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155203#comment-15155203
 ] 

Scott Blum commented on SOLR-8697:
--

I think it's such a subtle race that it would generally only show up with code 
changes; something as small as changing logging could trigger it or not 
trigger it.  So it might have been beaten into a state where it happened to 
work unless you breathed on it.  Drove me nuts for the longest time debugging 
it until I stumbled on the race. :D

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become the Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Resolved] (SOLR-8693) Improve ZkStateReader logging

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8693.
---
   Resolution: Fixed
Fix Version/s: master

Thanks Scott!

> Improve ZkStateReader logging
> -
>
> Key: SOLR-8693
> URL: https://issues.apache.org/jira/browse/SOLR-8693
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>Priority: Minor
>  Labels: easy, logging
> Fix For: master
>
> Attachments: SOLR-8693.patch, SOLR-8693.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Tweaking a couple of log levels and logging in ZkStateReader, we've been 
> trying to debug a rare issue where certain solr nodes will have an 
> inconsistent view of live_nodes.






[jira] [Commented] (SOLR-8693) Improve ZkStateReader logging

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155190#comment-15155190
 ] 

ASF subversion and git services commented on SOLR-8693:
---

Commit 3124a4debdeae794cd64b4d0e8b78d23aad73c5e in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3124a4d ]

SOLR-8693: Improve ZkStateReader logging.


> Improve ZkStateReader logging
> -
>
> Key: SOLR-8693
> URL: https://issues.apache.org/jira/browse/SOLR-8693
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>Priority: Minor
>  Labels: easy, logging
> Attachments: SOLR-8693.patch, SOLR-8693.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Tweaking a couple of log levels and logging in ZkStateReader, we've been 
> trying to debug a rare issue where certain solr nodes will have an 
> inconsistent view of live_nodes.






[jira] [Assigned] (SOLR-8693) Improve ZkStateReader logging

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8693:
-

Assignee: Mark Miller

> Improve ZkStateReader logging
> -
>
> Key: SOLR-8693
> URL: https://issues.apache.org/jira/browse/SOLR-8693
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>Priority: Minor
>  Labels: easy, logging
> Attachments: SOLR-8693.patch, SOLR-8693.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Tweaking a couple of log levels and logging in ZkStateReader, we've been 
> trying to debug a rare issue where certain solr nodes will have an 
> inconsistent view of live_nodes.






[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155186#comment-15155186
 ] 

ASF subversion and git services commented on LUCENE-6989:
-

Commit 0f29b3ec7fd638341915f83384656e72dff868ec in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0f29b3e ]

LUCENE-6989: Make casting to Runnable interface in cleaner hack easier to 
understand


> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-v2.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260]
> Unfortunately, the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner instance 
> via reflection, but trying to invoke it will throw one of the new Jigsaw 
> RuntimeExceptions because it is completely inaccessible, making our forceful 
> unmapping fail. There are also no changes in the garbage collector, so the 
> problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be made, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track of and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.
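The instanceof-check-and-cast idea described above can be sketched as follows.
A stand-in object is used here instead of the real sun.misc.Cleaner (which is
JDK-internal and would normally be obtained via reflection from a mapped
ByteBuffer); all names are illustrative, not Lucene's actual unmapper code.

```java
public class CleanerCastSketch {
    static boolean unmapped = false;

    // Stand-in for the cleaner object that would be obtained via reflection
    // from a mapped buffer (illustrative only).
    static Object getCleaner() {
        return (Runnable) () -> unmapped = true;
    }

    static void tryUnmap(Object cleaner) {
        // With the proposed JDK change, sun.misc.Cleaner would implement
        // Runnable, so a plain instanceof check and cast replaces the
        // reflective invoke that Jigsaw blocks.
        if (cleaner instanceof Runnable) {
            ((Runnable) cleaner).run();
        }
    }

    public static void main(String[] args) {
        tryUnmap(getCleaner());
        System.out.println(unmapped); // -> true
    }
}
```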






[jira] [Updated] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6989:
--
Attachment: LUCENE-6989-v2.patch

Here is a rewrite of the Java 8 / Lucene 6 code to make it easier to understand 
(the casting to the Runnable interface).

This helped me to debug the Java 9 b105 issue we have seen today.

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989-v2.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260]
> Unfortunately, the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner instance 
> via reflection, but trying to invoke it will throw one of the new Jigsaw 
> RuntimeExceptions because it is completely inaccessible, making our forceful 
> unmapping fail. There are also no changes in the garbage collector, so the 
> problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be made, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track of and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.






[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155160#comment-15155160
 ] 

Steve Rowe commented on LUCENE-6993:


Yeah, the generated code underwent some changes there, so the hack we use to 
disable buffer expansion will require some adjustment.  This technique should 
be in JFlex itself, though; I'll take a look this weekend at getting it in 
before the 1.7 release.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155154#comment-15155154
 ] 

Mike Drob commented on LUCENE-6993:
---

Using a newer version of JFlex breaks our existing macros...

{code}
  



  
  

  

  
{code}

There is no longer a {{totalRead}} variable tracked by the JFlex-generated 
code; instead we could check {{numRead}}, I think. However, from reading 
LUCENE-5897 it is unclear whether this behaviour has been fixed in later JFlex 
releases, in which case we wouldn't need the "-and-disable-buffer-expansion" 
macro at all.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[JENKINS] Lucene-Solr-5.5-Windows (64bit/jdk1.8.0_72) - Build # 14 - Still Failing!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/14/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.rule.RulesTest.doIntegrationTest

Error Message:
Error from server at http://127.0.0.1:53280/e/jk: Could not identify nodes 
matching the rules [{"cores":"<4"}, {   "replica":"<2",   "node":"*"}, 
{"freedisk":">1"}]  tag values{   "127.0.0.1:53297_e%2Fjk":{ 
"node":"127.0.0.1:53297_e%2Fjk", "cores":1, "freedisk":1},   
"127.0.0.1:53347_e%2Fjk":{ "node":"127.0.0.1:53347_e%2Fjk", "cores":2,  
   "freedisk":1},   "127.0.0.1:53313_e%2Fjk":{ 
"node":"127.0.0.1:53313_e%2Fjk", "cores":2, "freedisk":1},   
"127.0.0.1:53262_e%2Fjk":{ "node":"127.0.0.1:53262_e%2Fjk", "cores":1,  
   "freedisk":1},   "127.0.0.1:53280_e%2Fjk":{ 
"node":"127.0.0.1:53280_e%2Fjk", "cores":1, "freedisk":1},   
"127.0.0.1:5_e%2Fjk":{ "node":"127.0.0.1:5_e%2Fjk", "cores":1,  
   "freedisk":1}} Initial state for the coll : {"shard1":{ 
"127.0.0.1:53313_e%2Fjk":1, "127.0.0.1:53347_e%2Fjk":1}}

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:53280/e/jk: Could not identify nodes matching 
the rules [{"cores":"<4"}, {
  "replica":"<2",
  "node":"*"}, {"freedisk":">1"}]
 tag values{
  "127.0.0.1:53297_e%2Fjk":{
"node":"127.0.0.1:53297_e%2Fjk",
"cores":1,
"freedisk":1},
  "127.0.0.1:53347_e%2Fjk":{
"node":"127.0.0.1:53347_e%2Fjk",
"cores":2,
"freedisk":1},
  "127.0.0.1:53313_e%2Fjk":{
"node":"127.0.0.1:53313_e%2Fjk",
"cores":2,
"freedisk":1},
  "127.0.0.1:53262_e%2Fjk":{
"node":"127.0.0.1:53262_e%2Fjk",
"cores":1,
"freedisk":1},
  "127.0.0.1:53280_e%2Fjk":{
"node":"127.0.0.1:53280_e%2Fjk",
"cores":1,
"freedisk":1},
  "127.0.0.1:5_e%2Fjk":{
"node":"127.0.0.1:5_e%2Fjk",
"cores":1,
"freedisk":1}}
Initial state for the coll : {"shard1":{
"127.0.0.1:53313_e%2Fjk":1,
"127.0.0.1:53347_e%2Fjk":1}}
at 
__randomizedtesting.SeedInfo.seed([25880A5E9E88BC44:C0BB4DDF82FC4E46]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:576)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.rule.RulesTest.doIntegrationTest(RulesTest.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:964)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:939)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 

[jira] [Resolved] (SOLR-8701) CloudSolrClient decides that there are no healthy nodes to handle a request too early.

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8701.
---
   Resolution: Fixed
Fix Version/s: 6.0

> CloudSolrClient decides that there are no healthy nodes to handle a request 
> too early.
> --
>
> Key: SOLR-8701
> URL: https://issues.apache.org/jira/browse/SOLR-8701
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-8701.patch
>
>
> CloudSolrClient bails when it finds no leaders before trying replicas. We 
> should try all nodes before declaring we cannot serve the request.






[jira] [Updated] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6989:
--
Priority: Major  (was: Critical)

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260]
> Unfortunately, the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner instance 
> via reflection, but trying to invoke it will throw one of the new Jigsaw 
> RuntimeExceptions because it is completely inaccessible, making our forceful 
> unmapping fail. There are also no changes in the garbage collector, so the 
> problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be made, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and casting if 
> possible.
> I opened this issue to keep track of and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.






VOTE: RC1 Release apache-solr-ref-guide-5.5.pdf

2016-02-19 Thread Chris Hostetter


Please VOTE to release the following artifacts as 
apache-solr-ref-guide-5.5.pdf ...


https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.5-RC1/

-Hoss
http://www.lucidworks.com/




[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155062#comment-15155062
 ] 

Steve Rowe commented on LUCENE-6993:


+1

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






Re: VOTE: RC0 Release apache-solr-ref-guide-5.5.pdf

2016-02-19 Thread Chris Hostetter

Christine & Bernhard Frauendienst spotted a couple of very confusing 
formatting glitches, so i'm going to respin an RC1 in a few minutes.


: Date: Fri, 19 Feb 2016 11:07:46 -0700 (MST)
: From: Chris Hostetter 
: To: Lucene Dev 
: Cc: gene...@lucene.apache.org
: Subject: Re: VOTE: RC0 Release apache-solr-ref-guide-5.5.pdf
: 
: 
: Heh ... replying back with general@lucene CC'ed correctly this time.
: 
: 
: 
: : Date: Fri, 19 Feb 2016 11:06:36 -0700 (MST)
: : From: Chris Hostetter 
: : To: Lucene Dev 
: : Cc: gene...@lucene.apache.og
: : Subject: VOTE: RC0 Release apache-solr-ref-guide-5.5.pdf
: : 
: : 
: : Please vote to release the following artifacts as
: : apache-solr-ref-guide-5.5.pdf
: : 
: : 
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.5-RC0
: : 
: : 
: : 
: : -Hoss
: : http://www.lucidworks.com/
: : 
: 
: -Hoss
: http://www.lucidworks.com/
: 

-Hoss
http://www.lucidworks.com/




[jira] [Comment Edited] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155033#comment-15155033
 ] 

Uwe Schindler edited comment on LUCENE-6989 at 2/19/16 10:42 PM:
-

No, keep it open, as the final word is not yet spoken. The next changes in Java 
are underway.

In addition I have a small change for this commit, which makes the code easier 
to understand (it comes a bit later). I am also debugging issues with build 105 
at the moment.

This issue is about the master branch. The fixes in 5.x were just the "backport".


was (Author: thetaphi):
No, keep it open, as the final word is not yet spoken. The next changes in Java 
are underway.

In addition I have a small change for this commit, which makes the code easier 
to understand (it comes a bit later). I am also debugging issues with build 105 
at the moment.

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can still get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. There are 
> also no changes in the garbage collector, so the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This commit will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and a cast 
> where possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.
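The instanceof-check-and-cast fallback described above can be sketched as 
follows. This is a hypothetical standalone sketch, not the actual MMapDirectory 
code: it tries the proposed Runnable path first, falls back to the pre-Java-9 
{{clean()}} call, and degrades gracefully on JVMs where the cleaner is 
inaccessible.

```java
import java.io.RandomAccessFile;
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnmapSketch {
    // Attempt forceful unmapping; returns true if the buffer was unmapped.
    static boolean tryUnmap(MappedByteBuffer buffer) {
        try {
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            if (cleaner instanceof Runnable) {
                // Proposed Java 9 path: the internal Cleaner implements Runnable.
                ((Runnable) cleaner).run();
                return true;
            }
            // Pre-Java-9 path: invoke sun.misc.Cleaner#clean() reflectively.
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.setAccessible(true);
            cleanMethod.invoke(cleaner);
            return true;
        } catch (Throwable t) {
            // Cleaner inaccessible on this JVM: fall back to waiting for GC.
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("unmap", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 16);
            System.out.println("unmap " + (tryUnmap(buf) ? "succeeded" : "unavailable"));
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

Either outcome is fine for the sketch; the point is the instanceof check lets 
one code path work on both old and proposed-new JVMs.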






[jira] [Comment Edited] (LUCENE-7036) nio.Paths and nio.Files package are used in StringHelper, but they are restricted in many infrastructure and platforms

2016-02-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155029#comment-15155029
 ] 

Shawn Heisey edited comment on LUCENE-7036 at 2/19/16 10:43 PM:


I hope I'm relaying correct information here.  This is my current understanding:

When using the standard directory implementation, Lucene has a fundamental 
dependency on the classes that you're asking about.  This has been the 
situation since version 4.8.0, when Java 7 became the minimum.  Before that 
version, Lucene used the File class for I/O.  The switch to NIO2 has made 
Lucene a lot more stable.

If the abstraction layers you mentioned are specific to the service providers, 
changing this would make Lucene incompatible with off-the-shelf JVMs.

It might be possible to create your own Directory implementation that uses the 
abstraction layer provided by the service.  If their license is compatible with 
the Apache license, that addition could be included in Lucene as a contrib 
module.



was (Author: elyograg):
I hope I'm relaying correct information here.  This is my current understanding:

When using the standard directory implementation, Lucene has a fundamental 
dependency on the classes that you're asking about.  This has been the 
situation since version 4.8.0, when Java 7 became the minimum.  Before that 
version, Lucene used the File class for I/O.  The switch to NIO2 has made 
Lucene a lot more stable.

If the abstraction layers you mentioned are specific to the service providers, 
changing this would make Lucene incompatible with off-the-shelf JVMs.

It might be possible to create your own Directory implementation that uses the 
abstraction layer provided by the service, and such an addition could be 
included in Lucene as a contrib module.


> nio.Paths and nio.Files package are used in StringHelper, but they are 
> restricted in many infrastructure and platforms
> --
>
> Key: LUCENE-7036
> URL: https://issues.apache.org/jira/browse/LUCENE-7036
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.1, 5.2, 5.3, 5.4, 5.5
>Reporter: Forrest Sun
>
> The nio.Paths and nio.Files packages are used in StringHelper, but they are 
> restricted on many infrastructures and platforms such as Google App Engine.
> The use of Paths and Files is not related to the main function of Lucene.
> It would be better to provide an interface to store system properties instead 
> of using the file API in StringHelper directly.






[jira] [Updated] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6989:
--
Fix Version/s: master

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Fix For: master
>
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can still get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. There are 
> also no changes in the garbage collector, so the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This commit will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and a cast 
> where possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.






[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155035#comment-15155035
 ] 

Robert Muir commented on LUCENE-6993:
-

I think we should be ok. As far as I understand it, JFlex will respect the 
Unicode directive in the grammar and generate the equivalent state machine. 
But regenerating the "old grammars" means we get bugfixes from JFlex: e.g. 
performance or memory improvements or whatever improved there, so I think it's 
the right thing to do.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-02-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155031#comment-15155031
 ] 

Hoss Man commented on SOLR-445:
---

bq. Huh? What does SOLR-8633 have to do with calling setException?

Sorry, nothing ... it's been a while since I looked at the specifics of that 
code and I spaced out on what we were actually talking about.

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="date">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  






[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155033#comment-15155033
 ] 

Uwe Schindler commented on LUCENE-6989:
---

No, keep it open, as the final word is not yet spoken. The next changes in Java 
are underway.

In addition I have a small change for this commit, which makes the code easier 
to understand (it comes a bit later). I am also debugging issues with build 105 
at the moment.

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260].
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can still get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. There are 
> also no changes in the garbage collector, so the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This commit will likely be done, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and a cast 
> where possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.






[jira] [Commented] (LUCENE-7036) nio.Paths and nio.Files package are used in StringHelper, but they are restricted in many infrastructure and platforms

2016-02-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155029#comment-15155029
 ] 

Shawn Heisey commented on LUCENE-7036:
--

I hope I'm relaying correct information here.  This is my current understanding:

When using the standard directory implementation, Lucene has a fundamental 
dependency on the classes that you're asking about.  This has been the 
situation since version 4.8.0, when Java 7 became the minimum.  Before that 
version, Lucene used the File class for I/O.  The switch to NIO2 has made 
Lucene a lot more stable.

If the abstraction layers you mentioned are specific to the service providers, 
changing this would make Lucene incompatible with off-the-shelf JVMs.

It might be possible to create your own Directory implementation that uses the 
abstraction layer provided by the service, and such an addition could be 
included in Lucene as a contrib module.


> nio.Paths and nio.Files package are used in StringHelper, but they are 
> restricted in many infrastructure and platforms
> --
>
> Key: LUCENE-7036
> URL: https://issues.apache.org/jira/browse/LUCENE-7036
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.1, 5.2, 5.3, 5.4, 5.5
>Reporter: Forrest Sun
>
> The nio.Paths and nio.Files packages are used in StringHelper, but they are 
> restricted on many infrastructures and platforms such as Google App Engine.
> The use of Paths and Files is not related to the main function of Lucene.
> It would be better to provide an interface to store system properties instead 
> of using the file API in StringHelper directly.






[jira] [Commented] (LUCENE-7036) nio.Paths and nio.Files package are used in StringHelper, but they are restricted in many infrastructure and platforms

2016-02-19 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155028#comment-15155028
 ] 

Uwe Schindler commented on LUCENE-7036:
---

You won't be able to use Lucene at all without {{java.nio.file.*}} classes. In 
addition, StringHelper catches SecurityExceptions and handles them accordingly.
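The defensive pattern described here can be sketched as follows; the class and 
method names are illustrative, not Lucene's actual StringHelper code. The idea 
is simply to catch the SecurityException and degrade instead of failing:

```java
import java.nio.file.FileSystems;

public class GuardedNioProbe {
    // Probe the working directory via java.nio.file, but tolerate platforms
    // that forbid file-system access (e.g. Google App Engine): a
    // SecurityException degrades to a sentinel value instead of aborting.
    static String probeWorkingDir() {
        try {
            return FileSystems.getDefault().getPath(".").toAbsolutePath().toString();
        } catch (SecurityException e) {
            // Restricted platform: fall back gracefully.
            return "unavailable";
        }
    }

    public static void main(String[] args) {
        System.out.println("cwd probe: " + probeWorkingDir());
    }
}
```

On an ordinary JVM the probe succeeds; on a sandboxed platform the same code 
path returns the fallback rather than a class-initialization failure.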

> nio.Paths and nio.Files package are used in StringHelper, but they are 
> restricted in many infrastructure and platforms
> --
>
> Key: LUCENE-7036
> URL: https://issues.apache.org/jira/browse/LUCENE-7036
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.1, 5.2, 5.3, 5.4, 5.5
>Reporter: Forrest Sun
>
> The nio.Paths and nio.Files packages are used in StringHelper, but they are 
> restricted on many infrastructures and platforms such as Google App Engine.
> The use of Paths and Files is not related to the main function of Lucene.
> It would be better to provide an interface to store system properties instead 
> of using the file API in StringHelper directly.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154996#comment-15154996
 ] 

Mark Miller commented on SOLR-8697:
---

Okay, good, but then was it after your changes? I don't recall seeing that test 
fail in a long, long time on our Jenkins 'cluster', and that's a bunch of 
machines running the tests continuously. I don't recall seeing it fail locally 
either.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node SolrCloud; it should become Overseer.
> 2) kill -9 the SolrCloud process and immediately start a new one.
> 3) The new process won't become Overseer. The old process's ZK leader-elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154993#comment-15154993
 ] 

Scott Blum commented on SOLR-8697:
--

Actually OverseerTest.testShardLeaderChange() DID catch this race for me.  But 
only rarely.  Debugging that flake is how I uncovered the race.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node SolrCloud; it should become Overseer.
> 2) kill -9 the SolrCloud process and immediately start a new one.
> 3) The new process won't become Overseer. The old process's ZK leader-elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154976#comment-15154976
 ] 

Mark Miller commented on SOLR-8697:
---

bq. cancelElection() and runLeaderProcess() can race with each other. If the 
local process is trying to cancel right as it becomes leader, cancelElection() 
won't see a leaderZkNodeParentVersion yet, so it won't try to delete the leader 
registration. Meanwhile, runLeaderProcess() still succeeds in creating the 
leader registration. The call to super.cancelElection() does remove us from the 
queue, but the dead leader registration is left there.

Any thoughts on why the existing stress tests for leader election can't catch 
this? Can we beef something up?
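The quoted race is a classic check-then-act: the cancel path tests a field that 
the leader path publishes only after creating the registration. A minimal 
deterministic sketch (hypothetical names, not Solr's actual LeaderElector code) 
that forces the bad interleaving:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CheckThenActRace {
    // Hypothetical stand-in for leaderZkNodeParentVersion: null until the
    // leader registration has been created and its version published.
    static final AtomicReference<Integer> parentVersion = new AtomicReference<>();
    static volatile boolean registrationLeaked;

    static void runLeaderProcess() {
        // Creates the leader registration, then publishes the version.
        registrationLeaked = true;      // registration now exists
        parentVersion.set(1);
    }

    static void cancelElection() {
        // Check-then-act: if the version isn't visible yet, cleanup is skipped.
        if (parentVersion.get() != null) {
            registrationLeaked = false; // delete the leader registration
        }
    }

    public static void main(String[] args) {
        // Run the two steps sequentially in the leak-producing order so the
        // bad interleaving is deterministic: cancel observes no version, then
        // the registration is created afterwards and nobody deletes it.
        cancelElection();
        runLeaderProcess();
        System.out.println("leaked registration: " + registrationLeaked);
    }
}
```

It prints {{leaked registration: true}}: exactly the dead-leader-registration 
outcome described above. A stress test would have to hit this narrow window, 
which is presumably why the existing tests rarely catch it.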

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node SolrCloud; it should become Overseer.
> 2) kill -9 the SolrCloud process and immediately start a new one.
> 3) The new process won't become Overseer. The old process's ZK leader-elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154965#comment-15154965
 ] 

ASF subversion and git services commented on SOLR-8697:
---

Commit 9418369b46586818467109e482b70ba41e90d4ed in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9418369 ]

SOLR-8697: Scope ZK election nodes by session to prevent elections from 
interfering with each other and other small LeaderElector improvements.


> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node SolrCloud; it should become Overseer.
> 2) kill -9 the SolrCloud process and immediately start a new one.
> 3) The new process won't become Overseer. The old process's ZK leader-elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154950#comment-15154950
 ] 

Mark Miller commented on SOLR-8697:
---

FYI, we are in a special little place where we can break back-compat and don't 
have to consider rolling upgrades, because the next release is 6.0. We don't 
have much time before 6.0 branches, though, I think.

Patch looks good to me. Others should take a look as well, but I'll commit to 
get Jenkins cranking on it.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node SolrCloud; it should become Overseer.
> 2) kill -9 the SolrCloud process and immediately start a new one.
> 3) The new process won't become Overseer. The old process's ZK leader-elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154949#comment-15154949
 ] 

Mike Drob commented on LUCENE-6993:
---

bq. I think we need to regenerate still, because there are new 
characters/character property changes so the actual tokenizer will change (even 
if the rules stay the same: the alphabet got bigger).

Ok. My current plan is to copy all existing tokenizers to std50 packages, 
update the factories to be cognizant of the Lucene version, update the current 
JFlex files to all use Unicode 8.0, and then regenerate all of the new 
tokenizer classes.

Some of the tokenizers have a Unicode 3.0 directive, which indicates that they 
haven't been touched in a long time. This worries me a bit, but I'll see how it 
goes.
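The directive in question sits in the options section at the top of each JFlex 
grammar. A minimal, hypothetical grammar after the bump (illustrative names, 
not one of Lucene's actual .jflex files) would read:

```
%%
%unicode 8.0
%class SketchTokenizerImpl
%%
[\p{L}]+   { /* hypothetical action: emit a word token */ }
[^]        { /* skip anything else */ }
```

Bumping the {{%unicode}} version changes the character classes (the "alphabet") 
that JFlex compiles into the state machine, which is why the generated classes 
must be regenerated even when the rules themselves are unchanged.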

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[jira] [Commented] (LUCENE-7036) nio.Paths and nio.Files package are used in StringHelper, but they are restricted in many infrastructure and platforms

2016-02-19 Thread Forrest Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154947#comment-15154947
 ] 

Forrest Sun commented on LUCENE-7036:
-

Google App Engine and other Platform-as-a-Service infrastructures provide a 
layer of abstraction to manage storage, and file I/O is restricted in many 
cases. Such services fail with a class-initialization error if users use these 
classes.

> nio.Paths and nio.Files package are used in StringHelper, but they are 
> restricted in many infrastructure and platforms
> --
>
> Key: LUCENE-7036
> URL: https://issues.apache.org/jira/browse/LUCENE-7036
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.1, 5.2, 5.3, 5.4, 5.5
>Reporter: Forrest Sun
>
> The nio.Paths and nio.Files packages are used in StringHelper, but they are 
> restricted on many infrastructures and platforms such as Google App Engine.
> The use of Paths and Files is not related to the main function of Lucene.
> It would be better to provide an interface to store system properties instead 
> of using the file API in StringHelper directly.






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 938 - Still Failing

2016-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/938/

2 tests failed.
FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection reset

Stack Trace:
java.net.SocketException: Connection reset
at 
__randomizedtesting.SeedInfo.seed([293C398EC0ED4E00:82C6249B1F31C82E]:0)
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:158)
at 
org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:50)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:195)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154939#comment-15154939
 ] 

Mark Miller commented on SOLR-8697:
---

bq. Bringing in Curator at some point would be something I'd only advocate for 
incrementally and in pieces, like replace our DQ with Curator's, etc.

Yeah, I suppose if we had some consensus to push it forward over time, that's a 
viable option.

bq.  If an outside party forcibly deletes our node, we should put ourselves at 
the back of the line.

Yeah, that sounds like an interesting improvement. Much nicer than making a 
bunch of distrib calls.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Assigned] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8697:
-

Assignee: Mark Miller

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154936#comment-15154936
 ] 

Mike Drob commented on LUCENE-6993:
---

Question about what is proper behaviour in terms of backwards compatibility 
here...

Upgrading JFlex from 1.6.0 to 1.6.1 (and 1.7.0, I assume) changes the generated 
output. I have no idea if the behaviour is identical between the new class 
files and the old. I imagine that we want to keep the Impls generated by the 
old version when operating with an old Lucene match version, rather than 
regenerating those with the new JFlex. If so, I'll drop the work I did on 
updating the jflex-legacy task, since it doesn't make sense to keep around (it 
wouldn't generate code to match what is in source control).

Does this make sense?

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-02-19 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154933#comment-15154933
 ] 

Robert Muir commented on LUCENE-6993:
-

I think we need to regenerate still, because there are new characters/character 
property changes so the actual tokenizer will change (even if the rules stay 
the same: the alphabet got bigger).

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154919#comment-15154919
 ] 

Mark Miller commented on SOLR-8697:
---

You can leave the old patches by the way. We tend to leave the history and just 
pull the latest patch with the same file name.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Resolved] (SOLR-8656) PeerSync should use same nUpdates everywhere

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8656.
---
   Resolution: Fixed
Fix Version/s: master

Thanks Ramsey!

> PeerSync should use same nUpdates everywhere
> 
>
> Key: SOLR-8656
> URL: https://issues.apache.org/jira/browse/SOLR-8656
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master, 5.4.1
>Reporter: Ramsey Haddad
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master
>
> Attachments: solr-8656.patch
>
>
> PeerSync requests information on the most recent nUpdates updates from 
> another instance to determine whether PeerSync can succeed. The value of 
> nUpdates can be customized in solrconfig.xml: 
> UpdateHandler.UpdateLog.NumRecordsToKeep.
> PeerSync can be initiated in a number of different paths. One path to start 
> PeerSync (leader-initiated sync) is incorrectly still using a hard-coded 
> value of nUpdates=100.
> This change fixes leader-initiated-sync code path to also pick up the value 
> of nUpdates from the customized/configured value.
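The fix described above amounts to making every sync path read the same configured value instead of one path hard-coding 100. A minimal standalone sketch (not actual Solr code; the config map and method names here are illustrative):

```java
// Illustrative sketch of the SOLR-8656 fix: every PeerSync entry point,
// including leader-initiated sync, should derive nUpdates from the
// configured numRecordsToKeep rather than a hard-coded 100.
import java.util.Map;

public class NUpdatesDemo {
    // The old hard-coded value now serves only as the default.
    static final int DEFAULT_NUM_RECORDS_TO_KEEP = 100;

    /** Single source of truth for nUpdates, shared by every sync path. */
    static int numUpdates(Map<String, Integer> updateLogConfig) {
        return updateLogConfig.getOrDefault("numRecordsToKeep",
                                            DEFAULT_NUM_RECORDS_TO_KEEP);
    }

    public static void main(String[] args) {
        // Customized in solrconfig.xml -> both paths see 500.
        System.out.println(numUpdates(Map.of("numRecordsToKeep", 500))); // 500
        // Not configured -> both paths fall back to the same default.
        System.out.println(numUpdates(Map.of()));                        // 100
    }
}
```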






[jira] [Commented] (SOLR-8656) PeerSync should use same nUpdates everywhere

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154910#comment-15154910
 ] 

ASF subversion and git services commented on SOLR-8656:
---

Commit 771f14cb6e476373e94169be05c1eadf816ca5b6 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=771f14c ]

SOLR-8656: PeerSync should use same nUpdates everywhere.


> PeerSync should use same nUpdates everywhere
> 
>
> Key: SOLR-8656
> URL: https://issues.apache.org/jira/browse/SOLR-8656
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: master, 5.4.1
>Reporter: Ramsey Haddad
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master
>
> Attachments: solr-8656.patch
>
>
> PeerSync requests information on the most recent nUpdates updates from 
> another instance to determine whether PeerSync can succeed. The value of 
> nUpdates can be customized in solrconfig.xml: 
> UpdateHandler.UpdateLog.NumRecordsToKeep.
> PeerSync can be initiated in a number of different paths. One path to start 
> PeerSync (leader-initiated sync) is incorrectly still using a hard-coded 
> value of nUpdates=100.
> This change fixes leader-initiated-sync code path to also pick up the value 
> of nUpdates from the customized/configured value.






[jira] [Resolved] (LUCENE-7024) smokeTestRelease.py's maven checker needs to switch from svn to git

2016-02-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7024.

Resolution: Fixed
  Assignee: Steve Rowe

Yes, thanks for the reminder Mike, I pushed the changes to branch_5_5, master 
and branch_5x.

> smokeTestRelease.py's maven checker needs to switch from svn to git
> ---
>
> Key: LUCENE-7024
> URL: https://issues.apache.org/jira/browse/LUCENE-7024
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Steve Rowe
> Fix For: 5.5, master
>
> Attachments: LUCENE-7024.patch
>
>
> The {{checkMaven}} function in the smoke tester seems to be loading known 
> branches from SVN to locate the branch currently being released and then 
> crawling for {{pom.xml.template}} files from the svn server.  We need to 
> switch this to crawling git instead, but I'm not too familiar with what's 
> happening here ...
> Maybe [~steve_rowe] can help?






[jira] [Resolved] (SOLR-8416) The collections create API should return after all replicas are active.

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8416.
---
   Resolution: Fixed
Fix Version/s: master

Thanks Michael!

> The collections create API should return after all replicas are active. 
> 
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>Assignee: Mark Miller
> Fix For: master
>
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch, 
> SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In a 
> large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return after that.






[jira] [Updated] (LUCENE-7024) smokeTestRelease.py's maven checker needs to switch from svn to git

2016-02-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7024:
---
Fix Version/s: master
   5.5

> smokeTestRelease.py's maven checker needs to switch from svn to git
> ---
>
> Key: LUCENE-7024
> URL: https://issues.apache.org/jira/browse/LUCENE-7024
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 5.5, master
>
> Attachments: LUCENE-7024.patch
>
>
> The {{checkMaven}} function in the smoke tester seems to be loading known 
> branches from SVN to locate the branch currently being released and then 
> crawling for {{pom.xml.template}} files from the svn server.  We need to 
> switch this to crawling git instead, but I'm not too familiar with what's 
> happening here ...
> Maybe [~steve_rowe] can help?






[jira] [Commented] (LUCENE-7024) smokeTestRelease.py's maven checker needs to switch from svn to git

2016-02-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154891#comment-15154891
 ] 

Michael McCandless commented on LUCENE-7024:


[~steve_rowe] can this be closed now?

> smokeTestRelease.py's maven checker needs to switch from svn to git
> ---
>
> Key: LUCENE-7024
> URL: https://issues.apache.org/jira/browse/LUCENE-7024
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-7024.patch
>
>
> The {{checkMaven}} function in the smoke tester seems to be loading known 
> branches from SVN to locate the branch currently being released and then 
> crawling for {{pom.xml.template}} files from the svn server.  We need to 
> switch this to crawling git instead, but I'm not too familiar with what's 
> happening here ...
> Maybe [~steve_rowe] can help?






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154888#comment-15154888
 ] 

Scott Blum commented on SOLR-8697:
--

Yeah, totally agreed on refactoring and trying to fix core bugs!  Bringing in 
Curator at some point would be something I'd only advocate for incrementally 
and in pieces, like replace our DQ with Curator's, etc.  Moving everything over 
in a short period of time would be a pipe dream anyway.

Back on the topic of LeaderElector, I think this patch is in a pretty good 
state now.  The only thing I want to consider doing in the short term (after 
this patch) is that, in addition to watching the node ahead of you, I think we 
should also be watching our own node, whether or not we're leader.  If an 
outside party forcibly deletes our node, we should put ourselves at the back of 
the line.  If you think about it, if we could trust that behavior, something 
like RebalanceLeaders wouldn't even need to be a distributed request; overseer 
could just delete the current leader elect node and trust the owner to do the 
right thing.
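The election bookkeeping being discussed can be sketched without a live ZooKeeper (plain Java; the znode names below are illustrative, and real code would set ZK watches instead of computing indices). Each participant watches the sequential node directly ahead of it, and the proposal above would additionally watch its own node and rejoin at the back if an outside party deletes it:

```java
// Sketch of the "watch the node ahead of you" leader-election pattern
// under discussion. No ZooKeeper dependency; this only models which
// znode a participant should watch given the sorted election queue.
import java.util.List;

public class ElectionQueueDemo {
    /** Returns the znode this participant should watch, or null if it is the leader. */
    static String nodeToWatch(List<String> sortedSeqNodes, String self) {
        int i = sortedSeqNodes.indexOf(self);
        if (i < 0) {
            // Per the comment above: if our node vanished (e.g. forcibly
            // deleted), we should re-enqueue at the back of the line.
            throw new IllegalArgumentException("not in election: " + self);
        }
        return i == 0 ? null : sortedSeqNodes.get(i - 1);
    }

    public static void main(String[] args) {
        List<String> q = List.of("n_0000000001", "n_0000000002", "n_0000000003");
        System.out.println(nodeToWatch(q, "n_0000000001")); // null (leader)
        System.out.println(nodeToWatch(q, "n_0000000003")); // n_0000000002
    }
}
```

With that invariant trusted, deleting a participant's own znode becomes a safe signal, which is what would let an overseer demote a leader without a distributed request.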

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes

2016-02-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154889#comment-15154889
 ] 

Michael McCandless commented on LUCENE-6989:


[~thetaphi] can this be closed now?

> Implement MMapDirectory unmapping for coming Java 9 changes
> ---
>
> Key: LUCENE-6989
> URL: https://issues.apache.org/jira/browse/LUCENE-6989
> Project: Lucene - Core
>  Issue Type: Task
>  Components: core/store
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: Java9
> Attachments: LUCENE-6989-disable5x.patch, 
> LUCENE-6989-disable5x.patch, LUCENE-6989.patch, LUCENE-6989.patch, 
> LUCENE-6989.patch, LUCENE-6989.patch
>
>
> Originally, the sun.misc.Cleaner interface was declared as "critical API" in 
> [JEP 260|http://openjdk.java.net/jeps/260]
> Unfortunately the decision was changed in favor of an officially supported 
> {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all 
> existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes 
> our forceful unmapping to no longer work: we can get the cleaner 
> instance via reflection, but trying to invoke it will throw one of the new 
> Jigsaw RuntimeExceptions because it is completely inaccessible. There are 
> also no changes in the garbage collector, so the problem still exists.
> For more information see this [mailing list 
> thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243].
> This change will likely be made, leaving our unmapping efforts no longer 
> working. Alan Bateman is aware of this issue and will open a new issue at 
> OpenJDK to allow forceful unmapping without using the now-private 
> sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner 
> implement the Runnable interface, so we can simply cast to Runnable and call 
> the run() method to unmap. The code would then work. This will lead to minor 
> changes in our unmapper in MMapDirectory: an instanceof check and a cast if 
> possible.
> I opened this issue to keep track and implement the changes as soon as 
> possible, so people will have working unmapping when Java 9 comes out. 
> Current Lucene versions will no longer work with Java 9.
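The proposed instanceof-check-and-cast can be shown with a stand-in class. `SimulatedCleaner` below is a mock, not the real `sun.misc.Cleaner` (which in real code would be fetched from the mapped buffer via reflection); only the dispatch logic is the point:

```java
// Stand-in sketch of the proposed MMapDirectory change: if the (reflectively
// obtained) cleaner object implements Runnable, cast and run it; otherwise
// fall back to the old reflective invoke. SimulatedCleaner is a mock.
public class UnmapDispatchDemo {
    /** Mock of a future sun.misc.Cleaner that implements Runnable. */
    static final class SimulatedCleaner implements Runnable {
        boolean cleaned = false;
        @Override public void run() { cleaned = true; } // real code would unmap here
    }

    /** Returns true if the cleaner could be run via the Runnable cast. */
    static boolean tryUnmap(Object cleaner) {
        if (cleaner instanceof Runnable) {   // the proposed instanceof check
            ((Runnable) cleaner).run();      // ...and cast
            return true;
        }
        return false; // real code would fall back to reflective Cleaner.clean()
    }

    public static void main(String[] args) {
        SimulatedCleaner c = new SimulatedCleaner();
        System.out.println(tryUnmap(c)); // true
        System.out.println(c.cleaned);   // true
    }
}
```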






[jira] [Commented] (LUCENE-7033) ant prepare-release-no-sign fails intermittently

2016-02-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154883#comment-15154883
 ] 

Michael McCandless commented on LUCENE-7033:


There was a long email thread that degenerated into a gitstorm about this 
issue, and I couldn't tell: is this now solved?

> ant prepare-release-no-sign fails intermittently
> 
>
> Key: LUCENE-7033
> URL: https://issues.apache.org/jira/browse/LUCENE-7033
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Minor
> Attachments: capture-2.png
>
>
> Mike reported this on the mailing list. This is fully reproducible, you just 
> need to run it twice:
> {code}
> cd lucene
> # succeeds
> ant prepare-release-no-sign
> # fails
> ant prepare-release-no-sign
> {code}
> The problem is with this target that runs conditionally:
> {code}
> <!-- Ant snippet stripped by the mail archiver; only the untar destination
>      dest="${lucene.tgz.unpack.dir}" survives from the conditional target -->
> {code}
> I attach a diff from the two runs -- you can see the second one skipped this 
> task and then cleaned the output folder, which doesn't make sense.
> I don't know how to fix it, but I think this is the cause.






[jira] [Commented] (SOLR-8416) The collections create API should return after all replicas are active.

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154879#comment-15154879
 ] 

Mark Miller commented on SOLR-8416:
---

For some reason the commit doesn't seem to have been tagged in JIRA. This is 
committed though. SHA:31437c9b43cf93128e284e278470a39b2012a6cb

> The collections create API should return after all replicas are active. 
> 
>
> Key: SOLR-8416
> URL: https://issues.apache.org/jira/browse/SOLR-8416
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Michael Sun
>Assignee: Mark Miller
> Attachments: SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch, 
> SOLR-8416.patch, SOLR-8416.patch, SOLR-8416.patch
>
>
> Currently the collection creation API returns once all cores are created. In a 
> large cluster the cores may not be alive for some period of time after they 
> are created. For anything requested during that period, Solr appears 
> unstable and can return failures. Therefore it's better that the collection 
> creation API wait for all cores to become alive and return after that.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154853#comment-15154853
 ] 

Mark Miller commented on SOLR-8697:
---

bq. full disclosure: I'm a committer

And you worked on GWT! Awesome. Have not used it in years, still madly in love 
with GWT.

bq. using third party libs 

We might have started with Curator, but when we started (the query side of 
SolrCloud was 2009 or 2010?) it was not around, or we did not know about it 
back then if it was.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.






[jira] [Comment Edited] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154833#comment-15154833
 ] 

Mark Miller edited comment on SOLR-8697 at 2/19/16 8:42 PM:


Curator has come up before. Personally, I have not wanted to try and mimic what 
we have or go through a protracted hardening process again. This stuff is all 
very touchy, and our tests def do not catch everything, so a rip and replace at 
that low level would be both very difficult and sure to introduce a lot of 
issues.

I think a lot of the problem is that devs like to favor just tossing crap on 
top of what exists, rather than trying to holistically move the design forward 
or make it right for what they want to add (Examples: OverseerNodePrioritizer 
and RebalanceLeaders - which also made the election code much more dense). I 
feel a lot of "let's just make this work". I can't tell you how surprised I've 
been that some devs have come and built so much on some of the prototype code I 
initially laid out. I've always thought, how do you build so much on this 
without finding/fixing more core bugs and seeing other necessary improvements 
more things as you go? Not that it doesn't happen, but the scale has 
historically been way below what I think makes sense. Easy for me to say I 
guess. Anyway, it's great that you have already filed a bunch of issues :)

I'd rather focus on some refactoring than bringing in curator though. The 
implications of that would be pretty large and we have plenty of other more 
pressing issues I think.



> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  It should become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.
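The failure mode in step 3 is easier to see against the standard ZooKeeper election recipe that LeaderElector follows: sort the ephemeral sequential election nodes, the lowest sequence number leads, and every other participant watches only the node immediately ahead of it, recomputed from a fresh child list on every event (a stale list is exactly what lets a new node miss the old leader's still-present ephemeral node). A minimal pure-logic sketch; the class and method names here are hypothetical, not Solr's actual API:

```java
import java.util.ArrayList;
import java.util.List;

class ElectionSketch {

    // Extract the 10-digit sequence suffix ZooKeeper appends,
    // e.g. "n_0000000003" -> 3.
    static int seq(String node) {
        return Integer.parseInt(node.substring(node.lastIndexOf('_') + 1));
    }

    // Returns null if `me` holds the lowest sequence (it is the leader),
    // otherwise the predecessor node that `me` must set a watch on.
    // Must be re-run against a *fresh* child list on every watch event.
    static String nodeToWatch(List<String> children, String me) {
        List<String> sorted = new ArrayList<>(children);
        sorted.sort((a, b) -> Integer.compare(seq(a), seq(b)));
        int idx = sorted.indexOf(me);
        if (idx <= 0) return null; // lowest sequence: we lead
        return sorted.get(idx - 1); // otherwise watch our predecessor
    }
}
```

In the repro above, the dead process's node is still in the child list, so the new node is not the lowest and must successfully set a watch on it; failing to do so leaves nobody watching anything.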



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154833#comment-15154833
 ] 

Mark Miller commented on SOLR-8697:
---

Curator has come up before. Personally, I have not wanted to try and mimic what 
we have or go through a protracted hardening process again. This stuff is all 
very touchy, and our tests definitely do not catch everything, so a rip and replace at 
that low level would be both very difficult and sure to introduce a lot of 
issues.

I think a lot of the problem is that devs like to favor just tossing crap on 
top of what exists, rather than trying to holistically move the design forward 
or make it right for what they want to add (Examples: OverseerNodePrioritizer 
and RebalanceLeaders - which also made the election code much more dense). I 
feel a lot of "let's just make this work". I can't tell you how surprised I've 
been that some devs have come and built so much on some of the prototype code I 
initially laid out. I've always thought, how do you build so much on this 
without finding/fixing more core bugs and seeing other necessary improvements 
as you go? Not that it doesn't happen, but the scale has 
historically been way below what I think makes sense. Easy for me to say I 
guess. Anyway, it's great that you have already filed a bunch of issues :)

I'd rather focus on some refactoring than bringing in Curator though. The 
implications of that would be pretty large and we have plenty of other more 
pressing issues I think.







[jira] [Comment Edited] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154817#comment-15154817
 ] 

Scott Blum edited comment on SOLR-8697 at 2/19/16 8:31 PM:
---

I think part of the general problem with a lot of the ZK-interacting code is a 
lack of clean separation of concerns.  The relationships between LeaderElector 
and the various ElectionContext subclasses are pretty gnarly and incestuous.  
DistributedQueue had a similar kind of design problem before I extracted the 
app specific gnarly parts into OverseerTaskQueue.

Have we considered trying to migrate to, say, Apache Curator (full disclosure: 
I'm a committer)?  There are a lot of advantages to using third party libs for 
some of these common patterns like distributed queue, leader election, or even 
observing changes in a tree.  Those components tend to be reusable, better 
documented, with cleaner APIs, and have a natural resistance to spaghetti 
invasion.  (Examples: OverseerNodePrioritizer and RebalanceLeaders are 
intricately tied to implementation details of LeaderElector.)

A clean, reusable leader election component (with its own tests) that could 
simply be used in a few different contexts seems like a good place to be longer 
term.

That said, I hope this patch can simply clean up some of the existing bugs 
without being too disruptive.











[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154817#comment-15154817
 ] 

Scott Blum commented on SOLR-8697:
--

I think part of the general problem with a lot of the ZK-interacting code is a 
lack of clean separation of concerns.  The relationships between LeaderElector 
and the various ElectionContext subclasses are pretty gnarly and incestuous.  
DistributedQueue had a similar kind of design problem before I extracted the 
app specific gnarly parts into OverseerTaskQueue.

Have we considered trying to migrate to, say, Apache Curator (full disclosure: 
I'm a committer)?  There are a lot of advantages to using third party libs for 
some of these common patterns like distributed queue, leader election, or even 
observing changes in a tree.  Those components tend to be reusable, better 
documented, with cleaner APIs, and have a natural resistance to spaghetti 
invasion.  (Examples: OverseerNodePrioritizer and RebalanceLeaders are 
intricately tied to implementation details of LeaderElector.)

A clean, reusable leader election component (with its own tests) that could 
simply be used in a few different contexts seems like a good place to be longer 
term.

That said, I hope this patch can simply clean up some of the existing bugs 
without being too disruptive.








[jira] [Commented] (SOLR-8707) Distribute commit requests evenly

2016-02-19 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154807#comment-15154807
 ] 

Michael Sun commented on SOLR-8707:
---

[~markrmil...@gmail.com] Randomly staggering the start of auto commit can help. 
Another way is to delay the first commit of each core by a different amount of 
time. For example, if there are 6 cores and the auto commit interval is 60 
seconds, the first core commits without delay, the second core does its first 
commit after 10 seconds and then commits at 60-second intervals afterwards, and 
so on.
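The per-core stagger described above reduces to a tiny delay calculation: core k of N gets an initial delay of k * interval / N before its first commit, then commits every interval. A sketch with a hypothetical helper class, not actual Solr code:

```java
class CommitStagger {

    // Initial delay (ms) before core `coreIndex`'s first auto commit,
    // spreading N cores evenly across one commit interval.
    static long initialDelayMs(int coreIndex, int numCores, long commitIntervalMs) {
        return (coreIndex * commitIntervalMs) / numCores;
    }
}
```

With 6 cores and a 60,000 ms interval, cores 0..5 would start their commit cycles at 0 s, 10 s, 20 s, 30 s, 40 s, and 50 s, so no two cores commit at the same moment. In practice this could feed the initial-delay argument of a `ScheduledExecutorService.scheduleAtFixedRate` call.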

> Distribute commit requests evenly
> -
>
> Key: SOLR-8707
> URL: https://issues.apache.org/jira/browse/SOLR-8707
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Michael Sun
>
> In the current implementation, all Solr nodes start commits for all cores in 
> a collection at almost the same time. As a result, this creates a load spike 
> in the cluster at regular intervals, particularly when the collection is on 
> HDFS. The main reason is that all cores in a collection are created at almost 
> the same time and then commit at a fixed interval afterwards.
> It would be good to distribute the commit load evenly to avoid load spikes. 
> This helps improve performance and reliability in general.






[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 403 - Failure!

2016-02-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/403/
Java: multiarch/jdk1.7.0 -d64 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 3 object(s) that were not 
released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([DB131CB3D78F284A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12043 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.TestSolrConfigHandlerCloud_DB131CB3D78F284A-001/init-core-data-001
   [junit4]   2> 3090084 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[DB131CB3D78F284A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 3090084 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[DB131CB3D78F284A]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 3090087 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DB131CB3D78F284A]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3090087 INFO  (Thread-8684) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3090087 INFO  (Thread-8684) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3090187 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DB131CB3D78F284A]) [] 
o.a.s.c.ZkTestServer start zk server on port:58745
   [junit4]   2> 3090187 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DB131CB3D78F284A]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 3090188 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[DB131CB3D78F284A]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 3090190 INFO  (zkCallback-3041-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@28a664df 
name:ZooKeeperConnection Watcher:127.0.0.1:58745 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 3090190 INFO  

[jira] [Commented] (SOLR-8707) Distribute commit requests evenly

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154782#comment-15154782
 ] 

Mark Miller commented on SOLR-8707:
---

What do you want to do? Randomly stagger the starting of the auto commit a bit?







[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154793#comment-15154793
 ] 

Mark Miller commented on SOLR-445:
--

Let's not get too pedantic about adding comments to help future devs avoid bad 
decisions when we find bad decisions. Easier to just add the comment and make 
the code base a little easier to understand (which I've taken a stab at in the 
above branch).

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch, i.e.:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="date">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  
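"Option 2" above, i.e. recording failures and continuing with the remaining documents, can be sketched as a tolerant batch loop. This is purely illustrative: the types, names, and the string-id stand-in for a document are all hypothetical, not Solr's update handler API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class TolerantBatch {

    // Attempts to index every doc via `indexer`; instead of aborting on the
    // first failure, records the failed ids and continues. The returned list
    // is the extra "information to come back from the API" the text mentions.
    static List<String> addAll(List<String> docIds, Consumer<String> indexer) {
        List<String> failed = new ArrayList<>();
        for (String id : docIds) {
            try {
                indexer.accept(id); // may throw on a bad document
            } catch (RuntimeException e) {
                failed.add(id);     // record the failure and keep going
            }
        }
        return failed;
    }
}
```

Applied to the example batch, doc 2 (the bad date) would come back in the failure list while docs 1 and 3 are still added.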






[jira] [Resolved] (SOLR-8633) DistributedUpdateProcess processCommit/deleteByQuery call finish on DUP and SolrCmdDistributor, which violates the lifecycle and can cause bugs.

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8633.
---
   Resolution: Fixed
Fix Version/s: master

> DistributedUpdateProcess processCommit/deleteByQuery call finish on DUP and 
> SolrCmdDistributor, which violates the lifecycle and can cause bugs.
> 
>
> Key: SOLR-8633
> URL: https://issues.apache.org/jira/browse/SOLR-8633
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: master
>
> Attachments: SOLR-8633.patch
>
>
> Trying to wrap my head around a weird bug in my experiments with SOLR-445, I 
> realized that {{DUP.processDelete}} has a direct call to {{finish()}}.
> This violates the normal lifecycle of an UpdateProcessor (finish is only 
> supposed to be called exactly once after processing any/all UpdateCommands) 
> and could potentially break any UpdateProcessors configured after DUP (or in 
> my case: processors configured _before_ DUP that expect to be in charge of 
> calling finish, and catching any resulting exceptions, as part of the normal 
> life cycle)
> Independent of how it impacts other update processors, this also means that:
> # all the logic in {{DUP.doFinish}} is getting executed twice -- which seems 
> kind of expensive/dangerous to me since there is leader initiated recovery 
> involved in this method
> # {{SolrCmdDistributor.finish()}} gets called twice, which means 
> {{StreamingSolrClients.shutdown()}} gets called twice, which means 
> {{ConcurrentUpdateSolrClient.close()}} gets called twice ... it seems like 
> we're just getting really lucky that (as configured by DUP) all of these 
> resources are still usable after being finished/shutdown/closed
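The double-finish() hazard described above is the classic argument for an idempotent shutdown guard: the second call becomes a no-op rather than re-running recovery and close logic. A generic sketch of that pattern, not the actual SOLR-8633 fix:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class IdempotentFinish {

    private final AtomicBoolean finished = new AtomicBoolean(false);
    private int finishCount = 0; // stands in for the real shutdown work

    // Runs the shutdown work at most once, even if callers violate the
    // lifecycle and call finish() twice; returns true only on the first call.
    boolean finish() {
        if (!finished.compareAndSet(false, true)) {
            return false; // already finished: ignore the duplicate call
        }
        finishCount++;    // e.g. shutdown clients, close resources
        return true;
    }

    int timesFinished() {
        return finishCount;
    }
}
```

With a guard like this, a stray extra call from {{processCommit}} or {{processDelete}} would be harmless, though the lifecycle violation itself is still worth fixing.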






[jira] [Updated] (SOLR-8633) DistributedUpdateProcess processCommit/deleteByQuery call finish on DUP and SolrCmdDistributor, which violates the lifecycle and can cause bugs.

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8633:
--
Summary: DistributedUpdateProcess processCommit/deleteByQuery call finish 
on DUP and SolrCmdDistributor, which violates the lifecycle and can cause bugs. 
 (was: DistributedUpdateProcess.processCommit calls finish() - violates 
lifecycle, causes finish to be called twice (redundent code execution))







[jira] [Commented] (SOLR-8633) DistributedUpdateProcess.processCommit calls finish() - violates lifecycle, causes finish to be called twice (redundent code execution)

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154788#comment-15154788
 ] 

ASF subversion and git services commented on SOLR-8633:
---

Commit 8cd53a076b579ebc3be1fbb26875321e66a41608 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8cd53a0 ]

SOLR-8633: DistributedUpdateProcess processCommit/deleteByQuery call finish on 
DUP and SolrCmdDistributor, which violates the lifecycle and can cause bugs.


> DistributedUpdateProcess.processCommit calls finish() - violates lifecycle, 
> causes finish to be called twice (redundent code execution)
> ---
>
> Key: SOLR-8633
> URL: https://issues.apache.org/jira/browse/SOLR-8633
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Attachments: SOLR-8633.patch
>
>
> Trying to wrap my head around a weird bug in my experiments with SOLR-445, I 
> realized that {{DUP.processDelete}} has a direct call to {{finish()}}.
> This violates the normal lifecycle of an UpdateProcessor (finish is only 
> supposed to be called exactly once after processing any/all UpdateCommands) 
> and could potentially break any UpdateProcessors configured after DUP (or in 
> my case: processors configured _before_ DUP that expect to be in charge of 
> calling finish, and catching any resulting exceptions, as part of the normal 
> life cycle)
> Independent of how it impacts other update processors, this also means that:
> # all the logic in {{DUP.doFinish}} is getting executed twice -- which seems 
> kind of expensive/dangerous to me since there is leader initiated recovery 
> involved in this method
> # {{SolrCmdDistributor.finish()}} gets called twice, which means 
> {{StreamingSolrClients.shutdown()}} gets called twice, which means 
> {{ConcurrentUpdateSolrClient.close()}} gets called twice ... it seems like 
> we're just getting really lucky that (as configured by DUP) all of these 
> resources are still usable after being finished/shutdown/closed






[jira] [Updated] (SOLR-8708) DaemonStream should catch InterruptedException when reading underlying stream.

2016-02-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8708:
-
Affects Version/s: 6.0

> DaemonStream should catch InterruptedException when reading underlying stream.
> --
>
> Key: SOLR-8708
> URL: https://issues.apache.org/jira/browse/SOLR-8708
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Critical
> Fix For: 6.0
>
>
> Currently the DaemonStream only catches IOException when reading from the 
> underlying stream. This causes the DaemonStream to not shut down properly. 
> Jenkins failures look like this:
> {code}
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([A9AE0C8FDE484A6D]:0)Throwable #2: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=2859, name=Thread-971, 
> state=RUNNABLE, group=TGRP-StreamExpressionTest]
>[junit4]> Caused by: org.apache.solr.common.SolrException: Could not 
> load collection from ZK: parallelDestinationCollection1
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([A9AE0C8FDE484A6D]:0)
>[junit4]>  at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:959)
>[junit4]>  at 
> org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:517)
>[junit4]>  at 
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:189)
>[junit4]>  at 
> org.apache.solr.common.cloud.ClusterState.hasCollection(ClusterState.java:119)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:833)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
>[junit4]>  at 
> org.apache.solr.client.solrj.io.stream.UpdateStream.uploadBatchToCollection(UpdateStream.java:256)
>[junit4]>  at 
> org.apache.solr.client.solrj.io.stream.UpdateStream.read(UpdateStream.java:118)
>[junit4]>  at 
> org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.run(DaemonStream.java:245)
>[junit4]> Caused by: java.lang.InterruptedException
>[junit4]>  at java.lang.Object.wait(Native Method)
>[junit4]>  at java.lang.Object.wait(Object.java:502)
>[junit4]>  at 
> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
>[junit4]>  at 
> org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1153)
>[junit4]>  at 
> org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
>[junit4]>  at 
> org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:350)
>[junit4]>  at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
>[junit4]>  at 
> org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350)
>[junit4]>  at 
> org.apache.solr.common.cloud.ZkStateReader.fetchCollectionState(ZkStateReader.java:967)
>[junit4]>  at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:954)
>[junit4]>  ... 12 more
> {code}
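The fix described above amounts to treating interruption as a shutdown signal in the daemon's read loop: catch it, restore the thread's interrupt flag, and fall through to cleanup instead of letting the exception escape the thread. An illustrative stand-in, not the actual DaemonStream code:

```java
class InterruptibleLoop implements Runnable {

    volatile boolean closedCleanly = false;

    @Override
    public void run() {
        try {
            while (true) {
                readOnce(); // stand-in for reading the underlying stream
            }
        } catch (InterruptedException e) {
            // Interruption means "shut down": restore the flag for any
            // callers up the stack instead of swallowing it.
            Thread.currentThread().interrupt();
        } finally {
            closedCleanly = true; // the shutdown path always runs
        }
    }

    void readOnce() throws InterruptedException {
        Thread.sleep(5); // a blocking call that throws when interrupted
    }
}
```

Without the catch, the InterruptedException surfaces as an uncaught exception in the runner thread, which is exactly the UncaughtExceptionError the Jenkins trace shows.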






[jira] [Created] (SOLR-8708) DaemonStream should catch InterruptedException when reading underlying stream.

2016-02-19 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8708:


 Summary: DaemonStream should catch InterruptedException when 
reading underlying stream.
 Key: SOLR-8708
 URL: https://issues.apache.org/jira/browse/SOLR-8708
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein


Currently the DaemonStream only catches IOException when reading from the 
underlying stream. This causes the DaemonStream to not shut down properly. 
Jenkins failures look like this:

{code}
   [junit4]> at __randomizedtesting.SeedInfo.seed([A9AE0C8FDE484A6D]:0)
   [junit4]> Throwable #2: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=2859, name=Thread-971, state=RUNNABLE, group=TGRP-StreamExpressionTest]
   [junit4]> Caused by: org.apache.solr.common.SolrException: Could not load collection from ZK: parallelDestinationCollection1
   [junit4]>at __randomizedtesting.SeedInfo.seed([A9AE0C8FDE484A6D]:0)
   [junit4]>at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:959)
   [junit4]>at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:517)
   [junit4]>at org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:189)
   [junit4]>at org.apache.solr.common.cloud.ClusterState.hasCollection(ClusterState.java:119)
   [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:)
   [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:833)
   [junit4]>at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
   [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
   [junit4]>at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
   [junit4]>at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
   [junit4]>at org.apache.solr.client.solrj.io.stream.UpdateStream.uploadBatchToCollection(UpdateStream.java:256)
   [junit4]>at org.apache.solr.client.solrj.io.stream.UpdateStream.read(UpdateStream.java:118)
   [junit4]>at org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.run(DaemonStream.java:245)
   [junit4]> Caused by: java.lang.InterruptedException
   [junit4]>at java.lang.Object.wait(Native Method)
   [junit4]>at java.lang.Object.wait(Object.java:502)
   [junit4]>at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
   [junit4]>at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1153)
   [junit4]>at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
   [junit4]>at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:350)
   [junit4]>at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
   [junit4]>at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350)
   [junit4]>at org.apache.solr.common.cloud.ZkStateReader.fetchCollectionState(ZkStateReader.java:967)
   [junit4]>at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:954)
   [junit4]>... 12 more
{code}
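The pattern the issue calls for can be sketched in plain Java. This is a hypothetical sketch, not Solr's actual DaemonStream internals (all names here are illustrative): the worker loop catches InterruptedException alongside I/O errors, restores the interrupt flag, and falls through to normal cleanup instead of escaping as an uncaught error like the one above.

```java
import java.util.Arrays;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class DaemonLoopSketch {
    // Drains work items until a sentinel or an interrupt arrives.
    static int drainUntilInterrupted(BlockingQueue<Integer> queue) {
        int processed = 0;
        try {
            while (true) {
                Integer item = queue.poll(50, TimeUnit.MILLISECONDS);
                if (item == null) continue;   // nothing yet, keep polling
                if (item < 0) break;          // sentinel: normal shutdown
                processed++;
            }
        } catch (InterruptedException e) {
            // Catching here is the point of the fix: restore the flag so
            // callers can still observe the interrupt, then fall through
            // to cleanup instead of dying with an uncaught error.
            Thread.currentThread().interrupt();
        }
        return processed;
    }

    public static void main(String[] args) {
        BlockingQueue<Integer> q =
            new LinkedBlockingQueue<>(Arrays.asList(1, 2, -1));
        System.out.println(drainUntilInterrupted(q)); // prints 2
    }
}
```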






[jira] [Updated] (SOLR-8708) DaemonStream should catch InterruptedException when reading underlying stream.

2016-02-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8708:
-
Fix Version/s: 6.0

> DaemonStream should catch InterruptedException when reading underlying stream.
> --
>
> Key: SOLR-8708
> URL: https://issues.apache.org/jira/browse/SOLR-8708
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
>
> Currently the DaemonStream is only catching IOException when reading from the 
> underlying stream. This causes the DaemonStream to not shutdown properly. 
> Jenkins failures look like this:
> (Stack trace identical to the one quoted in the issue description above; elided.)






[jira] [Updated] (SOLR-8708) DaemonStream should catch InterruptedException when reading underlying stream.

2016-02-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8708:
-
Priority: Critical  (was: Major)

> DaemonStream should catch InterruptedException when reading underlying stream.
> --
>
> Key: SOLR-8708
> URL: https://issues.apache.org/jira/browse/SOLR-8708
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Critical
> Fix For: 6.0
>
>
> Currently the DaemonStream is only catching IOException when reading from the 
> underlying stream. This causes the DaemonStream to not shutdown properly. 
> Jenkins failures look like this:
> (Stack trace identical to the one quoted in the issue description above; elided.)






[jira] [Assigned] (SOLR-8708) DaemonStream should catch InterruptedException when reading underlying stream.

2016-02-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8708:


Assignee: Joel Bernstein

> DaemonStream should catch InterruptedException when reading underlying stream.
> --
>
> Key: SOLR-8708
> URL: https://issues.apache.org/jira/browse/SOLR-8708
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
>
> Currently the DaemonStream is only catching IOException when reading from the 
> underlying stream. This causes the DaemonStream to not shutdown properly. 
> Jenkins failures look like this:
> (Stack trace identical to the one quoted in the issue description above; elided.)






[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154775#comment-15154775
 ] 

Mark Miller commented on SOLR-445:
--

Huh? What does SOLR-8633 have to do with calling setException?

I'd say it fits right here. Here is where it's talked about, here is where it's 
changed in a patch...

> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="bad_date">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  
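The "log a message/return a code and continue" option (Option 2 above) can be sketched with a toy batch loop. This is a hypothetical illustration, not Solr's update handler API: each document is validated independently, failures are collected for the caller, and the remaining documents are still added.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class TolerantBatchSketch {
    // Processes a batch without aborting at the first bad document:
    // failures are recorded and returned, the rest are still indexed.
    // The substring check stands in for real field validation.
    static List<String> addBatch(List<String> docs, List<String> index) {
        List<String> failed = new ArrayList<>();
        for (String doc : docs) {
            if (doc.contains("BAD")) {
                failed.add(doc);     // reported back to the caller
            } else {
                index.add(doc);      // doc 3 still gets indexed
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<String> index = new ArrayList<>();
        List<String> failed =
            addBatch(Arrays.asList("1", "I_AM_A_BAD_DATE", "3"), index);
        System.out.println("indexed=" + index + " failed=" + failed);
    }
}
```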






[jira] [Commented] (SOLR-8640) CloudSolrClient does not send the credentials set in the UpdateRequest

2016-02-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154755#comment-15154755
 ] 

Noble Paul commented on SOLR-8640:
--

It's fixed in 6.0 and 5.5.

Unfortunately the CHANGES section does not have it for 5.5.

> CloudSolrClient does not send the credentials set in the UpdateRequest
> --
>
> Key: SOLR-8640
> URL: https://issues.apache.org/jira/browse/SOLR-8640
> Project: Solr
>  Issue Type: Bug
>  Components: security, SolrJ
>Affects Versions: 5.4
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8640.patch
>
>
> CloudSolrClient copies the UpdateRequest, but not the credentials. So 
> BasicAuth does not work if you use CloudSolrClient.
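The bug pattern is easy to model in isolation. These are toy stand-ins, not SolrJ's actual internals: a field-by-field copy that forgets the credentials, and the fixed copy that carries them over.

```java
class RequestCopySketch {
    // Toy stand-in for a request object; names are illustrative.
    static final class Request {
        final String body;
        String user, password;       // BasicAuth credentials
        Request(String body) { this.body = body; }
        void setCredentials(String u, String p) { user = u; password = p; }
    }

    // The bug pattern: the copy forgets the credentials, so the
    // server sees an unauthenticated request.
    static Request copyWithoutCredentials(Request in) {
        return new Request(in.body);
    }

    // The fix pattern: carry the credentials over with the copy.
    static Request copyWithCredentials(Request in) {
        Request out = new Request(in.body);
        out.setCredentials(in.user, in.password);
        return out;
    }
}
```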






[jira] [Assigned] (SOLR-8633) DistributedUpdateProcess.processCommit calls finish() - violates lifecycle, causes finish to be called twice (redundent code execution)

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8633:
-

Assignee: Mark Miller

> DistributedUpdateProcess.processCommit calls finish() - violates lifecycle, 
> causes finish to be called twice (redundent code execution)
> ---
>
> Key: SOLR-8633
> URL: https://issues.apache.org/jira/browse/SOLR-8633
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Attachments: SOLR-8633.patch
>
>
> trying to wrap my head around a weird bug in my experiements with SOLR-445, i 
> realized that {{DUP.processDelete}} has a direct call to {{finish()}}.
> This violates the normal lifecycle of an UpdateProcessor (finish is only 
> suppose to be called exactly once after processing any/all UpdateCommands) 
> and could potentially break any UpdateProcessors configured after DUP (or in 
> my case: processors configured _before_ DUP that expect to be in charge of 
> calling finish, and catching any resulting exceptions, as part of the normal 
> life cycle)
> Independent of how it impacts other update processors, this also means that:
> # all the logic in {{DUP.doFinish}} is getting executed twice -- which seems 
> kind of expensive/dangerous to me since there is leader initiated recovery 
> involved in this method
> # {{SolrCmdDistributor.finish()}} gets called twice, which means 
> {{StreamingSolrClients.shutdown()}} gets called twice, which means 
> {{ConcurrentUpdateSolrClient.close()}} gets called twice ... it seems like 
> we're just getting really lucky that (as configured by DUP) all of these 
> resources are still usable after being finished/shutdown/closed
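The straightforward fix is presumably to drop the extra finish() call, but one defensive pattern against the double-execution hazard described here is a compare-and-set guard that makes the expensive shutdown path run at most once. A hypothetical sketch, not Solr's actual code:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

class FinishOnceSketch {
    private final AtomicBoolean finished = new AtomicBoolean(false);
    final AtomicInteger shutdowns = new AtomicInteger(0);

    // compareAndSet succeeds exactly once, so the expensive shutdown
    // work (the doFinish()/shutdown()/close() chain in the description)
    // cannot run twice even if a caller violates the lifecycle.
    void finish() {
        if (!finished.compareAndSet(false, true)) {
            return;                   // redundant call becomes a no-op
        }
        shutdowns.incrementAndGet();  // stands in for the real cleanup
    }
}
```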






[jira] [Updated] (SOLR-8640) CloudSolrClient does not send the credentials set in the UpdateRequest

2016-02-19 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8640:
-
Fix Version/s: 6.0

> CloudSolrClient does not send the credentials set in the UpdateRequest
> --
>
> Key: SOLR-8640
> URL: https://issues.apache.org/jira/browse/SOLR-8640
> Project: Solr
>  Issue Type: Bug
>  Components: security, SolrJ
>Affects Versions: 5.4
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
> Attachments: SOLR-8640.patch
>
>
> CloudSolrClient copies the UpdateRequest, but not the credentials. So 
> BasicAuth does not work if you use CloudSolrClient.






[jira] [Resolved] (SOLR-8695) Consistent process(WatchedEvent) handling

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8695.
---
   Resolution: Fixed
Fix Version/s: master

Thanks Scott!

> Consistent process(WatchedEvent) handling
> -
>
> Key: SOLR-8695
> URL: https://issues.apache.org/jira/browse/SOLR-8695
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>Priority: Minor
>  Labels: easy, patch
> Fix For: master
>
> Attachments: SOLR-8695.patch
>
>
> Audited implementations of process(WatchedEvent) for consistency in treatment 
> of connection state events, and comment.  This does NOT include fixes for 
> DistributedMap/DistributedQueue.  See SOLR-8694.






[jira] [Commented] (SOLR-8694) DistributedMap/Queue simplifications and fixes.

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154745#comment-15154745
 ] 

ASF subversion and git services commented on SOLR-8694:
---

Commit 32fbca6ea7b65043041e622660e07915f04090fe in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=32fbca6 ]

SOLR-8694: DistributedMap/Queue can create too many Watchers and some code 
simplification.


> DistributedMap/Queue simplifications and fixes.
> ---
>
> Key: SOLR-8694
> URL: https://issues.apache.org/jira/browse/SOLR-8694
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability
> Fix For: master
>
> Attachments: SOLR-8694.patch
>
>
> Bugfix in DistributedQueue, it could add too many watchers since it assumed 
> the watcher was cleared on connection events.
> Huge simplification to DistributedMap; it implemented a lot of tricky stuff 
> that no one is actually using.
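The watcher bug boils down to one rule, sketched here with a toy event type that mirrors ZooKeeper's Watcher.Event.EventType (this is an illustration, not actual Solr code): connection-state events arrive with type None and do not consume the watch, so re-registering on them stacks duplicate watchers; only genuine node events should trigger a re-watch.

```java
class WatcherEventSketch {
    // Mirrors ZooKeeper's Watcher.Event.EventType values.
    enum EventType { None, NodeCreated, NodeDeleted,
                     NodeDataChanged, NodeChildrenChanged }

    // Connection-state events (type None) do NOT consume the watch;
    // re-registering on them is the "too many watchers" bug. Only real
    // node events mean the one-shot watch fired and needs replacing.
    static boolean shouldRewatch(EventType type) {
        return type != EventType.None;
    }
}
```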






[jira] [Resolved] (SOLR-8694) DistributedMap/Queue simplifications and fixes.

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8694.
---
   Resolution: Fixed
Fix Version/s: master

Thanks Scott!

> DistributedMap/Queue simplifications and fixes.
> ---
>
> Key: SOLR-8694
> URL: https://issues.apache.org/jira/browse/SOLR-8694
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability
> Fix For: master
>
> Attachments: SOLR-8694.patch
>
>
> Bugfix in DistributedQueue, it could add too many watchers since it assumed 
> the watcher was cleared on connection events.
> Huge simplification to DistributedMap; it implemented a lot of tricky stuff 
> that no one is actually using.






[jira] [Commented] (SOLR-8695) Consistent process(WatchedEvent) handling

2016-02-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154746#comment-15154746
 ] 

ASF subversion and git services commented on SOLR-8695:
---

Commit e30d638c51f9c6cf9d462741d05e91302ff4b56d in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e30d638 ]

SOLR-8695: Ensure ZK watchers are not triggering our watch logic on connection 
events and make this handling more consistent.


> Consistent process(WatchedEvent) handling
> -
>
> Key: SOLR-8695
> URL: https://issues.apache.org/jira/browse/SOLR-8695
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>Priority: Minor
>  Labels: easy, patch
> Fix For: master
>
> Attachments: SOLR-8695.patch
>
>
> Audited implementations of process(WatchedEvent) for consistency in treatment 
> of connection state events, and comment.  This does NOT include fixes for 
> DistributedMap/DistributedQueue.  See SOLR-8694.






[jira] [Commented] (SOLR-8696) Optimize overseer + startup

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154744#comment-15154744
 ] 

Mark Miller commented on SOLR-8696:
---

I made a run at removing legacyMode before the 6 release. While it was 
pretty easy to remove from non-test code, it requires a really large change to 
the tests to move away from it.

> Optimize overseer + startup
> ---
>
> Key: SOLR-8696
> URL: https://issues.apache.org/jira/browse/SOLR-8696
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, performance, solrcloud, startup
> Attachments: SOLR-8696.patch
>
>
> ZkController.publishAndWaitForDownStates() occurs before overseer election.  
> That means if there is currently no overseer, there is ironically no one to 
> actually service the down state changes it's waiting on.  This particularly 
> affects a single-node cluster such as you might run locally for development.
> Additionally, we're doing an unnecessary ZkStateReader forced refresh on all 
> Overseer operations.  This isn't necessary because ZkStateReader keeps itself 
> up to date.






[jira] [Commented] (SOLR-8697) Fix LeaderElector issues

2016-02-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15154740#comment-15154740
 ] 

Mark Miller commented on SOLR-8697:
---

bq. TBH, the code is pretty hard to follow in its existing form

Yup. It was mildly hairy in its first form (copying the ZK recipe as described) 
and took a while to harden. Then some contributions came that just made it 
insane to follow. I've brought it up before: instead of trying to avoid 
thundering-herd issues with what will be a reasonably low number of replicas 
trying to be leader, we probably should just have very simple leader elections. 
All of the original logic, and the logic that was added that made it really 
hard for me to follow, would be really simple if we gave up the cool, elegant 
approach we used to avoid a mostly nonexistent thundering-herd issue. That 
thicket is just a ripe breeding ground for random bugs our tests don't 
easily expose.

At this point, though, the effort to change it reliably is probably really high.

> Fix LeaderElector issues
> 
>
> Key: SOLR-8697
> URL: https://issues.apache.org/jira/browse/SOLR-8697
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>  Labels: patch, reliability, solrcloud
> Attachments: SOLR-8697.patch
>
>
> This patch is still somewhat WIP for a couple of reasons:
> 1) Still debugging test failures.
> 2) This will need more scrutiny from knowledgeable folks!
> There are some subtle bugs with the current implementation of LeaderElector, 
> best demonstrated by the following test:
> 1) Start up a small single-node solrcloud.  it should be become Overseer.
> 2) kill -9 the solrcloud process and immediately start a new one.
> 3) The new process won't become overseer.  The old process's ZK leader elect 
> node has not yet disappeared, and the new process fails to set appropriate 
> watches.
> NOTE: this is only reproducible if the new node is able to start up and join 
> the election quickly.
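For reference, in the ZK election recipe the description alludes to, each participant watches only its immediate predecessor among the sorted sequence nodes; the reported bug arises when setting that watch races with the old node's deletion. A toy sketch of just the predecessor computation (illustrative names, not Solr's LeaderElector):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class ElectionWatchSketch {
    // Given the election's sequence nodes, return the node `me` should
    // watch (its immediate predecessor in sorted order), or null if
    // `me` holds the lowest sequence number and is therefore leader.
    static String predecessorToWatch(List<String> seqNodes, String me) {
        List<String> sorted = new ArrayList<>(seqNodes);
        Collections.sort(sorted);
        int idx = sorted.indexOf(me);
        return idx <= 0 ? null : sorted.get(idx - 1);
    }
}
```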






[jira] [Created] (LUCENE-7038) QueryScorer.init returns empty TokenStream if setMaxDocCharsToAnalyze is not previously called

2016-02-19 Thread Jeff Stein (JIRA)
Jeff Stein created LUCENE-7038:
--

 Summary: QueryScorer.init returns empty TokenStream if 
setMaxDocCharsToAnalyze is not previously called
 Key: LUCENE-7038
 URL: https://issues.apache.org/jira/browse/LUCENE-7038
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Affects Versions: 5.4
Reporter: Jeff Stein
Priority: Minor


This is a regression since Lucene 4.10 in the QueryScorer class in the 
Highlighter module.

In 4.10, the `QueryScorer.init` method returns a working tokenStream even if 
the maxCharsToAnalyze variable is set to zero. In both versions, zero is the 
default value and in 4.10 it indicated that the entire stream should be 
returned, not an empty stream.

The problem is that `WeightedSpanTermExtractor` always wraps the 
tokenStream in an `OffsetLimitTokenFilter`, even when the passed-down 
maxDocCharsToAnalyze variable is zero.
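The described fix amounts to treating a zero limit as "no limit" and skipping the wrapper, restoring the 4.10 behavior. A toy sketch of that rule, using plain strings instead of TokenStreams (not the actual Highlighter code):

```java
class CharLimitSketch {
    // The regression: the limiting wrapper was applied unconditionally,
    // so a limit of 0 truncated everything. Pre-4.10 semantics treat 0
    // as "no limit", i.e. skip the wrapper entirely.
    static String limitChars(String text, int maxChars) {
        if (maxChars <= 0) {
            return text;              // 0 = analyze the whole stream
        }
        return text.substring(0, Math.min(maxChars, text.length()));
    }
}
```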






[jira] [Assigned] (SOLR-8694) DistributedMap/Queue simplifications and fixes.

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8694:
-

Assignee: Mark Miller

> DistributedMap/Queue simplifications and fixes.
> ---
>
> Key: SOLR-8694
> URL: https://issues.apache.org/jira/browse/SOLR-8694
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>  Labels: patch, reliability
> Attachments: SOLR-8694.patch
>
>
> Bugfix in DistributedQueue, it could add too many watchers since it assumed 
> the watcher was cleared on connection events.
> Huge simplification to DistributedMap; it implemented a lot of tricky stuff 
> that no one is actually using.






[jira] [Assigned] (SOLR-8695) Consistent process(WatchedEvent) handling

2016-02-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8695:
-

Assignee: Mark Miller

> Consistent process(WatchedEvent) handling
> -
>
> Key: SOLR-8695
> URL: https://issues.apache.org/jira/browse/SOLR-8695
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Mark Miller
>Priority: Minor
>  Labels: easy, patch
> Attachments: SOLR-8695.patch
>
>
> Audited implementations of process(WatchedEvent) for consistency in treatment 
> of connection state events, and comment.  This does NOT include fixes for 
> DistributedMap/DistributedQueue.  See SOLR-8694.





